CN116758048B - PET/CT tumor periphery feature extraction system and extraction method based on Transformer - Google Patents


Info

Publication number
CN116758048B
CN116758048B
Authority
CN
China
Prior art keywords
module
tumor
features
image
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310822439.1A
Other languages
Chinese (zh)
Other versions
CN116758048A (en)
Inventor
Yang Kun (杨昆)
Song Jie (宋杰)
Liu Kun (刘琨)
Liu Shuang (刘爽)
Xue Linyan (薛林雁)
Yu Haiyun (于海韵)
Li Lehua (李乐华)
Wei Yunke (魏韵珂)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University
Original Assignee
Hebei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University
Priority to CN202310822439.1A
Publication of CN116758048A
Application granted
Publication of CN116758048B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Abstract

The invention discloses a Transformer-based PET/CT tumor periphery feature extraction system and method. The extraction system comprises a data preprocessing module, a tumor edge gradual change feature extraction module, a tumor regional feature weighting module, and a model training and verification optimization module. The data preprocessing module preprocesses the sample images. The tumor edge gradual change feature extraction module extracts longitudinal and tangential features from long-side and narrow-side strips cut out of a sample image, and integrates the change features of the tumor tissue in the image. The tumor regional feature weighting module adds regional feature weighting coefficients through convolution and feeds the features extracted by the tumor edge gradual change feature extraction module into the backbone network as supplementary features. The model training and verification optimization module tests and verifies the performance of the model to obtain the final optimized model. The invention effectively solves the problem that tumor sub-abnormal regions are difficult to observe on PET/CT images.

Description

PET/CT tumor periphery feature extraction system and extraction method based on Transformer
Technical Field
The invention relates to an image processing method, and in particular to a Transformer-based PET/CT tumor periphery feature extraction system and method.
Background
Currently, 18F-FDG (an 18F-labeled glucose analog) is the tracer most commonly used in PET/CT scanning. Malignant tumors proliferate abnormally and must increase glucose uptake and glycolysis to maintain the energy supply of their cells, so different types of tumors show different degrees of glucose uptake on glycometabolic images. When the tracer decays and annihilates in the patient, a pair of 511 keV gamma photons is emitted in essentially opposite directions; the detector collects the position and time at which these photons reach the crystal, and an image reconstruction algorithm reconstructs and post-processes the collected information to recover the metabolism and uptake of the tracer in the patient. PET imaging can therefore reflect metabolic heterogeneity early and quantitatively, providing assistance for clinical diagnosis.
On a PET/CT image, the clearly highlighted region visible to the naked eye is the abnormal region of the tumor, and a gradual transition region lies between it and the surrounding normal tissue. This transition region is also known as the tumor sub-abnormal region. Observed directly on the image, the sub-abnormal region shows no obvious abnormal characteristics, yet it does exhibit abnormal characteristics statistically.
The sub-abnormal region is a transition zone with blurred boundaries whose image values lie between those of the normal and abnormal regions, so its true boundary is difficult to distinguish from the visual image morphology, and its attribution is hard to determine by direct naked-eye observation. Nevertheless, the pixel data of the sub-abnormal region differ considerably and follow a certain pattern of variation; this pattern is simply difficult for the naked eye to perceive.
Disclosure of Invention
The invention aims to provide a Transformer-based PET/CT tumor periphery feature extraction system and method that solve the problem that tumor sub-abnormal regions are difficult to observe on PET/CT images.
The purpose of the invention is realized in the following way:
A Transformer-based PET/CT peritumor feature extraction system, comprising:
the data preprocessing module is used for preprocessing sample images in the PET/CT image sample data set, including anonymization, data cleaning, data set labeling and data enhancement;
the tumor edge gradual change feature extraction module is used for extracting tangential features of images from narrow-side images cut out of sample images, extracting longitudinal features of images from long-side images cut out of the sample images and integrating change features of tumor tissues in the sample images;
the tumor regional characteristic weighting module is used for adding regional characteristic weighting coefficients through a convolution method, integrating tumor characteristics with stronger regional property in the image, and integrating the characteristics extracted by the tumor edge gradual change characteristic extraction module into a backbone network as added characteristics; and
and the model training and verifying optimization module is used for testing and verifying the performance of the model focusing on different subregion characteristics inside the tumor so as to obtain a final optimized model.
Further, the tumor edge gradual change feature extraction module comprises:
the 1×3 convolution feature extraction module, which is used for extracting tangential features of the image from the cut narrow-side image; and
the self-attention feature extraction module, which is used for extracting longitudinal features of the image from the cut long-side image.
Further, the tumor regional characteristic weighting module comprises:
the parallel branch feature fusion module is used for fusing the features extracted by the tumor edge gradual change feature extraction module into a backbone network;
the convolution region feature extraction module is used for integrating tumor features with stronger regional property in tumors; and
and the attention weighting calculation module is used for converting the tumor features extracted by the region feature extraction module into region weights and applying the region weights to the attention calculation process.
The object of the invention is also achieved as follows:
A Transformer-based PET/CT peritumor feature extraction method, comprising the following steps:
S0, setting up the above PET/CT tumor periphery feature extraction system.
S1, carrying out preprocessing operations including anonymization, data cleaning, data set labeling and data enhancement on sample images in a PET/CT image sample data set by utilizing a data preprocessing module.
S2, utilizing a tumor edge gradual change feature extraction module to extract tangential features of images from narrow-side images cut out of the sample images, extracting longitudinal features of images from long-side images cut out of the sample images, and integrating change features of tumor tissues in the sample images.
S3, adding regional feature weighting coefficients by using the tumor regional characteristic weighting module through a convolution method, integrating tumor features with stronger regional property in the image, and fusing the features extracted by the tumor edge gradual change feature extraction module into the backbone network as supplementary features.
S4, training a model focusing on different sub-region features inside the tumor by using the model training and verification optimization module and adopting a gradient descent method.
S5, performing internal evaluation and optimization of the trained model's performance on an internal test set by using the model training and verification optimization module.
S6, verifying and optimizing the internally evaluated and optimized model by using the model training and verification optimization module to obtain the final optimized model.
Further, the tumor edge gradual change feature extraction module comprises a 1×3 convolution feature extraction module and a self-attention feature extraction module; the 1×3 convolution feature extraction module is used for extracting tangential features of the image from the cut narrow-side image, and the self-attention feature extraction module is used for extracting longitudinal features of the image from the cut long-side image.
Further, the metrics used in the internal evaluation and optimization of step S5 include: classification accuracy, precision, recall, F1 score, area under the receiver operating characteristic (ROC) curve, and confusion matrix.
Further, the verification and optimization of step S6 collects PET/CT sample images from different hospitals as test samples, verifies the model to determine its accuracy, recall, precision and ROC-curve performance indices, and analyzes the classification differences the model exhibits on data from different hospitals, thereby optimizing the model.
By combining attention and convolution, the invention mines the easily overlooked tumor edge features in the image through the change relationships between different sub-abnormal regions of the tumor, thereby making more effective use of the characteristics of the different tumor sub-abnormal regions on the PET/CT image and better solving the problem that tumor sub-abnormal regions are difficult to observe on PET/CT images.
Drawings
FIG. 1 is a flow chart of the PET/CT peritumoral feature extraction method of the present invention.
Fig. 2 is a structure diagram of the Branch Block in the tumor edge gradual change feature extraction module.
Fig. 3 is a structure diagram of the Trunk Block in the tumor edge gradual change feature extraction module.
Fig. 4 is a diagram of the Conv-Attention structure in the tumor edge gradual change feature extraction module.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, the PET/CT peritumoral feature extraction system comprises a data preprocessing module, a tumor edge gradual change feature extraction module, a tumor regional feature weighting module and a model training and verification optimization module. The data preprocessing module is used for preprocessing sample images in the PET/CT image sample data set, including anonymization, data cleaning, data set labeling and data enhancement. The tumor edge gradual change feature extraction module is used for extracting tangential features of images from narrow-side images cut out of sample images, extracting longitudinal features of images from long-side images cut out of the sample images, and integrating change features of tumor tissues in the sample images. The tumor regional characteristic weighting module is used for adding regional characteristic weighting coefficients through a convolution method, integrating tumor characteristics with stronger regional characteristics in the image, and integrating the characteristics extracted by the tumor edge gradual change characteristic extraction module into a backbone network as added characteristics. The model training and verifying optimization module is used for testing and verifying the performance of the model focusing on different subregion features inside the tumor so as to obtain a final optimized model.
The tumor edge gradual change feature extraction module comprises a 1×3 convolution feature extraction module and a self-attention feature extraction module. The 1×3 convolution feature extraction module is designed for the narrow-side features of high-aspect-ratio strips cut from the original image and extracts tangential features from the cut image. The self-attention feature extraction module is designed for the long-side features of such strips and extracts longitudinal features, which span different regions of the tumor.
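By way of illustration, the two extractors described above might be sketched in PyTorch as follows; the class names, channel sizes and head count are assumptions made here, not details given in the patent.

```python
import torch
import torch.nn as nn

class TangentialConv(nn.Module):
    """1x3 convolution over a narrow strip cut along the tumor edge."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # kernel (1, 3): mixes neighboring pixels along the tangential axis
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) with W the tangential direction of the strip
        return self.conv(x)

class LongitudinalAttention(nn.Module):
    """Self-attention over tokens ordered along the long axis of a strip
    running from normal tissue, across the boundary, into the tumor."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim); attention relates tokens from different
        # tumor sub-regions along the strip
        t = self.norm(tokens)
        out, _ = self.attn(t, t, t)
        return tokens + out
```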
As shown in fig. 1, the tumor edge gradual change feature extraction module uses a coarse-to-fine multi-scale image Transformer as the backbone network to extract features at different scales. This part is divided into four stages, each comprising a Trunk Block module and a Branch Block module. Each stage downsamples the image features to a lower resolution, forming a sequence of three successively lower resolutions. Images are classified by a linear projection of the cls token.
As shown in fig. 2, the Branch Block module is designed to extract the correlation of gradual feature change between different tumor sub-environments in an image. Existing research shows that tumor sub-regions in different states exist within a tumor. Aiming at the image features that the different sub-regions may present, the invention designs a module that spans different sub-environments and integrates their feature changes, namely the Branch Block. In traditional neural networks, both the window of a ViT and the convolution kernel of a CNN are essentially square: if the window or kernel is too small, the model cannot adequately attend to the changing features across the different tumor microenvironments, but if the window is too large, the amount of computation increases dramatically and slows the model down. To resolve this trade-off, the invention designs the Branch Block module to supplement the wide-field features that a square window cannot see, while avoiding the tedious operation of integrating multi-scale information from different stages.
As shown in fig. 3, the Trunk Block module designed by the invention sits in the backbone of the whole network; it extracts the features of the whole image and performs attention calculation with the features extracted by the Branch Block module.
The tumor regional characteristic weighting module comprises a parallel branch feature fusion module, an attention weighting calculation module and a convolution region feature extraction module. The parallel branch feature fusion module fuses the features extracted by the tumor edge gradual change feature extraction module into the backbone network. The convolution region feature extraction module integrates tumor features with stronger regional property within the tumor. The attention weighting calculation module converts the tumor features extracted by the convolution region feature extraction module into region weights and applies them in the attention calculation process.
As shown in FIG. 1, the PET/CT peritumor feature extraction method comprises the following steps:
S0, setting up the above PET/CT tumor periphery feature extraction system.
S1, respectively carrying out various preprocessing operations including anonymization, data cleaning, data set labeling and data enhancement on sample images in the constructed PET/CT image sample data set by utilizing a data preprocessing module.
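A minimal sketch of the preprocessing and data-enhancement step S1, assuming torchvision is used; the specific transforms and their magnitudes are illustrative choices, not prescribed by the patent.

```python
from torchvision import transforms as T

# Augmentation on the PIL slice (transform choices and magnitudes assumed)
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=10),
])

# Conversion and normalization of a single-channel PET/CT slice
preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),
])
```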
S2, utilizing a tumor edge gradual change feature extraction module to extract tangential features of images from narrow-side images cut out of the sample images, extracting longitudinal features of images from long-side images cut out of the sample images, and integrating change features of tumor tissues in the sample images.
The tumor edge gradual change feature extraction module comprises a 1×3 convolution feature extraction module and a self-attention feature extraction module. The 1×3 convolution feature extraction module extracts tangential features of the image from the cut narrow-side image, and the self-attention feature extraction module extracts longitudinal features of the image from the cut long-side image.
S3, adding regional feature weighting coefficients by using the tumor regional characteristic weighting module through a convolution method, integrating tumor features with stronger regional property in the image, and fusing the features extracted by the tumor edge gradual change feature extraction module into the backbone network as supplementary features, so that they take part in the attention calculation without affecting the transmission of the original features in the backbone network.
S4, training a model focusing on different sub-region features inside the tumor by using the model training and verification optimization module and adopting a gradient descent method.
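Step S4 specifies only training by a gradient descent method; a minimal loop consistent with that description might look as follows (the optimizer, learning rate and loss function are assumptions).

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-3, device="cuda"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()   # backpropagate gradients
            opt.step()        # gradient-descent update
    return model
```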
S5, performing internal evaluation and optimization of the trained model's performance on an internal test set by using the model training and verification optimization module. The metrics used in the internal evaluation and optimization include: classification accuracy, precision, recall, F1 score, area under the receiver operating characteristic (ROC) curve, and confusion matrix.
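These step-S5 metrics could be computed with scikit-learn as in the sketch below; binary labels and the default averaging are assumed.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

def evaluate(y_true, y_pred, y_score):
    """y_pred: hard labels; y_score: positive-class probability."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
        "auc":       roc_auc_score(y_true, y_score),  # area under ROC curve
        "confusion": confusion_matrix(y_true, y_pred),
    }
```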
S6, verifying and optimizing the internally evaluated and optimized model by using the model training and verification optimization module to obtain the final optimized model. The verification and optimization collects PET/CT sample images from different hospitals as test samples, verifies the model to determine its accuracy, recall, precision and ROC-curve performance indices, and analyzes the classification differences the model exhibits on data from different hospitals, thereby optimizing the model.
As shown in fig. 1, the tumor edge gradual change feature extraction module takes a multi-scale image Transformer as the backbone network for extracting features at different scales. Each of the four stages comprises a Trunk Block module and a Branch Block module, running in parallel. The two modules receive strip-shaped image blocks cut from the original image in the longitudinal and transverse directions, each strip running from normal tissue across the tumor boundary into the tumor interior. Feature extraction is performed by the Branch Block module, and the extracted features are copied into two parts, one of which enters the Trunk Block module to participate in the attention calculation of the backbone network. To match the shape of the feature map in each Trunk Block module, the same image-feature downsampling as in each stage is applied, forming the same sequence of three resolutions.
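The wiring of one such stage — a Branch Block in parallel with a Trunk Block, the branch output duplicated, and the stage output downsampled for the next resolution — might be sketched as follows; the module interfaces and the downsampling operator are assumptions.

```python
import torch.nn as nn

class Stage(nn.Module):
    """One of the four backbone stages (interfaces are assumed)."""
    def __init__(self, trunk_block: nn.Module, branch_block: nn.Module,
                 dim: int, out_dim: int):
        super().__init__()
        self.trunk = trunk_block     # whole-image tokens + cls token
        self.branch = branch_block   # boundary-crossing strip tokens
        # strided conv halves the resolution for the next stage
        self.down = nn.Conv2d(dim, out_dim, kernel_size=2, stride=2)

    def forward(self, x, strip):
        branch_out = self.branch(strip)   # copy 1 joins trunk attention
        x = self.trunk(x, branch_out)
        return self.down(x), branch_out   # copy 2 feeds the next Branch Block
```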
As shown in fig. 2, the Branch Block module extracts the correlation of gradual feature change between different tumor sub-environments in the image. A long strip-shaped image block is cut from the image; tangential features are extracted by a 3×1 convolution and integrated by a 1×1 convolution. In Patch_Embed, the features after the 1×1 convolution are divided into tokens using a window of length Pc (the patch size) and width 1, and multi-head attention calculation is then performed. That is, Patch_Embed applies, to the feature map that has passed through the 3×1 and 1×1 convolutions, a convolution whose kernel has length Pc and width 1, followed by a LayerNorm (LN) operation.
In the encoder, MSA is the multi-head self-attention of the ViT applied to the feature map processed by Patch_Embed, LN is the LayerNorm operation, and the MLP is the same as the MLP in the ViT, taking the output of Patch_Embed as its token input.
The features extracted by the encoder are copied into two parts: one part is sent into the Trunk Block for attention calculation with the features extracted there, and the other part, in the form of a feature map, participates in the operations of the next Branch Block.
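Assembling the Branch Block operations just described (3×1 convolution, 1×1 integration, strip Patch_Embed with a Pc×1 kernel, LayerNorm, multi-head self-attention) gives a sketch like the following; the dimensions and the residual arrangement are assumptions.

```python
import torch
import torch.nn as nn

class BranchBlock(nn.Module):
    """Sketch: 3x1 conv -> 1x1 conv -> strip patch embed (Pc x 1) -> MHSA."""
    def __init__(self, in_ch: int, dim: int, pc: int, heads: int = 4):
        super().__init__()
        self.conv3x1 = nn.Conv2d(in_ch, dim, kernel_size=(3, 1), padding=(1, 0))
        self.conv1x1 = nn.Conv2d(dim, dim, kernel_size=1)
        # Patch_Embed: a Pc x 1 kernel slices the strip into tokens
        self.patch_embed = nn.Conv2d(dim, dim, kernel_size=(pc, 1),
                                     stride=(pc, 1))
        self.norm = nn.LayerNorm(dim)   # LN after Patch_Embed, as described
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, strip: torch.Tensor) -> torch.Tensor:
        # strip: (B, C, L, 1), a longitudinal cut crossing the tumor boundary
        f = self.conv1x1(self.conv3x1(strip))
        tokens = self.patch_embed(f).flatten(2).transpose(1, 2)  # (B, N, dim)
        t = self.norm(tokens)
        out, _ = self.attn(t, t, t)          # multi-head self-attention
        return tokens + out                  # duplicated downstream: one copy
                                             # to the Trunk Block, one onward
```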
As shown in fig. 3, the Trunk Block module, which sits in the backbone of the entire network, extracts the features of the whole image and performs attention calculation with the features extracted by the Branch Block module. First, the feature map from the previous stage (or the input image) is fed into Patch_Embed, which converts the h×w×c tensor into tokens; a special cls token is appended to the converted tokens and is ultimately passed to the linear layer to realize the downstream image task. If Patch_Embed outputs N tokens, then after the cls token is appended, N+1 tokens are input to the Encoder.
With the N+1 tokens as encoder input, the tokens are first position-encoded by a CPE which, like the absolute position encoding used in most image Transformers, inserts the positional relationship directly into the input image features. Before each Factorized kp Attention module, the tokens from Patch_Embed are reshaped into an h×w×c tensor; this feature is input to a depthwise convolution for position-information encoding, and after encoding the tensor is reshaped back to N×c. Throughout the position-encoding process, only the image tokens participate; the cls token does not.
After the CPE processes the token features, a norm operation is performed. The processed feature map and the feature maps of the two Branch Blocks are input to Factorized kp Attention for attention calculation; the result is residually connected to the features before the CPE, and the standard ViT feed-forward is applied once. As shown in fig. 4, the feature map output by Conv-Attention is processed by norm and MLP and residually connected to the Conv-Attention output. The cls token in the output token sequence serves as the basis for classification, while the other image tokens are reshaped into a c×h×w tensor, where c is the number of channels, h the height and w the width, as the input of the next stage.
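The convolutional position encoding described for the Trunk Block — reshape the image tokens to h×w×c, apply a depthwise convolution, reshape back, and leave the cls token untouched — might be sketched as follows (the kernel size and the residual form are assumptions).

```python
import torch
import torch.nn as nn

class ConvPosEnc(nn.Module):
    """Depthwise conv injects position information into the image tokens;
    the cls token bypasses the encoding, as described above."""
    def __init__(self, dim: int, k: int = 3):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, 1 + N, dim), cls token first, N = h * w image tokens
        cls_tok, img = tokens[:, :1], tokens[:, 1:]
        x = img.transpose(1, 2).reshape(-1, img.shape[2], h, w)  # N x c -> c x h x w
        x = self.dw(x).flatten(2).transpose(1, 2)                # back to N x c
        return torch.cat([cls_tok, img + x], dim=1)
```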
The Factorized kp Attention module is designed for the attention calculation between features from the Trunk Block module and the Branch Block module. The features from the Branch Block module are concatenated with the features in the Trunk Block module to serve as K and V in the attention calculation, while the features in the Trunk Block module serve as Q. In this way, the features from the Branch Block module participate only in the self-attention calculation over the Trunk Block features, not in the delivery of features through the backbone network. The wider-field features of the Branch Block are thus well combined with the Trunk Block features in the attention calculation without affecting the transmission of the original image features.
This attention module consists of two parts, Factorized Attention and Convolutional Relative Position Encoding. Factorized Attention uses the product KᵀV to approximate the softmax attention map, thereby reducing computational overhead.
The attention module in this invention differs from conventional Factorized Attention in that a weight c, derived from the surroundings of each patch, is added. A multi-head attention mechanism can focus on the link between each patch and the global features, but its ability to integrate local information is relatively weak; meanwhile, because similar features in tumor images exhibit a certain regionality, the invention integrates the regional features of the image by depthwise separable convolution on top of conventional Factorized Attention. Specifically, the c×h×w feature map before each stage is passed through a set of depthwise separable convolutions to extract its regional features, and the resulting feature map is converted into tokens following the Patch_Embed method of the Trunk Block, giving the token map c.
A dot product between the obtained token map c and V computes the feature-correlation weights. These weights are normalized, the normalized result forms a Hadamard product with the Factorized Attention output, and a residual connection is applied so that the global features obtained by Factorized Attention are not affected. Meanwhile, the relative position encoding of Convolutional Relative Position Encoding is used as a supplement to further enhance the regional correlation between different patches after the convolution has promoted it.
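Under the description above, Factorized kp Attention with the regional weight c might be sketched as follows: Q is taken from the trunk tokens, K and V from the concatenation of trunk and branch tokens, softmax(K)ᵀV approximates the attention map, and the token map c gates the result through a normalized Hadamard product with a residual. The alignment of c with the first N value tokens and the normalization choice are assumptions.

```python
import torch
import torch.nn.functional as F

def factorized_kp_attention(q, k, v, c):
    """
    q: (B, H, N, d)   queries from Trunk Block tokens
    k, v: (B, H, M, d) keys/values: trunk tokens concatenated with Branch
          Block tokens (M >= N; trunk tokens assumed to come first)
    c: (B, H, N, d)   regional-weight tokens from depthwise separable conv
    """
    scale = q.shape[-1] ** -0.5
    # Factorized Attention: softmax over key tokens, then K^T V
    context = F.softmax(k, dim=2).transpose(-2, -1) @ v   # (B, H, d, d)
    fact = (q * scale) @ context                          # (B, H, N, d)
    # Regional weight: dot product of c with the trunk-aligned values,
    # normalized into per-token weights (exact form is an assumption)
    w = F.softmax((c * v[:, :, : q.shape[2], :]).sum(-1, keepdim=True), dim=2)
    # Hadamard product with the factorized output plus a residual, so the
    # global features from Factorized Attention are preserved
    return fact + w * fact
```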
The PET/CT peritumor feature extraction method can mine the easily overlooked tumor edge features in an image, and the use of these features provides powerful support for subsequent tasks.

Claims (4)

1. A Transformer-based PET/CT peritumor feature extraction system, comprising:
the data preprocessing module is used for preprocessing sample images in the PET/CT image sample data set, including anonymization, data cleaning, data set labeling and data enhancement;
the tumor edge gradual change feature extraction module is used for extracting tangential features of images from narrow-side images cut out of sample images, extracting longitudinal features of images from long-side images cut out of the sample images and integrating change features of tumor tissues in the sample images;
the tumor regional characteristic weighting module is used for adding regional characteristic weighting coefficients through a convolution method, integrating tumor characteristics with stronger regional property in the image, and integrating the characteristics extracted by the tumor edge gradual change characteristic extraction module into a backbone network as added characteristics; and
the model training and verifying optimization module is used for testing and verifying the model focusing on different subregion characteristics inside the tumor so as to obtain a final optimized model;
the tumor edge gradual change feature extraction module comprises:
the 1×3 convolution feature extraction module is used for extracting tangential features of the image from the cut narrow-side image; and
the self-attention feature extraction module is used for extracting longitudinal features of the image from the cut long-side image;
the tumor edge gradual change feature extraction module takes a Transformer as the backbone network; the Transformer backbone has four stages in total, each comprising a Trunk Block module and a Branch Block module; strip-shaped image blocks, cut from the original image in the longitudinal and transverse directions and running from normal tissue across the tumor boundary into the tumor interior, are respectively input into the Trunk Block module and the Branch Block module, and feature extraction is performed by the Branch Block module; the Branch Block module is used for extracting the correlation of gradual feature change among different tumor sub-environments in the image, and the Trunk Block module is used for extracting the features of the whole image and performing attention calculation with the features extracted by the Branch Block module;
the tumor regional characteristic weighting module comprises:
the parallel branch feature fusion module is used for fusing the features extracted by the tumor edge gradual change feature extraction module into a backbone network;
the convolution region feature extraction module is used for integrating tumor features with stronger regional property in tumors; and
the attention weighting calculation module is used for converting the tumor features extracted by the region feature extraction module into region weights and applying the region weights to the attention calculation process;
the attention calculation of the tumor regional characteristic weighting module proceeds as follows: first, the features from the Branch Block module are concatenated with the features in the Trunk Block module to serve as K and V in the attention calculation, and attention is computed with the features in the Trunk Block module serving as Q; then, the c×h×w feature map before each stage is passed through a set of depthwise separable convolutions to extract its regional features, where c is the number of channels, h the height and w the width; the obtained feature map is converted into tokens to obtain a token map c; a dot product between the token map c and V in the attention calculation computes the feature-correlation weights; finally, the obtained correlation weights are normalized, a Hadamard product is formed between the normalized result and the Factorized Attention output, and a residual connection is applied.
2. A Transformer-based PET/CT peritumor feature extraction method, characterized by comprising the following steps:
S0, setting up the PET/CT peritumor feature extraction system of claim 1;
S1, carrying out preprocessing operations, including anonymization, data cleaning, data set labeling and data enhancement, on the sample images in a PET/CT image sample data set by using the data preprocessing module;
S2, extracting tangential features of images from the narrow-side images cut out of the sample images and longitudinal features of images from the long-side images cut out of the sample images by using the tumor edge gradual change feature extraction module, and integrating the change features of the tumor tissue in the sample images;
S3, adding regional feature weighting coefficients by using the tumor regional characteristic weighting module through a convolution method, integrating tumor features with stronger regional property in the image, and fusing the features extracted by the tumor edge gradual change feature extraction module into the backbone network as supplementary features;
S4, training a model focusing on different sub-region features inside the tumor by using the model training and verification optimization module and adopting a gradient descent method;
S5, performing internal evaluation and optimization of the trained model's performance on an internal test set by using the model training and verification optimization module;
S6, verifying and optimizing the internally evaluated and optimized model by using the model training and verification optimization module to obtain the final optimized model;
the tumor edge gradual change feature extraction module comprises a 1×3 convolution feature extraction module and a self-attention feature extraction module; the 1×3 convolution feature extraction module is used for extracting tangential features of the image from the cut narrow-side image; the self-attention feature extraction module is used for extracting longitudinal features of the image from the cut long-side image;
the tumor edge gradual change feature extraction module takes a Transformer as the backbone network; the Transformer backbone has four stages in total, each comprising a Trunk Block module and a Branch Block module; strip-shaped image blocks, cut from the original image in the longitudinal and transverse directions and running from normal tissue across the tumor boundary into the tumor interior, are respectively input into the Trunk Block module and the Branch Block module, and feature extraction is performed by the Branch Block module; the Branch Block module is used for extracting the correlation of gradual feature change among different tumor sub-environments in the image, and the Trunk Block module is used for extracting the features of the whole image and performing attention calculation with the features extracted by the Branch Block module;
the tumor regional characteristic weighting module comprises:
the parallel branch feature fusion module is used for fusing the features extracted by the tumor edge gradual change feature extraction module into a backbone network;
the convolution region feature extraction module is used for integrating tumor features with stronger regional property in tumors; and
the attention weighting calculation module is used for converting the tumor features extracted by the region feature extraction module into region weights and applying the region weights to the attention calculation process;
the attention calculation of the tumor regional characteristic weighting module proceeds as follows: first, the features from the Branch Block module are concatenated with the features in the Trunk Block module to serve as K and V in the attention calculation, and attention is computed with the features in the Trunk Block module serving as Q; then, the c×h×w feature map before each stage is passed through a set of depthwise separable convolutions to extract its regional features, where c is the number of channels, h the height and w the width; the obtained feature map is converted into tokens to obtain a token map c; a dot product between the token map c and V in the attention calculation computes the feature-correlation weights; finally, the obtained correlation weights are normalized, a Hadamard product is formed between the normalized result and the Factorized Attention output, and a residual connection is applied.
3. The method of claim 2, wherein the metrics used in the internal evaluation and optimization of step S5 include: classification accuracy, precision, recall, F1 score, area under the receiver operating characteristic (ROC) curve, and confusion matrix.
4. The method according to claim 2, wherein the verification and optimization in step S6 collects PET/CT sample images from different hospitals as test samples, verifies the model to determine its accuracy, recall, precision and ROC-curve performance indices, and analyzes the classification differences the model exhibits on data from different hospitals, thereby optimizing the model.
CN202310822439.1A 2023-07-06 2023-07-06 PET/CT tumor periphery feature extraction system and extraction method based on Transformer Active CN116758048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310822439.1A 2023-07-06 2023-07-06 PET/CT tumor periphery feature extraction system and extraction method based on Transformer CN116758048B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310822439.1A 2023-07-06 2023-07-06 PET/CT tumor periphery feature extraction system and extraction method based on Transformer CN116758048B (en)

Publications (2)

Publication Number Publication Date
CN116758048A CN116758048A (en) 2023-09-15
CN116758048B (en) 2024-02-27

Family

ID=87951313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310822439.1A Active CN116758048B (en) 2023-07-06 2023-07-06 PET/CT tumor periphery feature extraction system and extraction method based on Transformer

Country Status (1)

Country Link
CN (1) CN116758048B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023507109A (en) * 2019-12-20 2023-02-21 Genentech, Inc. Automated tumor identification and segmentation from medical images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447969A (en) * 2018-10-29 2019-03-08 北京青燕祥云科技有限公司 Hepatic space occupying lesion recognition methods, device and realization device
CN112686875A (en) * 2021-01-04 2021-04-20 浙江明峰智能医疗科技有限公司 Tumor prediction method of PET-CT image based on neural network and computer readable storage medium
CN114677403A (en) * 2021-11-17 2022-06-28 东南大学 Liver tumor image segmentation method based on deep learning attention mechanism
CN115861181A (en) * 2022-11-09 2023-03-28 复旦大学 Tumor segmentation method and system for CT image
CN116258732A (en) * 2023-02-14 2023-06-13 复旦大学 Esophageal cancer tumor target region segmentation method based on cross-modal feature fusion of PET/CT images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ruiyang Li et al., "DHT-Net: Dynamic Hierarchical Transformer Network for Liver and Tumor Segmentation," IEEE Journal of Biomedical and Health Informatics, vol. 27, no. 7, 2023. *
Hao Xiaoyu et al., "Lung tumor segmentation with a dual-attention 3D U-Net," Journal of Image and Graphics (中国图象图形学报), no. 10, 2020. *

Also Published As

Publication number Publication date
CN116758048A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN113077471B (en) Medical image segmentation method based on U-shaped network
CN109035263B (en) Automatic brain tumor image segmentation method based on convolutional neural network
CN111784671A (en) Pathological image focus region detection method based on multi-scale deep learning
CN110033032B (en) Tissue slice classification method based on microscopic hyperspectral imaging technology
CN113034505B (en) Glandular cell image segmentation method and glandular cell image segmentation device based on edge perception network
CN112950643B (en) New coronal pneumonia focus segmentation method based on feature fusion deep supervision U-Net
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN112819910A (en) Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network
Wang et al. RSCNet: A residual self-calibrated network for hyperspectral image change detection
CN111259954A (en) Hyperspectral traditional Chinese medicine tongue coating and tongue quality classification method based on D-Resnet
CN115578406B (en) CBCT jaw bone region segmentation method and system based on context fusion mechanism
CN117132774B (en) Multi-scale polyp segmentation method and system based on PVT
CN111210444A (en) Method, apparatus and medium for segmenting multi-modal magnetic resonance image
CN114898106A (en) RGB-T multi-source image data-based saliency target detection method
CN114549538A (en) Brain tumor medical image segmentation method based on spatial information and characteristic channel
CN116757986A (en) Infrared and visible light image fusion method and device
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
Li et al. Robust blood cell image segmentation method based on neural ordinary differential equations
CN116403121A (en) Remote sensing image water area segmentation method, system and equipment for multi-path fusion of water index and polarization information
CN115526829A (en) Honeycomb lung focus segmentation method and network based on ViT and context feature fusion
CN115546466A (en) Weak supervision image target positioning method based on multi-scale significant feature fusion
CN111862261A (en) FLAIR modal magnetic resonance image generation method and system
CN114972202A (en) Ki67 pathological cell rapid detection and counting method based on lightweight neural network
Xiang et al. A novel weight pruning strategy for light weight neural networks with application to the diagnosis of skin disease
CN116758048B (en) PET/CT tumor periphery feature extraction system and extraction method based on transducer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant