CN117422916A - MR medical image colorectal cancer staging algorithm and system based on weak supervision learning - Google Patents

MR medical image colorectal cancer staging algorithm and system based on weak supervision learning

Info

Publication number
CN117422916A
Authority
CN
China
Prior art keywords
image
colorectal cancer
staging
medical image
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311382618.4A
Other languages
Chinese (zh)
Inventor
吴志平
鲍军
高阳
杨柳
何克磊
司呈帅
邵鹏
曹月鹏
李姝萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202311382618.4A
Publication of CN117422916A
Legal status: Pending

Classifications

    • G06V 10/764 - Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06N 3/0464 - Neural network architectures; Convolutional networks [CNN, ConvNet]
    • G06N 3/0895 - Neural network learning methods; Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G06T 7/0012 - Image analysis; Biomedical image inspection
    • G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 10/454 - Local feature extraction; integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/7753 - Generating sets of training patterns; incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • G06V 10/82 - Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/10088 - Image acquisition modality; Magnetic resonance imaging [MRI]
    • G06T 2207/20081 - Special algorithmic details; Training; Learning
    • G06T 2207/20084 - Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30028 - Subject of image; Colon; Small intestine
    • G06T 2207/30096 - Subject of image; Tumor; Lesion
    • G06V 2201/03 - Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Public Health (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an MR medical image colorectal cancer staging algorithm and system based on weakly supervised learning, which use target-region localization and local-feature assistance to alleviate the small inter-class differences and large intra-class variance of MR images of different stages. An object-based attention activation mechanism is constructed to establish a connection between the classification decision and the convolution feature map and to accurately localize the target object; a multi-scale attention localization mechanism is constructed to capture the unique local features of each target through localization boxes of different scales, improving the accuracy of fine-grained classification. Joint optimization training is then performed, and new colorectal cancer MR image data are tested to obtain the prediction result. The method can effectively acquire finer-grained features and can be applied to the field of colorectal cancer MR image staging.

Description

MR medical image colorectal cancer staging algorithm and system based on weak supervision learning
Technical Field
The invention belongs to the field of medical imaging, and particularly relates to an MR medical image colorectal cancer staging method based on weakly supervised learning.
Background
Colorectal cancer is one of the most common clinical malignant tumors of the digestive system. Current treatment relies mainly on surgery, supplemented by comprehensive measures such as chemotherapy, radiotherapy and targeted therapy, and the choice of treatment plan is closely related to the tumor stage. However, manually staging tumors from magnetic resonance (MR) images is a very time-consuming and labor-intensive task. In the medical field, doctors or professional medical image analysts often need to examine the images carefully to determine the location, size and stage of a tumor. During tumor staging, a doctor first has to detect the tumor regions in the image, quantify those regions and measure the tumor. This typically requires viewing the images frame by frame, which consumes a great deal of time and effort, especially when processing large image sequences. Moreover, because staging criteria may vary between doctors, manual staging is also affected by subjectivity and inconsistency: different doctors may assign different stages to the same patient's tumor, which in turn affects the patient's treatment and prognosis. Therefore, to improve the accuracy and efficiency of medical image analysis while reducing the workload of doctors and professional medical image analysts, it is of great significance to develop artificial intelligence algorithms that assist in the analysis and staging prediction of medical images.
For colorectal cancer staging from MR images, the colorectal images of different patients share similar anatomical structures, so the differences between classes are small; at the same time, tumors belonging to the same stage may differ greatly in infiltration depth, orientation, angle and occlusion, so the intra-class variance is large. This makes medical image tumor staging more challenging than conventional image classification. The invention therefore addresses the small inter-class differences and large intra-class variance of MR staging data and uses a two-stage recurrent attention convolutional neural network to obtain more effective fine-grained features.
Disclosure of Invention
The invention provides an MR medical image colorectal cancer staging algorithm and system based on weakly supervised learning, which establish an object-based attention activation mechanism and a multi-scale attention mechanism. On the one hand, a connection between the final classification decision and the convolution feature map is established through gradient backpropagation, so that the region where the target object is located can be accurately localized and the interference of background information in the image effectively eliminated. On the other hand, discriminative local regions are localized on the feature map of the object image through multi-scale localization boxes and a maximum-response localization method, so that the unique local features of different stages are captured, the network model can better distinguish colorectal cancer MR images of different stages, and the accuracy of fine-grained image classification is further improved. In order to achieve the above purpose, the technical scheme of the invention is as follows:
An MR medical image colorectal cancer staging algorithm based on weakly supervised learning, where MR denotes magnetic resonance imaging; the method comprises the following steps:
step 1, acquiring a colorectal cancer MR image data set, preprocessing an acquired medical image, and dividing a training set, a verification set and a test set;
step 2, constructing an object-based attention activation module, and connecting the correct classification score with object region localization through gradient backpropagation, so as to accurately localize the object region in a mutually enhancing manner;
step 3, constructing a multi-scale attention positioning module, localizing local feature regions by selecting the regions with the largest response values in the feature channels, further extracting detailed features of these local regions in the MR image, and feeding the detailed features into the classification network as new branches to assist fine-grained classification of the image;
step 4, training a medical image staging model, carrying out weight optimization, and storing model parameters;
and 5, testing the colorectal cancer MR image by using the optimally trained medical image staging model to obtain a final staging result.
Further, the preprocessing implementation process of the step 1 is as follows:
selecting three-dimensional colorectal MR data, performing voxel-spacing adjustment, colorectal region extraction, resampling and data normalization on the images, applying random cropping and random flipping for data augmentation, and dividing a training set, a validation set and a test set, wherein each set comprises MR images of the colorectal region and the corresponding T-staging results.
Further, in step 2, the object-based attention activation module generates an activation map corresponding to the original image: a feature map F is extracted by the feature extraction network, the prediction score P_c of the category c to which the image belongs is obtained through the fully connected layer, the weight W_c required by the convolution feature map F is calculated, and the activation map A_c corresponding to the feature map F at the different spatial positions (x, y, z) is obtained, where F_(x,y,z) denotes the value of the specified convolution layer at spatial position (x, y, z).
Further, the multi-scale attention positioning module of step 3 assists fine-grained classification of the image by extracting detailed features. A local cross-channel information interaction method is used so that the network assigns higher weights to important channels; for an aggregated feature channel y_i ∈ R, the attention learning of the feature channel is described as:
I = σ(Wy)
where the parameter matrix W captures local cross-channel interaction information by considering each channel and its neighbours.
Further, in step 4, the medical image staging model is trained and optimized; the overall training objective is to minimize the weighted sum of the overall classification loss function, the discrimination loss function of the cropped target image, and the classification loss function of the local feature regions. The model is trained, the model and the visualization results are saved every fixed number of iterations, and the hyper-parameter settings are adjusted appropriately according to the results, so that the model is iteratively optimized.
Further, in step 5, the MR images of the test set are tested with the optimally trained medical image staging model to obtain the final staging result.
An MR medical image colorectal cancer staging system based on weakly supervised learning, for implementing the above staging algorithm, comprises:
the first module is used for acquiring a colorectal cancer MR image data set, preprocessing the acquired medical image and dividing a training set, a verification set and a test set;
the second module is used for constructing an object-based attention activation module, and connecting the correct classification score with object region localization through gradient backpropagation, so as to accurately localize the object region in a mutually enhancing manner;
the third module is used for constructing a multi-scale attention positioning module, localizing the local feature regions by selecting the regions with the largest response values in the feature channels, further extracting detailed features of these local regions in the MR image, and feeding the detailed features into the classification network as new branches to assist fine-grained classification of the image;
the fourth module is used for training a medical image staging model, carrying out weight optimization and storing model parameters;
and a fifth module for testing the colorectal cancer MR image by using the optimally trained medical image staging model to obtain a final staging result.
The beneficial effects of the invention are as follows: the invention discloses an MR medical image colorectal cancer staging algorithm and system based on weakly supervised learning, which use target-region localization and local-feature assistance to alleviate the small inter-class differences and large intra-class variance of MR images of different stages. The method can effectively acquire finer-grained features, can be applied to the field of colorectal cancer MR image staging, and can also be extended to staging of other medical images.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of the overall architecture of the present invention;
FIG. 3 is a schematic diagram of an object-based attention activation module of the present invention;
FIG. 4 is a schematic diagram of a multi-scale attention positioning module according to the present invention.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings.
Example 1
This embodiment discloses an MR medical image colorectal cancer staging algorithm based on weakly supervised learning, which, as shown in Fig. 1, specifically comprises the following steps:
step (1): colorectal cancer MR image data preprocessing and construction of data sets
Three-dimensional colorectal cancer MR image data are selected to form the original dataset, and the voxel spacing of the images is unified to facilitate restoration of the real anatomy, so that the important information in the images can be learned more efficiently. The colorectal target region is cropped and resampled to 96 x 96, and data normalization is then performed to scale the pixel values of the image to the range [0, 1].
The training set, validation set and test set are divided according to a 3:1:1 ratio, and each set comprises colorectal-region MR images and the corresponding T-staging results. To improve the generalization ability of the model and avoid over-fitting, the training images are augmented by random cropping and random flipping after zero-value padding.
Formally, the constructed dataset is a training set {(x_i, y_i)}, i = 1, ..., N, where x_i ∈ R^(W×H×D) is an input 3D image and y_i ∈ {0, 1} is the ground-truth annotation at the image level; N denotes the total number of training images.
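As an illustrative aid (not part of the original disclosure), the preprocessing of step (1) could be sketched roughly as follows; the target voxel spacing, the fixed crop size and the function names used here are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def preprocess_volume(volume, spacing, target_spacing=(1.0, 1.0, 1.0), crop_size=(96, 96, 96)):
    """Hypothetical preprocessing: unify voxel spacing, crop/pad the colorectal
    region to a fixed size, and normalize intensities to [0, 1]."""
    # Resample so that every image has the same voxel spacing.
    zoom = [s / t for s, t in zip(spacing, target_spacing)]
    volume = ndimage.zoom(volume, zoom, order=1)

    # Center-crop (or zero-pad) to the fixed input size.
    padded = np.zeros(crop_size, dtype=np.float32)
    src, dst = [], []
    for dim, size in enumerate(crop_size):
        start = max((volume.shape[dim] - size) // 2, 0)
        length = min(size, volume.shape[dim])
        src.append(slice(start, start + length))
        dst.append(slice(0, length))
    padded[tuple(dst)] = volume[tuple(src)]

    # Min-max normalization to [0, 1].
    vmin, vmax = padded.min(), padded.max()
    return (padded - vmin) / (vmax - vmin + 1e-8)

def augment(volume, rng=np.random.default_rng()):
    """Random flips along each axis as a simple stand-in for the random
    cropping / flipping augmentation mentioned in the text."""
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    return volume.copy()
```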
Step (2): building object-based attention activation mechanisms
As shown in Fig. 2, an object-based attention activation module is constructed that links the correct classification score to object region localization by gradient backpropagation, so as to accurately localize the object region in a mutually enhancing manner. For an input image x, features are first extracted by a feature extraction network, and the feature map of the input image after the last convolution layer of the feature extraction network is expressed as:
F = BaseNet(x) ∈ R^(C×W×H×D)
the base network BaseNet may be a convolutional neural network such as 3D res net, 3D acceptance, etc., C is the number of channels of the feature map, and w×h×d is the spatial dimension. Obtaining a predictive score P of a category c to which the image belongs through a full connection layer c . From the feature map F of the specified convolution layer, the weight W required for convolving the feature map F is calculated using the following formula c
Wherein F is (x,y,z) Representing the value of the specified convolution layer at the spatial location (x, y, z). When the weight W required by the convolution characteristic diagram F is obtained c Then, an activation graph A corresponding to the feature graph F at different spatial positions (x, y, z) can be calculated c
Since the activation map A_c reflects the regions the deep neural network focuses on, the mean value of A_c can be used as a threshold to determine whether each position (x, y, z) in A_c corresponds to a part of the target, yielding a mask M, where A_(x,y,z) denotes the value of A_c at position (x, y, z) and M_(x,y,z) denotes the value of M at position (x, y, z). After the mask M corresponding to the activation map A_c is obtained, the target region in the fine-grained image is determined by finding the maximum connected region of M.
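The formulas for W_c, A_c and the mask M appear as images in the original publication and are not reproduced in this text. The sketch below is one plausible Grad-CAM-style reading of the described procedure (backpropagating the class score P_c, channel-weighted aggregation of F, mean thresholding, largest connected region); it is an assumption, not the patent's exact formulation.

```python
import torch
from scipy import ndimage

def object_attention_mask(backbone, fc_head, x, target_class):
    """Sketch of the object-based attention activation module.

    backbone : maps x -> feature map Fmap of shape (1, C, W, H, D)
    fc_head  : maps pooled features -> class scores
    """
    Fmap = backbone(x)                        # (1, C, W, H, D)
    Fmap.retain_grad()
    pooled = Fmap.mean(dim=(2, 3, 4))         # global average pooling
    scores = fc_head(pooled)                  # (1, num_classes)
    scores[0, target_class].backward()        # backpropagate the class score P_c

    # Grad-CAM-style channel weights W_c (assumed form): averaged gradients.
    weights = Fmap.grad.mean(dim=(2, 3, 4), keepdim=True)    # (1, C, 1, 1, 1)
    activation = torch.relu((weights * Fmap).sum(dim=1))     # A_c, (1, W, H, D)

    # Mean-threshold the activation map to obtain the binary mask M.
    mask = (activation > activation.mean()).squeeze(0).cpu().numpy()

    # Keep only the largest connected region as the target object region.
    # (In practice the mask would be upsampled to image resolution for cropping.)
    labeled, num = ndimage.label(mask)
    if num == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, range(1, num + 1))
    return labeled == (int(sizes.argmax()) + 1)
```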
Step (3): constructing a multiscale attention mechanism
As shown in Fig. 3, a multi-scale attention localization module is constructed to further extract detailed features from the MR images; these detailed features are then fed into the classification network as new branches to assist fine-grained classification. The module localizes the local feature regions by selecting the regions with the largest response values in the feature channels, so that the detailed features of the local regions can be acquired accurately.
The target object image obtained by the object-based attention activation module is fed into the backbone network to obtain the convolution feature map of the last layer. This feature map not only contains high-level visual structure but also retains the spatial information of the image, and it is used to search for the key regions in the target object image.
Since the convolution feature map contains a large number of feature channels, and each feature channel learns different visual pattern information, the first K feature channels with the largest response values are selected, the pixel with the largest response value is located in each of these K channels, regions of different scales centered on that point are framed, and the corresponding local feature maps are then cropped and fed into the classification network. The scales of the localization boxes are predefined; through this multi-scale attention mechanism the detailed features of the target object can be captured accurately, and better classification results can be obtained without adding much computation.
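A minimal sketch of this multi-scale localization step, assuming that the channel response is measured by the mean activation and that the localization-box scales are predefined hyper-parameters; all names and sizes here are illustrative.

```python
import torch

def multiscale_local_crops(feature_map, image, top_k=4, box_sizes=(24, 48)):
    """Locate local regions from the top-K most responsive channels of the
    last convolution feature map and crop multi-scale patches from the image.

    feature_map : (C, w, h, d) last-layer convolution features of the object image
    image       : (W, H, D) cropped target-object image
    """
    C, fw, fh, fd = feature_map.shape
    # Pick the K channels with the largest overall response.
    channel_response = feature_map.reshape(C, -1).mean(dim=1)
    top_channels = channel_response.topk(min(top_k, C)).indices

    scale = [i / f for i, f in zip(image.shape, (fw, fh, fd))]
    crops = []
    for c in top_channels:
        # Voxel with the maximum response inside this channel.
        idx = int(feature_map[c].argmax())
        fx, fy, fz = idx // (fh * fd), (idx // fd) % fh, idx % fd
        cx, cy, cz = int(fx * scale[0]), int(fy * scale[1]), int(fz * scale[2])
        for s in box_sizes:  # predefined localization-box scales
            half = s // 2
            sl = tuple(slice(max(ctr - half, 0), min(ctr + half, dim))
                       for ctr, dim in zip((cx, cy, cz), image.shape))
            crops.append(image[sl])
    return crops  # each crop is fed into the classification network as a new branch
```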
A local cross-channel information interaction method is used so that the network can assign higher weights to important channels while keeping computational cost and model complexity under control. Given an aggregated feature channel y_i ∈ R, the attention learning of the feature channel can be described as:
I = σ(Wy)
where W is a parameter matrix and I is the feature map after channel attention learning.
To achieve local cross-channel information interaction, the feature channels are grouped, with each group containing t channels. To capture intra-group channel correlation, a band matrix W_t is used to learn the channel attention. W_t contains t×C parameters, where t denotes the number of channels in a group and C denotes the total number of channels in the convolution feature map. The band matrix W_t learns the correlations among the channels within a group so that appropriate weights can be reassigned. Thus, for a feature channel y_i, only the information interaction with its t neighbours needs to be considered; that is, W_t captures local cross-channel interaction information by considering each channel and its neighbours.
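Because the band matrix W_t lets each channel interact only with its t neighbours, its effect can be realised as a 1-D convolution over the channel axis (as in ECA-style channel attention). The module below is a sketch under that assumption; the kernel size t is a hypothetical hyper-parameter.

```python
import torch
import torch.nn as nn

class LocalChannelAttention(nn.Module):
    """Local cross-channel interaction: each channel attends to its t
    neighbours, realised as a 1-D convolution over the channel axis."""

    def __init__(self, t=3):
        super().__init__()
        # One shared band of t weights, applied to every channel position.
        self.conv = nn.Conv1d(1, 1, kernel_size=t, padding=t // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                    # x: (B, C, W, H, D)
        y = x.mean(dim=(2, 3, 4))            # aggregated feature channels y_i
        w = self.conv(y.unsqueeze(1))        # interaction with t neighbouring channels
        w = self.sigmoid(w).squeeze(1)       # I = sigma(W y)
        return x * w.view(x.shape[0], -1, 1, 1, 1)   # re-weight the channels
```

In such a design, the re-weighted feature map would then be the one from which the top-K response channels are selected for localization.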
Step (4): model optimization training
During network training, the learning rate is updated adaptively: when the training error shows no obvious improvement over several iterations, the learning rate is reduced. The loss function of the network comprises three parts: the overall classification loss, the discrimination loss of the cropped target image obtained by the object-based attention module, and the classification loss of the local regions; the change of each loss is monitored during training. The weight coefficients of the loss terms are fixed in advance, and the optimization method and hyper-parameters used in training include the optimizer, the initial learning rate, the number of training iterations, and so on.
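A sketch of the joint objective and the adaptive learning-rate policy described above, assuming cross-entropy for each loss term and a reduce-on-plateau schedule; the weight coefficients (lambdas) are placeholders.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def joint_loss(logits_global, logits_object, logits_local_list, label,
               lambdas=(1.0, 1.0, 1.0)):
    """Weighted sum of the overall classification loss, the loss on the cropped
    target image, and the loss on the local feature regions."""
    loss_global = criterion(logits_global, label)
    loss_object = criterion(logits_object, label)
    loss_local = sum(criterion(l, label) for l in logits_local_list) / max(len(logits_local_list), 1)
    return lambdas[0] * loss_global + lambdas[1] * loss_object + lambdas[2] * loss_local

# Adaptive learning-rate update: reduce the LR when the training error shows
# no obvious improvement for several iterations.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", patience=5)
# scheduler.step(epoch_loss)
```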
Step (5): MR imaging staging of colorectal cancer
The MR images of the test set are tested with the optimized, trained model to obtain the final staging result. Relying only on image-level category labels, the weakly supervised model can locate the local regions that carry discriminative information in the image, which greatly alleviates the difficulty of large intra-class variation and small inter-class differences, addresses the problem that fine-grained images are hard to identify accurately, and exploits the local information of the images for classification.
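For completeness, a hypothetical test-time loop corresponding to step (5); the trained model and a test-set loader yielding image volumes are assumed to exist.

```python
import torch

@torch.no_grad()
def predict_stages(model, test_loader, device="cuda"):
    """Run the trained staging model on the test set and return predicted stages."""
    model.eval()
    predictions = []
    for volume in test_loader:                   # volume: (B, 1, W, H, D)
        logits = model(volume.to(device))        # staging scores per class
        predictions.extend(logits.argmax(dim=1).cpu().tolist())
    return predictions
```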
Example 2
This embodiment discloses an MR medical image colorectal cancer staging system based on weakly supervised learning, which is used to implement the staging algorithm described in Example 1 and comprises:
the first module is used for acquiring a colorectal cancer MR image data set, preprocessing the acquired medical image and dividing a training set, a verification set and a test set;
the second module is used for constructing an object-based attention activation module, and connecting the correct classification score with object region localization through gradient backpropagation, so as to accurately localize the object region in a mutually enhancing manner;
the third module is used for constructing a multi-scale attention positioning module, localizing the local feature regions by selecting the regions with the largest response values in the feature channels, further extracting detailed features of these local regions in the MR image, and feeding the detailed features into the classification network as new branches to assist fine-grained classification of the image;
the fourth module is used for training a medical image staging model, carrying out weight optimization and storing model parameters;
and a fifth module for testing the colorectal cancer MR image by using the optimally trained medical image staging model to obtain a final staging result.
The MR medical image colorectal cancer staging algorithm and system based on weakly supervised learning provided by the invention have been described in detail above. It should be noted that the above description covers only preferred embodiments of the invention and the invention is not limited thereto; although the invention has been described in detail with reference to the above embodiments, those skilled in the art may modify some of the technical features described therein or replace them with equivalents. Any equivalent replacement or modification made within the core idea and principle of the invention shall fall within the protection scope of the invention.

Claims (7)

1. An MR medical image colorectal cancer staging algorithm based on weakly supervised learning, characterized in that MR denotes magnetic resonance imaging; the method comprises the following steps:
step 1, acquiring a colorectal cancer MR image data set, preprocessing an acquired medical image, and dividing a training set, a verification set and a test set;
step 2, constructing an object-based attention activation module, and connecting the correct classification score with object region localization through gradient backpropagation, so as to accurately localize the object region in a mutually enhancing manner;
step 3, constructing a multi-scale attention positioning module, localizing local feature regions by selecting the regions with the largest response values in the feature channels, further extracting detailed features of these local regions in the MR image, and feeding the detailed features into the classification network as new branches to assist fine-grained classification of the image;
step 4, training a medical image staging model, carrying out weight optimization, and storing model parameters;
and 5, testing the colorectal cancer MR image by using the optimally trained medical image staging model to obtain a final staging result.
2. The MR medical image colorectal cancer staging algorithm based on weakly supervised learning according to claim 1, wherein the preprocessing of step 1 is implemented as follows:
selecting three-dimensional colorectal MR data, performing voxel-spacing adjustment, colorectal region extraction, resampling and data normalization on the images, applying random cropping and random flipping for data augmentation, and dividing a training set, a validation set and a test set, wherein each set comprises MR images of the colorectal region and the corresponding T-staging results.
3. The MR medical image colorectal cancer staging algorithm based on weakly supervised learning according to claim 1, wherein in step 2 an activation map corresponding to the original image is generated by the object-based attention activation module: a feature map F is extracted by the feature extraction network, the prediction score P_c of category c is obtained through the fully connected layer, the weight W_c required by the convolution feature map F is calculated, and the activation map A_c corresponding to the feature map F at the different spatial positions (x, y, z) is obtained, where F_(x,y,z) denotes the value of the specified convolution layer at spatial position (x, y, z).
4. The MR medical image colorectal cancer staging algorithm based on weakly supervised learning according to claim 1, wherein the multi-scale attention positioning module of step 3 assists fine-grained classification of the image by extracting detailed features and uses a local cross-channel information interaction method so that the network assigns higher weights to important channels; for an aggregated feature channel y_i ∈ R, the attention learning of the feature channel is described as:
I = σ(Wy)
where the parameter matrix W captures local cross-channel interaction information by considering each channel and its neighbours.
5. The MR medical image colorectal cancer staging algorithm based on weakly supervised learning according to claim 1, wherein in step 4 the medical image staging model is trained and optimized; the overall training objective is to minimize the weighted sum of the overall classification loss function, the discrimination loss function of the cropped target image, and the classification loss function of the local feature regions; the model is trained, the model and the visualization results are saved every fixed number of iterations, and the hyper-parameter settings are adjusted appropriately according to the results so that the model is iteratively optimized.
6. The MR medical image colorectal cancer staging algorithm based on weakly supervised learning according to claim 1, wherein in step 5 the MR images of the test set are tested with the optimally trained medical image staging model to obtain the final staging result.
7. An MR medical image colorectal cancer staging system based on weakly supervised learning for implementing the staging algorithm according to any one of claims 1 to 6, characterized by comprising:
the first module is used for acquiring a colorectal cancer MR image data set, preprocessing the acquired medical image and dividing a training set, a verification set and a test set;
the second module is used for constructing an object-based attention activation module, and connecting the correct classification score with object region localization through gradient backpropagation, so as to accurately localize the object region in a mutually enhancing manner;
the third module is used for constructing a multi-scale attention positioning module, localizing the local feature regions by selecting the regions with the largest response values in the feature channels, further extracting detailed features of these local regions in the MR image, and feeding the detailed features into the classification network as new branches to assist fine-grained classification of the image;
the fourth module is used for training a medical image staging model, carrying out weight optimization and storing model parameters;
and a fifth module for testing the colorectal cancer MR image by using the optimally trained medical image staging model to obtain a final staging result.
CN202311382618.4A 2023-10-24 2023-10-24 MR medical image colorectal cancer staging algorithm and system based on weak supervision learning Pending CN117422916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311382618.4A CN117422916A (en) 2023-10-24 2023-10-24 MR medical image colorectal cancer staging algorithm and system based on weak supervision learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311382618.4A CN117422916A (en) 2023-10-24 2023-10-24 MR medical image colorectal cancer staging algorithm and system based on weak supervision learning

Publications (1)

Publication Number Publication Date
CN117422916A true CN117422916A (en) 2024-01-19

Family

ID=89530851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311382618.4A Pending CN117422916A (en) 2023-10-24 2023-10-24 MR medical image colorectal cancer staging algorithm and system based on weak supervision learning

Country Status (1)

Country Link
CN (1) CN117422916A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611930A (en) * 2024-01-23 2024-02-27 中国海洋大学 Fine granularity classification method of medical image based on CLIP
CN117611930B (en) * 2024-01-23 2024-04-26 中国海洋大学 Fine granularity classification method of medical image based on CLIP

Similar Documents

Publication Publication Date Title
CN112270660B (en) Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network
US9123095B2 (en) Method for increasing the robustness of computer-aided diagnosis to image processing uncertainties
CN112446891B (en) Medical image segmentation method based on U-Net network brain glioma
CN105809175B (en) Cerebral edema segmentation method and system based on support vector machine algorithm
Ashwin et al. Efficient and reliable lung nodule detection using a neural network based computer aided diagnosis system
CN108921821A (en) Method of discrimination based on the LASSO mammary cancer armpit lymph gland transfering state returned
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
Nandihal et al. Glioma Detection using Improved Artificial Neural Network in MRI Images
Kumar et al. An approach for brain tumor detection using optimal feature selection and optimized deep belief network
CN117422916A (en) MR medical image colorectal cancer staging algorithm and system based on weak supervision learning
CN112132808A (en) Breast X-ray image lesion detection method and device based on normal model learning
CN112330645A (en) Glioma grading method and device based on attention mechanism
CN113989551A (en) Alzheimer disease classification method based on improved ResNet network
Basha et al. An effective and robust cancer detection in the lungs with BPNN and watershed segmentation
Abed Lung Cancer Detection from X-ray images by combined Backpropagation Neural Network and PCA
Alagarsamy et al. Identification of Brain Tumor using Deep Learning Neural Networks
Bhakta et al. Lung tumor segmentation and staging from ct images using fast and robust fuzzy C-Means clustering
Singh et al. Detection of Brain Tumors Through the Application of Deep Learning and Machine Learning Models
CN113889235A (en) Unsupervised feature extraction system for three-dimensional medical image
Athanasiadis et al. Segmentation of complementary DNA microarray images by wavelet-based Markov random field model
Mandle et al. WSSOA: whale social spider optimization algorithm for brain tumor classification using deep learning technique
CN117649400B (en) Image histology analysis method and system under abnormality detection framework
Sathya et al. Development of CAD system based on enhanced clustering based segmentation algorithm for detection of masses in breast DCE-MRI
CN116934754B (en) Liver image identification method and device based on graph neural network
Pour et al. Brain Tumor Detection from MRI Images based on Cellular Neural Network and Firefly Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination