CN116051467A - Bladder cancer myolayer invasion prediction method based on multitask learning and related device - Google Patents


Info

Publication number
CN116051467A
Authority
CN
China
Prior art keywords
image
bladder cancer
module
invasion
myolayer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211607134.0A
Other languages
Chinese (zh)
Other versions
CN116051467B (en)
Inventor
李建鹏
罗梓欣
邱峥轩
黄炳升
邹玉坚
邓磊
曹康养
岳沛言
杨水清
黄翔
张坤林
高云
梁满球
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Dongguan Peoples Hospital
Original Assignee
Shenzhen University
Dongguan Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University and Dongguan Peoples Hospital
Priority to CN202211607134.0A
Publication of CN116051467A
Application granted
Publication of CN116051467B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application discloses a bladder cancer myolayer invasion prediction method based on multitask learning, which comprises the following steps: determining, by a feature extraction module in a prediction network model, a first feature map of an MRI image; inputting the first feature map into a segmentation module in the prediction network model, and determining a mask map of the MRI image through the segmentation module; fusing the mask map and the MRI image to obtain a fused image, inputting the fused image into the feature extraction module, and determining a second feature map of the fused image through the feature extraction module; and inputting the second feature map into a classification module in the prediction network model, and determining the bladder cancer myolayer invasion category corresponding to the MRI image through the classification module. Because the multitask prediction network model determines the bladder cancer muscle layer invasion category from the MRI image, the classification module can learn the lesion information extracted by the segmentation module, which improves the classification performance of the classification module and hence the accuracy of the obtained bladder cancer muscle layer invasion category.

Description

Bladder cancer myolayer invasion prediction method based on multitask learning and related device
Technical Field
The application relates to the technical field of biomedical engineering, in particular to a bladder cancer muscular layer invasion prediction method based on multitask learning and a related device.
Background
Bladder cancer (Bca) refers to a malignant tumor arising from the bladder mucosa. It is one of the most common malignancies of the urinary system, ranking tenth among malignant tumors worldwide, and is the sixth most common cancer in men and the seventeenth in women. The most common type of Bca is urothelial carcinoma (more than 90%), followed by squamous cell carcinoma, adenocarcinoma, and small cell carcinoma. Histologically, the bladder wall comprises a mucosal layer, submucosa, muscularis propria, and perivesical fat layer. The muscularis propria is the key landmark for bladder cancer staging and treatment: according to whether the tumor invades the muscularis propria, Bca is classified into non-muscle invasive bladder cancer (NMIBC) and muscle invasive bladder cancer (MIBC). NMIBC is treated by transurethral resection of bladder tumor (TURBT), with or without intravesical instillation chemotherapy, whereas MIBC is treated by partial or radical cystectomy, supplemented by radiotherapy, chemotherapy, or combination therapy. Clinically, the two tumor types differ markedly in treatment approach, prognostic significance, and long-term survival.
According to the European Association of Urology and American Urological Association guidelines, the usual clinical diagnostic method for staging and grading Bca is transurethral resection of bladder tumor (TURBT). However, TURBT risks underestimating the tumor stage and is sensitive to the operator's skill level, which can lead to inadequate preoperative staging, so multiple TURBT procedures may be required to accurately stage bladder cancer. Repeated TURBT increases the risk of bladder perforation and tumor embolism, aggravates the condition, and increases the psychological and economic burden on the patient. Magnetic resonance imaging (MRI) is a non-invasive, highly reproducible examination method. Studies have shown that MRI offers particular advantages for preoperative staging, lesion grading, and monitoring of clinical efficacy in Bca. However, when physicians diagnose Bca muscle invasion visually from MRI images, the diagnosis is easily influenced by subjectivity and the physician's experience, which affects the accuracy of the obtained bladder cancer muscle invasion category.
There is thus a need for improvement in the art.
Disclosure of Invention
In view of the above deficiencies of the prior art, the technical problem to be solved by the present application is to provide a bladder cancer myolayer invasion prediction method based on multitask learning and a related device.
In order to solve the above technical problems, a first aspect of embodiments of the present application provides a bladder cancer muscular layer invasion prediction method based on multitask learning, where the method includes:
acquiring an MRI image, wherein the MRI image carries an image of a bladder;
inputting the MRI image into a feature extraction module in a pre-trained prediction network model, and determining a plurality of first feature graphs corresponding to the MRI image through the feature extraction module;
inputting a plurality of first feature maps into a segmentation module in the prediction network model, and determining a mask map corresponding to the MRI image through the segmentation module;
fusing the mask map and the MRI image to obtain a fused image, inputting the fused image into the feature extraction module, and determining a second feature map corresponding to the fused image through the feature extraction module;
and inputting the second feature map into a classification module in the prediction network model, and determining the bladder cancer myolayer invasion category corresponding to the MRI image through the classification module.
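The four model steps above can be sketched as a single forward pass. The following is a minimal, hypothetical PyTorch-style sketch (the class names, the argument names, and the element-wise multiplication used for fusing the mask with the MRI image are illustrative assumptions; the patent does not fix these details):

```python
import torch
import torch.nn as nn

class MultiTaskBCaNet(nn.Module):
    """Minimal sketch: shared feature extractor, segmentation decoder, classifier."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module, classifier: nn.Module):
        super().__init__()
        self.encoder = encoder        # feature extraction module
        self.decoder = decoder        # segmentation module
        self.classifier = classifier  # classification module

    def forward(self, mri: torch.Tensor):
        feats = self.encoder(mri)       # list of multi-scale first feature maps
        mask = self.decoder(feats)      # lesion mask map of the MRI image
        fused = mri * mask              # fuse mask with MRI (assumed: element-wise)
        feats2 = self.encoder(fused)    # second feature maps from the fused image
        logits = self.classifier(feats2[-1])  # muscle-invasion category logits
        return mask, logits
```

Passing the volume through the shared encoder twice, once raw and once fused with the predicted mask, is what lets the classification branch benefit from the lesion information learned by the segmentation branch.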
In the bladder cancer muscle layer invasion prediction method based on multi-task learning, the feature extraction module comprises a convolution unit, a pooling unit, and a plurality of downsampling units which are sequentially cascaded, wherein each downsampling unit comprises a three-dimensional convolution block and a plurality of three-dimensional identity modules which are sequentially cascaded; each three-dimensional identity module comprises a plurality of cascaded three-dimensional convolution layers and an adder, and the input items of the adder are the output item of the last three-dimensional convolution layer and the input item of the foremost three-dimensional convolution layer.
The bladder cancer muscle layer invasion prediction method based on multi-task learning, wherein the three-dimensional convolution block comprises a plurality of cascaded three-dimensional convolution layers, an adder and a target three-dimensional convolution layer, wherein the input item of the target three-dimensional convolution layer is the input item of the three-dimensional convolution layer positioned at the forefront, and the input item of the adder is the output item of the three-dimensional convolution layer positioned at the last and the output item of the target three-dimensional convolution layer.
In the bladder cancer muscular layer invasion prediction method based on multi-task learning, the segmentation module comprises a first upsampling unit, a plurality of second upsampling units, and two third upsampling units which are sequentially cascaded; the first upsampling unit is connected with the last downsampling unit, the rear of the two third upsampling units is connected with the foremost downsampling unit, and the front third upsampling unit is connected with the convolution unit; the second upsampling units correspond one-to-one with the downsampling units located between the last downsampling unit and the foremost downsampling unit, and each second upsampling unit is connected with its corresponding downsampling unit.
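The skip-connected upsampling units described above follow the familiar encoder–decoder (U-Net-style) pattern. A single decoder stage might look like the hypothetical sketch below (the channel counts, trilinear upsampling, and concatenation-based skip fusion are assumptions, not details fixed by the patent):

```python
import torch
import torch.nn as nn

class UpUnit(nn.Module):
    """One decoder stage: upsample, concatenate the encoder skip feature, convolve."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)  # double spatial resolution to match the skip feature
        return self.conv(torch.cat([x, skip], dim=1))
```

Chaining several such units, each wired to the matching downsampling unit of the encoder, reproduces the one-to-one skip connections the paragraph describes.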
In the bladder cancer myolayer invasion prediction method based on multi-task learning, the classification module comprises a pooling unit and a classification unit which are sequentially cascaded, and the pooling unit is connected with the feature extraction module.
The bladder cancer muscular layer invasion prediction method based on the multi-task learning, wherein a loss function adopted in the training process of the prediction network model is as follows:
L_total = L_seg + L_cls
wherein L_total represents the total loss function, L_seg represents the segmentation loss function, and L_cls represents the classification loss function.
In the bladder cancer myolayer invasion prediction method based on multitask learning, the classification loss function L_cls is:
L_cls = -α_t · (1 - p_t)^γ · log(p_t)
p_t = p when the sample to be classified is a positive sample, and p_t = 1 - p otherwise;
α_t = α when the sample to be classified is a positive sample, and α_t = 1 - α otherwise;
wherein p_t represents the probability that the sample to be classified is a positive sample, α_t represents the weight coefficient, and α, γ, and p are preset parameters.
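The classification loss above is the focal loss. A minimal numerical sketch of the two loss formulas follows (binary case; the default values α = 0.25 and γ = 2 are the usual focal-loss choices, not values fixed by the application, and the plain sum in `total_loss` is one common way to combine the two task losses):

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss: L_cls = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the predicted probability of the positive class; y is the label (0 or 1).
    p_t and alpha_t select the probability and weight for the true class.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

def total_loss(l_seg: float, l_cls: float) -> float:
    """Multitask objective: segmentation and classification losses summed."""
    return l_seg + l_cls
```

The (1 - p_t)^γ factor down-weights well-classified samples, so a confidently correct prediction (p = 0.9 for a positive sample) contributes far less loss than a hard one (p = 0.1).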
A second aspect of embodiments of the present application provides a bladder cancer myolayer invasion prediction system based on multitasking learning, the system comprising:
an acquisition module, configured to acquire an MRI image, wherein the MRI image carries an image of a bladder;
the control module is used for controlling the feature extraction module in the pre-trained prediction network model to determine a plurality of first feature maps corresponding to the MRI image; controlling a segmentation module in the prediction network model to determine a mask map corresponding to the MRI image based on a first feature map; the mask map and the MRI image are fused to obtain a fused image, and the feature extraction module is controlled to determine a second feature map corresponding to the fused image based on the fused image; and controlling a classification module in the prediction network model to determine the bladder cancer myolayer invasion category corresponding to the MRI image based on a second feature map.
A third aspect of the embodiments of the present application provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the bladder cancer myolayer invasion prediction method based on multitask learning as described above.
A fourth aspect of the present embodiment provides a terminal device, including: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the bladder cancer myolayer invasion prediction method based on multitask learning as described in any of the above.
The beneficial effects are that: compared with the prior art, the application provides a bladder cancer muscular layer invasion prediction method based on multitask learning, which comprises the following steps: determining a plurality of first feature maps corresponding to the MRI image through a feature extraction module in a prediction network model; inputting the plurality of first feature maps into a segmentation module in the prediction network model, and determining a mask map corresponding to the MRI image through the segmentation module; fusing the mask map and the MRI image to obtain a fused image, inputting the fused image into the feature extraction module, and determining a second feature map corresponding to the fused image through the feature extraction module; and inputting the second feature map into a classification module in the prediction network model, and determining the bladder cancer myolayer invasion category corresponding to the MRI image through the classification module. Because the multitask prediction network model determines the bladder cancer muscle layer invasion category from the MRI image, the classification module can learn the lesion information extracted by the segmentation module, which improves the classification performance of the classification module and hence the accuracy of the obtained bladder cancer muscle layer invasion category.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a bladder cancer muscular layer invasion prediction method based on multitasking learning provided in the present application.
Fig. 2 is a model structure diagram of a predictive network model.
Fig. 3 is a schematic structural diagram of a bladder cancer muscular layer invasion prediction system based on multitask learning provided by the application.
Fig. 4 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
The application provides a bladder cancer myolayer invasion prediction method based on multitask learning and a related device. In order to make the purposes, technical solutions, and effects of the application clearer and more definite, the application is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the sequence numbers of the steps in this embodiment do not imply an order of execution; the execution order of each process is determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It has been found that bladder cancer (Bca) refers to a malignant tumor arising from the bladder mucosa. It is one of the most common malignancies of the urinary system, ranking tenth among malignant tumors worldwide, and is the sixth most common cancer in men and the seventeenth in women. The most common type of Bca is urothelial carcinoma (more than 90%), followed by squamous cell carcinoma, adenocarcinoma, and small cell carcinoma. Histologically, the bladder wall comprises a mucosal layer, submucosa, muscularis propria, and perivesical fat layer. The muscularis propria is the key landmark for bladder cancer staging and treatment: according to whether the tumor invades the muscularis propria, Bca is classified into non-muscle invasive bladder cancer (NMIBC) and muscle invasive bladder cancer (MIBC). NMIBC is treated by transurethral resection of bladder tumor (TURBT), with or without intravesical instillation chemotherapy, whereas MIBC is treated by partial or radical cystectomy, supplemented by radiotherapy, chemotherapy, or combination therapy. Clinically, the two tumor types differ markedly in treatment approach, prognostic significance, and long-term survival.
According to the European Association of Urology and American Urological Association guidelines, the usual clinical diagnostic method for staging and grading Bca is transurethral resection of bladder tumor (TURBT). However, TURBT risks underestimating the tumor stage and is sensitive to the operator's skill level, which can lead to inadequate preoperative staging, so multiple TURBT procedures may be required to accurately stage bladder cancer. Repeated TURBT increases the risk of bladder perforation and tumor embolism, aggravates the condition, and increases the psychological and economic burden on the patient. Magnetic resonance imaging (MRI) is a non-invasive, highly reproducible examination method. Studies have shown that MRI offers particular advantages for preoperative staging, lesion grading, and monitoring of clinical efficacy in Bca. However, when physicians diagnose Bca muscle invasion visually from MRI images, the diagnosis is easily influenced by subjectivity and the physician's experience, which affects the accuracy of the obtained bladder cancer muscle invasion category.
In order to solve the above problems, Bca muscle invasion prediction based on MRI images is currently commonly implemented with either radiomics methods or deep learning methods. Radiomics refers to extracting high-dimensional feature information from images in a high-throughput manner to quantitatively describe lesion morphology and heterogeneity. For example, Xu et al. (European Radiology, 2020) extracted radiomics features such as intensity, texture, and shape from the region of interest (ROI) of bladder cancer patients in diffusion-weighted imaging (DWI), and selected an optimal feature subset as input to a random forest (RF) classifier for prediction; the area under the receiver operating characteristic (ROC) curve (AUC) of the final model reached 0.907. Wang et al. (European Radiology, 2020) extracted relevant radiomics features from T2WI, DWI, and apparent diffusion coefficient (ADC) maps of bladder cancer patients at two independent clinical centers and predicted using a logistic regression (LR) classifier, with a model AUC of 0.813 on the validation set. These studies show that radiomics features extracted from MRI images can effectively and quantitatively predict bladder cancer muscle invasion. However, traditional radiomics methods require manual delineation of tumor lesions as ROIs, and the accuracy of manual delineation depends largely on the radiologist's experience, which is time-consuming, labor-intensive, and subjective. In addition, even for the same type of tumor, different lesions differ in size, edge, shape, texture, and so on, and it is difficult to accurately describe different lesions with the manually designed features used in radiomics.
Compared with radiomics, deep learning can autonomously mine the intrinsic laws of data and learn discriminative deep features, without manually delineating ROIs or designing features, and has a higher degree of automation. For example, Yang et al. (European Journal of Radiology, 2021) designed a convolutional neural network system named DL-CNN to distinguish NMIBC from MIBC based on contrast-enhanced computed tomography (CT) images of bladder cancer patients, with a model AUC of 0.946 on the validation set and 0.998 on the test set. That model achieves extremely high performance, but because only sections with similar lesion sizes were included, difficult samples may have been excluded; in actual diagnosis, the diagnostic performance of the model may therefore be exaggerated, and data from other centers and clinical practice are required to prove its robustness and generalization. Zhang et al. (Frontiers in Oncology, 2021) built a novel 3D-CNN framework based on a guided-filter pyramid network to predict muscle invasion of bladder cancer; the AUCs of the model on the validation set and test set were 0.861 and 0.791, respectively, and the performance has yet to be improved. These studies show that deep learning methods can effectively and automatically extract, from medical images, relevant features reflecting the Bca muscle invasion status, and can achieve more accurate Bca muscle invasion prediction. Although deep learning does not require manually designed features, and thus overcomes this limitation of radiomics, existing single-task deep learning methods perform direct classification and lack the assistance and supervision of additional information, which affects the accuracy of bladder cancer muscle invasion prediction.
In the embodiments of the present application, a plurality of first feature maps corresponding to the MRI image are determined through a feature extraction module in a prediction network model; the plurality of first feature maps are input into a segmentation module in the prediction network model, and a mask map corresponding to the MRI image is determined through the segmentation module; the mask map and the MRI image are fused to obtain a fused image, the fused image is input into the feature extraction module, and a second feature map corresponding to the fused image is determined through the feature extraction module; and the second feature map is input into a classification module in the prediction network model, and the bladder cancer myolayer invasion category corresponding to the MRI image is determined through the classification module. By adopting a multitask prediction network model and introducing a bladder cancer lesion segmentation task into the bladder cancer muscle invasion classification task, the segmentation task guides the classification task, so that the prediction network model focuses more on lesion regions and provides the classification task with more lesion information; this improves the classification performance of the classification module and hence the accuracy of the obtained bladder cancer muscle invasion category.
The application will be further described by the description of embodiments with reference to the accompanying drawings.
The embodiment provides a bladder cancer muscular layer invasion prediction method based on multitask learning, as shown in fig. 1, the method comprises the following steps:
s10, acquiring an MRI image.
Specifically, the MRI image may be an MRI image of a bladder cancer patient, that is, the MRI image carries an image of the bladder. The MRI image may be acquired in real time, acquired over a network, or sent by an external device. For example, the MRI image is a T2WI sequence image of a pathologically diagnosed Bca patient, where the magnetic resonance scan parameters corresponding to the T2WI sequence image may be: instrument, Siemens 3.0T superconducting magnetic resonance imaging system MAGNETOM Skyra; TR = 7500 ms; TE = 101 ms; slice thickness = 4.0 mm; slice spacing = 0.4 mm; FOV = 20 cm; matrix = 320×320; number of excitations, 2.
S20, inputting the MRI image into a feature extraction module in a pre-trained prediction network model, and determining a plurality of first feature graphs corresponding to the MRI image through the feature extraction module.
Specifically, the prediction network model is pre-trained and is used for predicting the bladder cancer muscle layer invasion category. The prediction network model comprises a feature extraction module, a segmentation module, and a classification module, and the feature extraction module is connected with the segmentation module and the classification module respectively. A mask map of the lesion region in the MRI image is determined through the segmentation module, and the bladder cancer muscle layer invasion category corresponding to the MRI image is determined through the classification module. The classification module uses the MRI image and the mask map output by the segmentation module to determine the invasion category, so that the classification module can learn the lesion information carried by the mask map, which can improve the accuracy of the bladder cancer muscle layer invasion category.
The feature extraction module is used for extracting features of the MRI image to obtain a plurality of first feature maps, wherein the plurality of first feature maps are the output items of different network layers of the feature extraction module and differ in image size; that is, multi-scale first feature maps are extracted through the feature extraction module. As shown in fig. 2, the feature extraction module includes a convolution unit, a pooling unit, and a plurality of downsampling units that are sequentially cascaded, where the plurality of first feature maps includes the output item of the convolution unit and the output items of the plurality of downsampling units; that is, the output item of the convolution unit and the output items of the downsampling units are used as first feature maps, so as to obtain a plurality of first feature maps.
As shown in fig. 2, each downsampling unit comprises a three-dimensional convolution block and a plurality of three-dimensional identity modules which are sequentially cascaded, wherein each three-dimensional identity module comprises a plurality of cascaded three-dimensional convolution layers and an adder, and the input items of the adder are the output item of the last three-dimensional convolution layer and the input item of the foremost three-dimensional convolution layer. In this embodiment, the three-dimensional identity module in the feature extraction module adopts a residual structure; that is, the output item of the last three-dimensional convolution layer and the input item of the foremost three-dimensional convolution layer are connected by a shortcut through the adder, which improves the stability of computation without adding network parameters or computation, and can greatly improve the training speed and model performance of the prediction network model. In addition, in practical applications, to reduce the amount of computation, the three-dimensional identity module and the three-dimensional convolution block in this embodiment both adopt bottleneck residual modules; that is, a bottleneck structure is added in the branch connected through the adder shortcut.
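A three-dimensional identity module of this kind, with a 1×1×1 → 3×3×3 → 1×1×1 bottleneck and an adder joining the last convolution's output to the block input, might be sketched as follows in PyTorch (the channel counts are illustrative assumptions):

```python
import torch
import torch.nn as nn

class Bottleneck3D(nn.Module):
    """3D identity module: bottleneck convolutions plus a residual shortcut,
    where the adder sums the last conv layer's output and the block input."""

    def __init__(self, channels: int, mid: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, mid, 1, bias=False), nn.BatchNorm3d(mid), nn.ReLU(inplace=True),
            nn.Conv3d(mid, mid, 3, padding=1, bias=False), nn.BatchNorm3d(mid), nn.ReLU(inplace=True),
            nn.Conv3d(mid, channels, 1, bias=False), nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # adder: output of the last 3D conv layer + input of the foremost layer
        return self.relu(self.body(x) + x)
```

Because the shortcut is an identity, the block's input and output shapes are equal, so several such modules can be cascaded directly.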
The convolution unit includes a convolution layer, a batch normalization layer and an activation layer; the convolution kernel of the convolution layer is 7×7 with a stride of 2, and the activation layer adopts rectified linear units (ReLU). The pooling unit comprises a max pooling layer with a kernel size of 3×3×3 and a stride of 2.
The number of three-dimensional identity modules in each downsampling unit may be equal or different. For example, as shown in fig. 2, the feature extraction module includes four downsampling units, respectively denoted layer4, layer3, layer2 and layer1, where layer4 includes 2 three-dimensional identity modules, layer3 includes 3, layer2 includes 3, and layer1 includes 2. As shown in fig. 2, the three-dimensional identity module includes 3 three-dimensional convolution layers whose convolution kernels are 1×1×1, 3×3×3 and 1×1×1 respectively, and the short-circuit connection between the input item and the last three-dimensional convolution layer carries a bottleneck structure h(x). The three-dimensional convolution block comprises a plurality of cascaded three-dimensional convolution layers, an adder and a target three-dimensional convolution layer, where the input of the target three-dimensional convolution layer is the input of the foremost three-dimensional convolution layer, and the inputs of the adder are the output of the last three-dimensional convolution layer and the output of the target three-dimensional convolution layer; adding the target three-dimensional convolution layer on the short-circuit connection makes the output tensor smaller than the input tensor and reduces the amount of computation. In one implementation, as shown in fig. 2, the three-dimensional convolution block includes 3 three-dimensional convolution layers with convolution kernels of 1×1×1, 3×3×3 and 1×1×1, the convolution kernel of the target three-dimensional convolution layer is 1×1×1, and the input of the three-dimensional convolution block passes through the bottleneck structure h(x) before entering the target three-dimensional convolution layer.
In this embodiment, two 1×1×1 three-dimensional convolution layers are used in the three-dimensional identity module and the three-dimensional convolution block to reduce and then restore the dimensionality of the feature map, so that the 3×3×3 three-dimensional convolution operates on an input of relatively low dimensionality. This leaves the input and output of the original convolution layers unaffected while reducing the amount of computation and improving computational efficiency.
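To make the structure concrete, a minimal PyTorch sketch of the three-layer bottleneck identity module (1×1×1 reduce, 3×3×3 at the reduced width, 1×1×1 restore, plus the adder shortcut) follows. The channel widths and the batch-normalization/ReLU placement are illustrative assumptions not fixed by the text:

```python
import torch
import torch.nn as nn

class Identity3D(nn.Module):
    """Sketch of the three-layer 3D identity module: a 1x1x1 conv reduces the
    channel dimension, a 3x3x3 conv operates at the reduced width, a 1x1x1
    conv restores it, and an adder short-circuits input to output."""
    def __init__(self, channels, reduced):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, reduced, kernel_size=1, bias=False),   # reduce
            nn.BatchNorm3d(reduced), nn.ReLU(inplace=True),
            nn.Conv3d(reduced, reduced, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm3d(reduced), nn.ReLU(inplace=True),
            nn.Conv3d(reduced, channels, kernel_size=1, bias=False),   # restore
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # the adder: output of the last conv layer + input of the foremost one
        return self.relu(self.body(x) + x)

y = Identity3D(64, 16)(torch.randn(1, 64, 8, 16, 16))
```

Because the 3×3×3 convolution runs on the reduced channel count, the block keeps parameters and computation low while the input and output shapes stay identical, which is what allows the shortcut addition.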
S30, inputting a plurality of first feature maps into a segmentation module in the prediction network model, and determining a mask map corresponding to the MRI image through the segmentation module.
The segmentation module is connected with the feature extraction module and is used for determining the mask map corresponding to the MRI image based on the plurality of first feature maps determined by the feature extraction module, where the mask map carries the focus area.
The segmentation module restores the feature map size by deconvolution, and uses skip connections to combine the corresponding first feature map from the feature extraction module with the upsampled feature map so as to obtain more context information. The segmentation module comprises a first upsampling unit, a plurality of second upsampling units and two third upsampling units which are sequentially cascaded. The first upsampling unit is connected with the last downsampling unit; of the two third upsampling units, the front one is skip-connected with the foremost downsampling unit and the back one with the convolution unit; the second upsampling units correspond one-to-one with the downsampling units located between the last and the foremost downsampling units, each being skip-connected with its corresponding downsampling unit.
In an exemplary implementation, as shown in fig. 2, the segmentation module includes a second upsampling unit a, a second upsampling unit b, a third upsampling unit c and a third upsampling unit d, and the feature extraction module includes four downsampling units, respectively denoted layer4, layer3, layer2 and layer1. The first upsampling unit is connected to layer4; the second upsampling unit a is connected with the first upsampling unit and skip-connected with layer3; the second upsampling unit b is connected with the second upsampling unit a and skip-connected with layer2; the third upsampling unit c is connected with the second upsampling unit b and skip-connected with layer1; and the third upsampling unit d is connected with the third upsampling unit c and skip-connected with the convolution unit. In this embodiment, each skip connection is realized through an adder. The first upsampling unit comprises an upsampling layer; the second upsampling unit comprises a convolution block and an upsampling layer; and the third upsampling unit comprises a convolution block, an upsampling layer and a convolution layer, where the convolution block comprises 3 cascaded convolution layers.
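A minimal sketch of one decoder step, assuming the upsampling layer is a transposed convolution (one common realization of deconvolution) and the skip connection is the adder described above; the channel counts are hypothetical:

```python
import torch
import torch.nn as nn

class UpsampleUnit(nn.Module):
    """One decoder step: a transposed convolution doubles the feature-map
    size, and an adder merges the skip-connected encoder feature map."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, x, skip):
        return self.up(x) + skip  # skip (jump) connection through an adder

out = UpsampleUnit(64, 32)(torch.randn(1, 64, 4, 8, 8),
                           torch.randn(1, 32, 8, 16, 16))
```

With kernel size 2 and stride 2 the transposed convolution exactly doubles each spatial dimension, so the upsampled map matches the encoder feature map and the two can be summed elementwise.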
S40, the mask image and the MRI image are fused to obtain a fused image, the fused image is input into a feature extraction module, and a second feature image corresponding to the fused image is determined through the feature extraction module.
Specifically, the image size of the mask map is the same as that of the MRI image, and the fused image is obtained by superposing the two: the mask map and the MRI image can be superposed directly, or according to a preset weight coefficient. In this embodiment, the fused image is obtained by superposing the mask map and the MRI image with a preset weight coefficient, which may be set in advance or learned during the training of the prediction network model.
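As a toy illustration of the weighted superposition; the additive form and the weight value are assumptions, since the text leaves the exact superposition scheme open:

```python
import numpy as np

def fuse(mri, mask, w=0.5):
    """Superpose the mask map on the MRI image with a preset weight
    coefficient w; the two image sizes must match."""
    assert mri.shape == mask.shape
    return mri + w * mask

mri = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0          # toy focus area
fused = fuse(mri, mask)
```

In the learned-weight variant, `w` would be a trainable scalar (or map) optimized together with the rest of the prediction network model.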
After the fused image is obtained, it is used as an input item of the feature extraction module, and the features of the fused image are extracted by the feature extraction module to obtain a second feature map. The second feature map is the feature map with the smallest image size among all the feature maps extracted from the fused image, that is, the output of the last downsampling unit in the feature extraction module.
S50, inputting the second feature map into a classification module in the prediction network model, and determining the bladder cancer myolayer invasion category corresponding to the MRI image through the classification module.
Specifically, the classification module is used for predicting the bladder cancer myolayer invasion category corresponding to the MRI image, where the categories include Ta (non-invasive papillary carcinoma), T1 (tumor invades subepithelial connective tissue), Tis (carcinoma in situ, i.e., flat tumor), T2 (tumor invades the muscle layer), and NxMx (regional lymph nodes and distant metastasis cannot be assessed).
The classification module comprises a cascaded pooling unit and classification unit; the pooling unit is connected with the feature extraction module and comprises an average pooling layer, and the classification unit comprises a fully connected layer connected with the pooling unit and an activation layer connected with the fully connected layer. The average pooling layer converts the second feature map into a feature vector, and the fully connected layer and activation layer output the prediction probabilities used to determine the bladder cancer myolayer invasion category; the activation layer may be configured with a Sigmoid activation function.
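A minimal sketch of this classification head; the input channel count and the number of invasion categories are assumed from context:

```python
import torch
import torch.nn as nn

class ClassificationModule(nn.Module):
    """Average pooling turns the second feature map into a feature vector;
    a fully connected layer plus Sigmoid outputs prediction probabilities."""
    def __init__(self, in_ch=512, n_classes=5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)    # average pooling layer
        self.fc = nn.Linear(in_ch, n_classes)  # fully connected layer
        self.act = nn.Sigmoid()                # activation layer

    def forward(self, feat):
        v = self.pool(feat).flatten(1)         # feature vector
        return self.act(self.fc(v))            # prediction probabilities

probs = ClassificationModule()(torch.randn(2, 512, 1, 4, 4))
```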
In order to further explain the predictive network model employed in the present embodiment, a training process of the predictive network model is described below. The training process of the prediction network model specifically comprises the following steps:
h10, acquiring a sample image set, wherein the sample image set comprises a plurality of sample images and gold standards corresponding to the sample images;
And H20, training the preset network model based on the sample images in the sample image set and the corresponding gold standards thereof to obtain a predicted network model.
Specifically, in step H10, the sample image is a T2WI sequence image from the MRI of a bladder cancer patient, and the gold standard includes a focal region gold standard and a bladder cancer myolayer invasion category gold standard, where the focal region gold standard may be the focal region manually annotated on the T2WI sequence image of the bladder cancer patient's MRI, and the invasion category gold standard is the bladder cancer myolayer invasion category determined by pathological evaluation.
In a specific embodiment, the sample image set comprises a training set and a test set. The training set includes 93 T2WI sequence images of 60 pathologically diagnosed BCa patients acquired with a Siemens 3.0T superconducting magnetic resonance imaging system MAGNETOM Skyra: TR=7500 ms, TE=101 ms, slice thickness=4.0 mm, slice spacing=0.4 mm, FOV=20 cm, matrix=320×320, number of excitations=2. The test set includes 28 T2WI sequence images of 28 bladder cancer patients acquired with a United Imaging Healthcare 3.0T magnetic resonance imaging system UMR780: TR=4000 ms, TE=120 ms, slice thickness=3.0 mm, slice spacing=0.6 mm, FOV=20 cm, matrix=336×269, number of excitations=1.5.
Further, after the sample image set is obtained, each sample image is subjected to data preprocessing comprising image resizing and normalization: resizing makes all sample images the same size to meet the input requirements of the prediction network model, while normalization reduces the differences between data and speeds up network convergence. The specific resizing and normalization process is as follows:
the cross-sectional (x-y plane) size of the sample image is scaled to 128×128 voxels with the z-axis direction unchanged; the scaled sample image is then max-min standardized, mapping its gray values into the range 0-1, where the max-min standardization formula is:
x' = (x - min) / (max - min)
where x represents the sample image, min represents the voxel minimum value of the sample image, and max represents the voxel maximum value of the sample image.
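The max-min standardization above can be sketched as:

```python
import numpy as np

def max_min_normalize(x):
    """Map the gray values of a sample image into the range 0-1
    using (x - min) / (max - min)."""
    return (x - x.min()) / (x.max() - x.min())

norm = max_min_normalize(np.array([[10.0, 20.0], [30.0, 50.0]]))
```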
Further, due to limited computing resources, a complete sample image cannot be input into the prediction network model, so after the sample image is acquired it must be divided into image blocks. The division process is as follows: the sample image is divided along the z-axis into a plurality of image blocks (patches) with an identical number of layers, and each image block is treated as one sample image. The division rule is: slide along the z-axis of the focus image in 5-voxel steps and group every 8 layers into one patch, so each patch has a size of 128×128×8; if the z-axis has fewer than 8 layers, zero padding is applied at both ends of the patch. In addition, to avoid wasting data when sliding cannot capture the last few layers of the focus, one more patch is taken from back to front after sliding is finished, so that the divided image blocks cover all layers of the focus.
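The patch-division rule above can be sketched as follows, assuming the volume is ordered (z, y, x):

```python
import numpy as np

def divide_patches(volume, depth=8, step=5):
    """Slide along the z-axis in `step`-voxel increments taking `depth`
    layers per patch; zero-pad thinner volumes at both ends; add one extra
    patch taken from the back so the last layers are always covered."""
    z = volume.shape[0]
    if z < depth:
        pad = depth - z
        return [np.pad(volume, ((pad // 2, pad - pad // 2), (0, 0), (0, 0)))]
    starts = list(range(0, z - depth + 1, step))
    if starts[-1] != z - depth:          # extra patch from back to front
        starts.append(z - depth)
    return [volume[s:s + depth] for s in starts]

patches = divide_patches(np.zeros((20, 128, 128)))
```

For a 20-layer volume the sliding starts are 0, 5 and 10, and the back-to-front patch starting at layer 12 is appended so layers 18-19 are not lost.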
Further, in order to alleviate the unbalance problem of each type of sample image, after the sample image is divided, each image block obtained by division may be subjected to data enhancement, where the data enhancement may include one or more of horizontal flipping, vertical flipping, random cropping, image scaling, translation and rotation.
In step H20, in the training process of the prediction network model, a Focal loss function is used to alleviate the problem of imbalance between the positive and negative sample categories in the classification and segmentation tasks, where the classification loss function is:
p_t = p, if y = 1; p_t = 1 - p, otherwise

α_t = α, if y = 1; α_t = 1 - α, otherwise

L = -α_t·(1 - p_t)^γ·log(p_t)
where α_t is a weight coefficient used to assign different weights to positive and negative samples, (1 - p_t)^γ is the modulation factor, and p_t is the probability that the sample to be classified is a positive sample. When a sample is misclassified, p_t tends to 0, the modulation factor tends to 1, and the loss L is hardly affected; when a sample is correctly classified, p_t tends to 1, the modulation factor tends to 0, and the weight of that sample's loss within the total loss is reduced. γ is a focusing parameter that adjusts the rate at which easily classified samples are down-weighted. In one exemplary implementation, α in the classification loss function may be set to 0.2 and γ may be set to 2.
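A NumPy sketch of the focal loss defined above, using the stated α=0.2 and γ=2:

```python
import numpy as np

def focal_loss(p, y, alpha=0.2, gamma=2.0):
    """Focal loss L = -alpha_t * (1 - p_t)**gamma * log(p_t); p is the
    predicted positive-class probability and y the 0/1 label."""
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# a well-classified positive sample contributes far less loss than a
# misclassified one, because of the (1 - p_t)**gamma modulation factor
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.05]), np.array([1]))
```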
Further, the training process of the prediction network model may be divided into two stages: in the first stage, the prediction network model is trained by backpropagating only the segmentation loss function; in the second stage, it is trained by backpropagating both the segmentation loss function and the classification loss function. In one implementation, the first stage comprises the first 500 iterations and the second stage all iterations thereafter, where the total loss function of the first stage equals the segmentation loss function, and the total loss function of the second stage equals the segmentation loss function plus the classification loss function weighted by a preset coefficient.
Based on this, the training process total loss function calculation formula may be:
L_total = L_seg, when the number of iterations ≤ 500

L_total = L_seg + λ·L_cls, when the number of iterations > 500 (λ being the preset weight coefficient)
wherein L_total represents the total loss function, L_seg represents the segmentation loss function, and L_cls represents the classification loss function.
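The two-stage schedule can be sketched as follows; the 500-iteration switch point is from the text, while the weight value is an assumption:

```python
def total_loss(l_seg, l_cls, iteration, switch=500, weight=1.0):
    """First stage (iteration <= switch): segmentation loss only.
    Second stage: segmentation loss plus weighted classification loss."""
    if iteration <= switch:
        return l_seg
    return l_seg + weight * l_cls
```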
In addition, the other training parameters are set as follows: the optimizer is Adam; the learning rate varies between 1e-5 and 1e-3 under a cosine annealing learning rate decay strategy; the number of iterations is set to 1000; and the batch size is set to 8. The models saved after the loss on the training set stabilizes are tested on the validation set, and model performance is evaluated according to the final predicted value of each validation sample so as to select an optimal model. Finally, the optimal model selected on the validation set is applied to the external independent test set for testing.
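The optimizer and learning-rate settings above can be sketched in PyTorch as follows; the model stand-in and the T_max cycle length are assumptions (T_max=1000 matches the stated iteration count but is not fixed by the text):

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for the prediction network model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# cosine annealing between the stated 1e-3 and 1e-5 learning-rate bounds
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=1000, eta_min=1e-5)

for _ in range(500):           # halfway through the 1000-iteration schedule
    optimizer.step()
    scheduler.step()
current_lr = optimizer.param_groups[0]["lr"]
```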
After the prediction network model is obtained, to illustrate its performance, five-fold cross-validation is performed on the 93 samples in the training set, giving the internal cross-validation result: the area under the receiver operating characteristic (ROC) curve (AUC) reaches 0.932 and the accuracy reaches 0.946. Then, to evaluate the generalization ability of the algorithm, an external independent test is performed on the 28 sample images in the test set using the prediction network model built on the training set, giving the external independent validation result: the AUC reaches 0.932 and the accuracy reaches 0.857.
In addition, to illustrate the reliability of the prediction network model, this embodiment uses the Grad-CAM feature visualization method to visualize the output features of the last convolution layer of the prediction network model and analyzes them by displaying the regions of interest of the network. The results show that for the same sample image, a conventional single-task deep learning classification model may focus on soft tissues other than the lesion tissue, whereas the multi-task learning model, aided by the introduced segmentation task, focuses more accurately on the focus area and especially the focus edge, which is related to tumor infiltration, invasion and expansion. This indicates that introducing the focus segmentation task may help the network pay more attention to the focus area, improving the classification performance of the prediction network model and thus the accuracy of the predicted bladder cancer myolayer invasion category.
In summary, the present embodiment provides a bladder cancer muscular layer invasion prediction method based on multitask learning, including: determining a plurality of first feature maps corresponding to the MRI image through a feature extraction module in a prediction network model; inputting a plurality of first feature maps into a segmentation module in the prediction network model, and determining a mask map corresponding to the MRI image through the segmentation module; the mask image and the MRI image are fused to obtain a fused image, the fused image is input into a feature extraction module, and a second feature image corresponding to the fused image is determined through the feature extraction module; and inputting the second feature map into a classification module in the prediction network model, and determining the bladder cancer myolayer invasion category corresponding to the MRI image through the classification module. According to the method and the device, the multi-task prediction network model is adopted, the bladder cancer muscle layer invasion category is determined through the MRI image, so that the classification module can learn focus information extracted by the segmentation module, the classification performance of the classification module can be improved, and the accuracy of the obtained bladder cancer muscle layer invasion category can be improved.
Based on the above-mentioned bladder cancer muscular layer invasion prediction method based on multi-task learning, the present embodiment provides a bladder cancer muscular layer invasion prediction system based on multi-task learning, as shown in fig. 3, the system includes:
An acquisition module 100 for acquiring an MRI image, wherein the MRI image carries an image of a bladder;
the control module 200 is used for controlling the feature extraction module in the pre-trained prediction network model to determine a plurality of first feature maps corresponding to the MRI image; controlling a segmentation module in the prediction network model to determine a mask map corresponding to the MRI image based on a first feature map; the mask map and the MRI image are fused to obtain a fused image, and the feature extraction module is controlled to determine a second feature map corresponding to the fused image based on the fused image; and controlling a classification module in the prediction network model to determine the bladder cancer myolayer invasion category corresponding to the MRI image based on a second feature map.
Based on the above-described method for predicting bladder cancer muscular layer invasion based on multi-task learning, the present embodiment provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps in the method for predicting bladder cancer muscular layer invasion based on multi-task learning as described in the above-described embodiment.
Based on the above-mentioned bladder cancer muscular layer invasion prediction method based on multitasking learning, the present application also provides a terminal device, as shown in fig. 4, which includes at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, which may also include a communication interface (Communications Interface) 23 and a bus 24. Wherein the processor 20, the display 21, the memory 22 and the communication interface 23 may communicate with each other via a bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may invoke logic instructions in the memory 22 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 22 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 22, as a computer readable storage medium, may be configured to store a software program, a computer executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 performs functional applications and data processing, i.e. implements the methods of the embodiments described above, by running software programs, instructions or modules stored in the memory 22.
The memory 22 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, the memory 22 may include high-speed random access memory, and may also include nonvolatile memory. For example, a plurality of media capable of storing program codes such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or a transitory storage medium may be used.
In addition, the specific processes that the storage medium and the plurality of instruction processors in the terminal device load and execute are described in detail in the above method, and are not stated here.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A method for predicting bladder cancer myolayer invasion based on multitasking learning, the method comprising:
acquiring an MRI image, wherein the MRI image carries an image of a bladder;
inputting the MRI image into a feature extraction module in a pre-trained prediction network model, and determining a plurality of first feature graphs corresponding to the MRI image through the feature extraction module;
inputting a plurality of first feature maps into a segmentation module in the prediction network model, and determining a mask map corresponding to the MRI image through the segmentation module;
the mask image and the MRI image are fused to obtain a fused image, the fused image is input into a feature extraction module, and a second feature image corresponding to the fused image is determined through the feature extraction module;
and inputting the second feature map into a classification module in the prediction network model, and determining the bladder cancer myolayer invasion category corresponding to the MRI image through the classification module.
2. The bladder cancer muscle layer invasion prediction method based on multi-task learning according to claim 1, wherein the feature extraction module comprises a convolution unit, a pooling unit and a plurality of downsampling units which are sequentially cascaded, wherein the downsampling unit comprises a three-dimensional convolution block and a plurality of three-dimensional identity modules which are sequentially cascaded, the three-dimensional identity module comprises a plurality of three-dimensional convolution layers which are cascaded and an adder, and an input item of the adder is an output item of a last three-dimensional convolution layer and an input item of a foremost three-dimensional convolution layer.
3. The bladder cancer muscle layer invasion prediction method based on multi-task learning according to claim 2, wherein the three-dimensional convolution block comprises a plurality of cascaded three-dimensional convolution layers, an adder and a target three-dimensional convolution layer, wherein the input item of the target three-dimensional convolution layer is the input item of the three-dimensional convolution layer positioned at the forefront, and the input item of the adder is the output item of the three-dimensional convolution layer positioned at the last and the output item of the target three-dimensional convolution layer.
4. The method for predicting the invasion of a bladder cancer myolayer based on multi-task learning according to claim 2, wherein the segmentation module comprises a first upsampling unit, a plurality of second upsampling units and two third upsampling units which are sequentially cascaded; the up-sampling units are connected with the last down-sampling unit, the third up-sampling unit positioned at the back of the two third up-sampling units is connected with the front-most down-sampling unit, and the third up-sampling unit positioned at the front is connected with the convolution unit; the second upsampling units are in one-to-one correspondence with the downsampling units in the last downsampling unit and the forefront downsampling unit, and are connected with the downsampling units corresponding to the downsampling units.
5. The method for predicting the invasion of a bladder cancer myolayer based on multi-task learning according to claim 1, wherein the classification module comprises a pooling unit and a classification unit which are cascaded in sequence, and the pooling unit is connected with the feature extraction module.
6. The method for predicting the invasion of a bladder cancer myolayer based on multitask learning according to claim 1, wherein a loss function adopted in the training process of the prediction network model is as follows:
L_total = L_seg (first training stage); L_total = L_seg + λ·L_cls (second training stage)
wherein L_total represents the total loss function, L_seg represents the segmentation loss function, and L_cls represents the classification loss function.
7. The method for predicting a bladder cancer myolayer invasion based on multitasking learning of claim 6, wherein the classification loss function L_cls is:

L_cls = -α_t·(1 - p_t)^γ·log(p_t)

p_t = p, if y = 1; p_t = 1 - p, otherwise

α_t = α, if y = 1; α_t = 1 - α, otherwise
wherein p_t represents the probability that the sample to be classified is a positive sample, α_t represents a weight coefficient, and α and γ are preset parameters.
8. A bladder cancer myolayer invasion prediction system based on multitasking learning, the system comprising:
the system comprises an acquisition module, a detection module and a detection module, wherein the acquisition module is used for acquiring an MRI image, wherein the MRI image carries a bladder image;
the control module is used for controlling the feature extraction module in the pre-trained prediction network model to determine a plurality of first feature maps corresponding to the MRI image; controlling a segmentation module in the prediction network model to determine a mask map corresponding to the MRI image based on a first feature map; the mask map and the MRI image are fused to obtain a fused image, and the feature extraction module is controlled to determine a second feature map corresponding to the fused image based on the fused image; and controlling a classification module in the prediction network model to determine the bladder cancer myolayer invasion category corresponding to the MRI image based on a second feature map.
9. A computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the multitasking learning-based bladder cancer myolayer invasion prediction method of any of claims 1-7.
10. A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the method for predicting a bladder cancer myolayer violation based on multitasking learning as claimed in any of claims 1-7.
CN202211607134.0A 2022-12-14 2022-12-14 Bladder cancer myolayer invasion prediction method based on multitask learning and related device Active CN116051467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211607134.0A CN116051467B (en) 2022-12-14 2022-12-14 Bladder cancer myolayer invasion prediction method based on multitask learning and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211607134.0A CN116051467B (en) 2022-12-14 2022-12-14 Bladder cancer myolayer invasion prediction method based on multitask learning and related device

Publications (2)

Publication Number Publication Date
CN116051467A true CN116051467A (en) 2023-05-02
CN116051467B CN116051467B (en) 2023-11-03

Family

ID=86115416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211607134.0A Active CN116051467B (en) 2022-12-14 2022-12-14 Bladder cancer myolayer invasion prediction method based on multitask learning and related device

Country Status (1)

Country Link
CN (1) CN116051467B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117894A (en) * 2018-08-29 2019-01-01 汕头大学 A kind of large scale remote sensing images building classification method based on full convolutional neural networks
CN111754446A (en) * 2020-06-22 2020-10-09 怀光智能科技(武汉)有限公司 Image fusion method, system and storage medium based on generation countermeasure network
CN113450408A (en) * 2021-06-23 2021-09-28 中国人民解放军63653部队 Irregular object pose estimation method and device based on depth camera
CN113724231A (en) * 2021-09-01 2021-11-30 广东工业大学 Industrial defect detection method based on semantic segmentation and target detection fusion model
CN114240839A (en) * 2021-11-17 2022-03-25 东莞市人民医院 Bladder tumor muscle layer invasion prediction method based on deep learning and related device
WO2022160202A1 (en) * 2021-01-28 2022-08-04 深圳市锐明技术股份有限公司 Method and apparatus for inspecting mask wearing, terminal device and readable storage medium
CN114943840A (en) * 2022-06-16 2022-08-26 京东科技信息技术有限公司 Training method of machine learning model, image processing method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Hui; CHU Na; CHEN Peng: "Ship target detection in SAR images under complex scenes", Journal of Dalian Maritime University, No. 03 *

Also Published As

Publication number Publication date
CN116051467B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
Zhang et al. ME-Net: multi-encoder net framework for brain tumor segmentation
US20230267611A1 (en) Optimization of a deep learning model for performing a medical imaging analysis task
Charron et al. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
JP2022544229A (en) 3D Object Segmentation of Localized Medical Images Using Object Detection
CN111008984B (en) Automatic contour line drawing method for normal organ in medical image
Cinar et al. A hybrid DenseNet121-UNet model for brain tumor segmentation from MR Images
CN107563434B (en) Brain MRI image classification method and device based on three-dimensional convolutional neural network
WO2021179491A1 (en) Image processing method and apparatus, computer device and storage medium
CN108629785B (en) Three-dimensional magnetic resonance pancreas image segmentation method based on self-learning
Narayanan et al. Multi-channeled MR brain image segmentation: A novel double optimization approach combined with clustering technique for tumor identification and tissue segmentation
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
KR102328229B1 (en) AI based tumor detection and diagnostic method using 3D medical image
CN110751187B (en) Training method of abnormal area image generation network and related product
CN113706486A (en) Pancreas tumor image segmentation method based on dense connection network migration learning
CN114332132A (en) Image segmentation method and device and computer equipment
Zaridis et al. Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones
Liu et al. Weakly-supervised localization and classification of biomarkers in OCT images with integrated reconstruction and attention
CN111784652B (en) MRI (magnetic resonance imaging) segmentation method based on reinforcement learning multi-scale neural network
CN114693671B (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN116051467B (en) Bladder cancer myolayer invasion prediction method based on multitask learning and related device
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN111971751A (en) System and method for evaluating dynamic data
CN115375787A (en) Artifact correction method, computer device and readable storage medium
Nour et al. Skin lesion segmentation based on edge attention vnet with balanced focal tversky loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant