CN114820520A - Prostate image segmentation method and intelligent prostate cancer auxiliary diagnosis system - Google Patents
- Publication number: CN114820520A
- Application number: CN202210450439.9A
- Authority: CN (China)
- Prior art keywords: image, prostate, segmentation, network, candidate region
- Legal status: Pending (an assumption by Google, not a legal conclusion)
Classifications
- G06T 7/0012: Image analysis; inspection of images; biomedical image inspection
- G06F 18/253: Pattern recognition; fusion techniques of extracted features
- G06N 3/045: Neural networks; architecture; combinations of networks
- G06N 3/048: Neural networks; activation functions
- G06T 2207/10016: Image acquisition modality; video, image sequence
- G06T 2207/10072, 2207/10088: Image acquisition modality; tomographic images, magnetic resonance imaging [MRI]
- G06T 2207/20081: Special algorithmic details; training, learning
- G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
- G06T 2207/30081: Subject of image; biomedical image processing; prostate
- G06T 2207/30096: Subject of image; biomedical image processing; tumor, lesion
Abstract
The invention provides a prostate image segmentation method and an intelligent auxiliary diagnosis system for prostate cancer, belonging to the technical field of intelligent auxiliary diagnosis of prostate cancer. The scheme comprises the following steps: segmenting the prostate image to be diagnosed with the prostate image segmentation method; gridding the segmented prostate image and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result for each grid image; merging the grid images that contain tumor tissue to obtain a merged image, and performing lesion probability classification and boundary segmentation within the merged region; extracting high-order features from the merged image with a deep learning model while performing image omics (radiomics) feature extraction on the prostate image to be diagnosed; and performing secondary boundary segmentation on boundary regions where tissues of different probabilities blend into one another, based on the extracted high-order and image omics features, to obtain a lesion probability distribution result for the prostate image.
Description
Technical Field
The disclosure belongs to the technical field of intelligent auxiliary diagnosis of prostate cancer, and particularly relates to a prostate image segmentation method and an intelligent auxiliary diagnosis system of prostate cancer.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
At present, the main clinical screening means for prostate cancer are serum prostate-specific antigen (PSA) testing and digital rectal examination. However, measured serum PSA levels are influenced by many factors in practice, such as digital rectal examination, indwelling catheters, prostatitis, sexual activity, and the use of finasteride, so relying on serum PSA alone introduces considerable error into the estimated detection rate of prostate cancer and leads to missed diagnoses. Digital rectal examination, although simple, easy to perform, and non-invasive, depends largely on subjective judgment and can only detect tumors close to the posterior capsule of the prostate, so it too has considerable limitations in the clinical diagnosis of prostate cancer.
Meanwhile, prostate mpMRI produces a large amount of image data, and the diagnosis process requires manual interpretation by physicians, consuming considerable time and labor. In addition, manual diagnosis is subjective and, depending on the physician's experience, prone to misdiagnosis.
Disclosure of Invention
In order to solve the above problems, this scheme provides a prostate image segmentation method and an intelligent auxiliary diagnosis system for prostate cancer that accurately identify the prostate cancer region from multi-modal mpMRI medical image data. First, the mpMRI image is segmented by a region classification method; then prostate cancer lesions are identified by a lightweight convolutional neural network whose performance is optimized with an evolutionary algorithm. On one hand, the segmentation method effectively improves the accuracy of image segmentation; on the other hand, the diagnosis system effectively improves the accuracy of auxiliary clinical prostate cancer diagnosis.
According to a first aspect of embodiments of the present disclosure, there is provided a prostate image segmentation method, including:
acquiring a prostate mpMRI image sequence to be segmented;
respectively extracting a plurality of different-level feature maps of MRI images of different sequences based on a pre-trained feature extraction network model, and forming a series feature map based on a series connection mode;
based on the obtained serial feature map and the attention mechanism, re-calibrating the feature channel weight of the serial feature map to obtain a fusion feature map for fusing the effective information of the mpMRI image;
obtaining a candidate regional feature map through a regional suggestion network and a regional feature aggregation network based on the fusion feature map;
and obtaining the segmentation result of the detection target in the candidate region through a pre-trained image segmentation network based on the obtained candidate region characteristic graph.
Further, the feature channel weights of the series feature map are recalibrated based on the obtained series feature map and the attention mechanism, specifically:
the attention mechanism adopts an SE module, and correlation modeling is performed on the different feature channels of each series feature map through the SE module to calibrate the weights of the different feature channels; the spatial information of each feature channel is compressed into a channel descriptor by a global pooling layer; the interdependency among the feature channels is modeled with a fully connected layer and a ReLU layer, and the channel weights are obtained through a sigmoid activation function; the obtained weights are then multiplied onto the feature channels by a scaling operation, recalibrating the channel weights to obtain the fusion feature map.
Further, based on the obtained candidate region feature map, obtaining a segmentation result of the detection target in the candidate region through a pre-trained image segmentation network; the method comprises the following specific steps:
correcting the candidate region by utilizing a frame regression algorithm for the obtained candidate region feature map; and obtaining the class and the probability of the detection target contained in the candidate region based on the region classification branch network, and obtaining the segmentation result of the detection target in the candidate region through a pre-trained image segmentation network.
Further, the obtaining of the candidate region feature map through the region proposal network and the region feature aggregation network is specifically: a candidate region is obtained through an RPN (Region Proposal Network) based on the fusion feature map, and the candidate region feature map is obtained by the RoIAlign method.
Further, the different sequences include an ADC image and a T2W image;
or, alternatively,
the feature extraction network model adopts two SE-ResNet networks in parallel;
or, alternatively,
the candidate region is corrected based on a frame regression algorithm applied to the obtained candidate region feature map, and the class and probability value of the detection target contained in the candidate region are output.
According to a second aspect of the embodiments of the present disclosure, there is provided a prostate cancer intelligent auxiliary diagnosis system, including:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts the prostate image segmentation method;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out lesion probability classification and boundary segmentation in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where different probability tissues are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
Further, the gridding processing is performed on the segmented prostate image, and each grid image is input into a pre-trained tumor judgment network model, specifically: dividing the segmented prostate image into grids with the size of g × g, traversing in the grid image by adopting the step length of g/2, and inputting each traversal result into the tumor judgment network model to obtain the tumor judgment result of each grid image.
Further, the tumor judgment model adopts a LocNet network model;
or
And based on the combined image, adopting a deep learning model to extract high-order features, specifically adopting a CNN network model.
Further, for the training process of the deep learning model, network parameters and structures of the deep learning model are optimized based on an evolutionary algorithm.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including a memory, a processor, and a computer program stored in the memory for execution, the processor implementing the following steps when executing the program:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts the prostate image segmentation method;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out probability classification and boundary segmentation on the focus in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts the prostate image segmentation method;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out probability classification and boundary segmentation on the focus in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
Compared with the prior art, the beneficial effects of the present disclosure are:
(1) The scheme of the present disclosure provides a prostate image segmentation method and an intelligent prostate cancer auxiliary diagnosis system that accurately identify the prostate cancer region from multi-modal mpMRI medical image data. First, the mpMRI images are segmented by a region classification method; then prostate cancer lesions are identified by a lightweight convolutional neural network whose performance is optimized with an evolutionary algorithm. On one hand, the segmentation method effectively improves the accuracy of image segmentation; on the other hand, the diagnosis system effectively improves the accuracy of auxiliary clinical prostate cancer diagnosis.
(2) The disclosed scheme enables standardized and efficient reading of prostate mpMRI: it completes image data preprocessing, identifies and marks suspected prostate nodules, optimizes neural network classification performance, improves prostate cancer diagnosis accuracy, and ultimately raises the auxiliary value of mpMRI in the prostate cancer diagnosis process. It effectively overcomes the limitation that traditional mpMRI interpretation relies solely on the radiologist's experience, can form a unified standard for reading multi-parametric magnetic resonance images, and improves both reading efficiency and diagnostic accuracy for prostate cancer.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
Fig. 1 is a flow chart of a prostate image segmentation method according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating the operation of an intelligent diagnosis assisting system for prostate cancer according to an embodiment of the present disclosure;
FIG. 3 is a SE-block structure diagram described in embodiments of the present disclosure;
FIG. 4 is a schematic diagram of an image segmentation network model structure according to an embodiment of the present disclosure;
fig. 5 is a LocNet model structure described in an embodiment of the present disclosure;
fig. 6 is an ECNN network structure described in embodiments of the present disclosure;
fig. 7(a) is a schematic structural diagram of a ResNet module described in the embodiments of the present disclosure;
fig. 7(b) is a flow chart of the evolutionary algorithm described in the embodiments of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
The first embodiment is as follows:
the object of the present embodiment is to provide a prostate image segmentation method.
As shown in fig. 1, there is provided a prostate image segmentation method, including:
acquiring a prostate mpMRI image sequence to be segmented;
respectively extracting a plurality of different-level feature maps of MRI images of different sequences based on a pre-trained feature extraction network model, and forming a series feature map based on a series connection mode;
based on the obtained serial feature map and the attention mechanism, re-calibrating the feature channel weight of the serial feature map to obtain a fusion feature map for fusing the effective information of the mpMRI image;
obtaining a candidate regional feature map through a regional suggestion network and a regional feature aggregation network based on the fusion feature map;
and obtaining the segmentation result of the detection target in the candidate region through a pre-trained image segmentation network based on the obtained candidate region characteristic graph.
Further, the feature channel weights of the series feature map are recalibrated based on the obtained series feature map and the attention mechanism, specifically:
the attention mechanism adopts an SE (Squeeze-and-Excitation, a channel attention mechanism) module, and correlation modeling is performed on the different feature channels of each series feature map through the SE module to calibrate the weights of the different feature channels; the spatial information of each feature channel is compressed into a channel descriptor by a global pooling layer; the interdependency among the feature channels is modeled with a fully connected layer and a ReLU layer, and the channel weights are obtained through a sigmoid activation function; the obtained weights are then multiplied onto the feature channels by a scaling operation, recalibrating the channel weights to obtain the fusion feature map.
Further, based on the obtained candidate region feature map, obtaining a segmentation result of the detection target in the candidate region through a pre-trained image segmentation network; the method specifically comprises the following steps:
correcting the candidate region by using a frame regression algorithm for the obtained candidate region feature map; and obtaining the class and the probability of the detection target contained in the candidate region based on the region classification branch network, and obtaining the segmentation result of the detection target in the candidate region through a pre-trained image segmentation network.
Further, the obtaining of the candidate region feature map through the region proposal network and the region feature aggregation network is specifically: a candidate region is obtained through an RPN (Region Proposal Network) based on the fusion feature map, and the candidate region feature map is obtained by the RoIAlign method.
Further, the different sequences include an ADC image and a T2W image;
or, alternatively,
the feature extraction network model adopts two SE-ResNet networks in parallel;
or, alternatively,
the candidate region is corrected based on a frame regression algorithm applied to the obtained candidate region feature map, and the class and probability value of the detection target contained in the candidate region are output.
Specifically, for the convenience of understanding, the scheme of the present embodiment is described in detail below with reference to the accompanying drawings:
in order to solve the problems in the prior art, the present embodiment provides a prostate mpMRI image segmentation method based on region classification, including:
S1: extract three hierarchical features from MRI images of different sequences and form a series feature by concatenation;
S2: model the correlation between the different feature channels of each series feature and automatically calibrate the weights of the different feature channels;
S3: compress the spatial information of the different feature channels into a channel descriptor using a global pooling layer;
S4: promote useful feature channels and suppress irrelevant ones to obtain a fusion feature that fuses the effective mpMRI information;
S5: and (3) using an image cleaning method based on non-maximum values to inhibit candidate regions with low probability of removing the prostate edge, and obtaining a final candidate region and an accurate prostate segmentation result.
Furthermore, the scheme is based on the technical route of the prostate mpMRI image segmentation method of region classification, and the accurate segmentation of the image is realized from the three aspects of mpMRI information fusion, feature extraction and image cleaning, specifically:
mpMRI information fusion step: and (3) extracting 3 hierarchical feature maps of MRI images of different sequences respectively by using 2 SE-Resnet networks in parallel, and forming a serial feature map in a serial mode. And (3) modeling the correlation between different characteristic channels of each series characteristic diagram by using SE-block, and automatically calibrating the weights of the different characteristic channels. In the SE-block, a global pooling layer is used for compressing the spatial information of different characteristic channels into a channel descriptor; modeling the interdependency among different characteristic channels by using 2 full connection layers and 1 ReLU layer, and obtaining the weights of the different characteristic channels through a sigmoid activation function. And multiplying the obtained weight by the characteristic channel through scaling operation, re-calibrating the weight of the characteristic channel, promoting a useful characteristic channel, inhibiting an irrelevant characteristic channel, and obtaining a fusion characteristic diagram fused with mp-MRI effective information.
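The SE-block recalibration in the fusion step above can be sketched in plain Python (no deep learning framework). The channel count, reduced dimension, and random toy weights are illustrative assumptions, not the patent's trained parameters:

```python
# Minimal sketch of squeeze-and-excitation channel recalibration:
# global pooling -> FC + ReLU -> FC + sigmoid -> scale each channel.
import math
import random

def global_avg_pool(feature_map):
    """Squeeze: compress each channel's H x W spatial map to one descriptor."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(feature_map, w1, w2):
    """Excitation and scaling: recalibrate channel weights via FC-ReLU-FC-sigmoid."""
    z = global_avg_pool(feature_map)                 # one descriptor per channel
    hidden = [max(0.0, sum(wi * zi for wi, zi in zip(row, z))) for row in w1]      # FC + ReLU
    weights = [sigmoid(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w2] # FC + sigmoid
    # Scale: multiply every channel by its learned weight (re-calibration).
    return [[[v * w for v in row] for row in ch] for ch, w in zip(feature_map, weights)]

random.seed(0)
C, H, W, R = 4, 2, 2, 2     # channels, height, width, reduced dimension (assumed)
fmap = [[[random.random() for _ in range(W)] for _ in range(H)] for _ in range(C)]
w1 = [[random.uniform(-1, 1) for _ in range(C)] for _ in range(R)]
w2 = [[random.uniform(-1, 1) for _ in range(R)] for _ in range(C)]
out = se_recalibrate(fmap, w1, w2)
```

Because the sigmoid weights lie in (0, 1), useful channels are attenuated less than irrelevant ones, which is the "promote/suppress" behavior the text describes.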
Feature extraction step: the feature map of the input image is extracted by the feature extraction network, and candidate regions on the feature map are proposed by an RPN (Region Proposal Network). A candidate region feature map is obtained through a RoIAlign layer and input into the head network, where a frame (bounding-box) regression algorithm corrects the candidate region; the region classification branch network gives the class and probability that the candidate region contains the detection target; and the segmentation network gives the segmentation result of the detection target in the candidate region.
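The frame (bounding-box) regression correction performed in the head network is not spelled out in the patent; the sketch below uses the common delta parameterization (shift the box center, rescale the size), which is an assumption rather than the patent's exact formula:

```python
# Hedged sketch of bounding-box ("frame") regression: the head network predicts
# deltas (dx, dy, dw, dh) that refine a candidate region. The parameterization
# shown is the widely used convention, assumed here for illustration.
import math

def apply_box_deltas(box, deltas):
    """box = (x1, y1, x2, y2); deltas = (dx, dy, dw, dh) from the head network."""
    x1, y1, x2, y2 = box
    dx, dy, dw, dh = deltas
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    # Shift the center proportionally to box size, rescale width/height,
    # then convert back to corner coordinates.
    cx, cy = cx + dx * w, cy + dy * h
    w, h = w * math.exp(dw), h * math.exp(dh)
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

refined = apply_box_deltas((10.0, 10.0, 30.0, 30.0), (0.1, 0.0, 0.0, 0.0))
```

With dx = 0.1 and zero scale deltas, the 20-pixel-wide box is shifted right by 2 pixels while keeping its size.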
Image cleaning step: a non-maximum-suppression-based image cleaning method removes low-probability candidate regions at the prostate edge, yielding the final candidate regions and an accurate prostate segmentation result.
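The non-maximum-suppression step above can be sketched as follows; the IoU threshold of 0.5 and the toy boxes are illustrative assumptions:

```python
# Sketch of non-maximum suppression: low-scoring candidate regions that heavily
# overlap a higher-scoring region are discarded.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop overlapping lower-scoring ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

Here the second box overlaps the first with IoU about 0.68, so only the first and third candidates survive.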
As shown in fig. 3, the structure diagram of the SE-block in this embodiment is shown; fig. 4 is a schematic diagram of an image segmentation network model structure described in this embodiment.
Example two:
the embodiment aims at providing an intelligent auxiliary diagnosis system for prostate cancer.
As shown in fig. 2, an intelligent auxiliary diagnosis system for prostate cancer is provided, which comprises:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts the prostate image segmentation method;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out lesion probability classification and boundary segmentation in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
Further, the gridding processing is performed on the segmented prostate image, and each grid image is input into a pre-trained tumor judgment network model, specifically: dividing the segmented prostate image into grids with the size of g × g, traversing in the grid image by adopting the step length of g/2, and inputting each traversal result into the tumor judgment network model to obtain the tumor judgment result of each grid image.
Further, the tumor judgment model adopts a LocNet network model;
or
And based on the combined image, adopting a deep learning model to extract high-order features, and specifically adopting a CNN network model.
Further, for the training process of the deep learning model, network parameters and structures of the deep learning model are optimized based on an evolutionary algorithm.
Specifically, for the convenience of understanding, the scheme of the present embodiment is described in detail below with reference to the accompanying drawings:
in order to solve the problems in the prior art, the present embodiment provides a prostate cancer identification method based on a lightweight convolutional neural network, including:
S1: traverse the grid image with a stride of g/2 and input each traversal result into LocNet;
S2: perform lesion probability classification and boundary segmentation;
S3: perform image omics feature extraction on the mpMRI image and generate a new feature set through feature selection;
S4: determine the attribution of pixel points, complete the classification and probability boundary segmentation of tissues with fuzzy boundaries, and draw the probability map according to the segmented boundaries.
Further, CNN-based methods offer higher computational efficiency, while ensemble-learning-based methods offer better fuzzy classification performance. Therefore, by fusing CNN and ensemble learning, prostate cancer identification in mpMRI medical images is realized from the two aspects of image identification and feature fusion, based on a lightweight convolutional neural network, specifically:
First, to increase tumor detection and localization speed, the prostate mpMRI image obtained by image segmentation is gridded: the image is divided into grids of size g × g, which are traversed with a stride of g/2. Each traversal result is input into LocNet, and the set of sub-images input to LocNet is expressed as
P_i = {P_(i,1), P_(i,2), …, P_(i,n)}, i ∈ {1, 2, …, N}
where each sub-image has size g × g and N is the number of sub-image sets input to LocNet.
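The g × g gridding with stride g/2 can be sketched as follows; the 64 × 64 image size and g = 16 are illustrative assumptions:

```python
# Sketch of the gridding traversal: enumerate the top-left corners of the
# g x g windows that would be cropped and fed to LocNet.
def grid_windows(height, width, g):
    """Return (row, col) top-left corners of g x g windows at stride g // 2."""
    step = g // 2
    coords = []
    for r in range(0, height - g + 1, step):
        for c in range(0, width - g + 1, step):
            coords.append((r, c))
    return coords

windows = grid_windows(64, 64, 16)   # 64x64 image, g = 16 -> stride 8
```

The half-grid stride makes adjacent windows overlap by 50 percent, so tumor tissue that straddles a grid boundary still falls fully inside some window.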
Secondly, each P_i is input into LocNet, which computes and outputs the class Cl to which it belongs: Cl = 1 indicates the presence of prostate tumor tissue, and Cl = 0 otherwise. During detection, a matrix Mask with the same grid structure as the mpMRI image records the marking information of each P_i, expressed as Mask(i) = loc(P_i).
Here, loc(P_i) denotes the result of classifying P_i with LocNet. On the basis of the Mask record, the grid regions marked 1 are merged, and lesion probability classification and boundary segmentation are performed within the merged region. The probability classification and boundary segmentation adopt the following scheme: a candidate region feature map is obtained through the region proposal network and the region feature aggregation network, and the segmentation result of the detection target in the candidate region is obtained through the pre-trained image segmentation network; the feature channel weights of the series feature map are recalibrated based on the series feature map and the attention mechanism; the candidate region is corrected with a frame regression algorithm; and the class and probability of the detection target contained in the candidate region are obtained from the region classification branch network. The specific details of these steps are described in Embodiment One and are not repeated here.
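Merging the grid cells marked 1 in the Mask into connected tumor candidate regions can be sketched as a 4-connected flood fill; the connectivity rule and the toy Mask are illustrative assumptions:

```python
# Sketch of merging Mask cells flagged Cl = 1 into connected regions
# (4-connected flood fill over the binary grid).
def merge_regions(mask):
    """Group adjacent 1-cells of a binary grid into connected regions."""
    rows, cols = len(mask), len(mask[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and (r, c) not in seen:
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols and \
                           mask[ny][nx] == 1 and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(sorted(region))
    return regions

mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 0, 1]]
regions = merge_regions(mask)
```

The two adjacent 1-cells in the first row merge into one region, while the isolated cell forms its own region.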
Finally, radiomics feature extraction is performed on the mpMRI image, with the extracted features denoted f1, f2, …. The image is also passed through a CNN feature extractor to complete high-order feature extraction from the MR image, with the features denoted FCNN. The radiomics features and CNN features together form a feature set F, on which L1-regularized Lasso feature selection and feature processing are performed to generate a new feature set F'. For boundary regions where tissues of different probabilities blend into one another, the attribution of each pixel is determined, completing the classification and probabilistic boundary segmentation of tissues with fuzzy boundaries; a probability map is then drawn from the segmented boundaries.
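The L1-regularized Lasso selection over F can be illustrated with a minimal coordinate-descent implementation on synthetic data; the solver, the data, and the regularization strength are illustrative stand-ins, not the disclosure's actual pipeline:

```python
import numpy as np

def lasso_select(X, y, alpha=0.1, n_iter=200):
    """Minimal coordinate-descent Lasso, used only to illustrate
    L1-regularized feature selection; a production pipeline would use a
    library solver (e.g. scikit-learn's Lasso) on the combined
    radiomics + CNN feature set F."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # residual excluding feature j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z  # soft-threshold
    return np.flatnonzero(w)                        # indices kept in F'

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                      # stand-in for feature set F
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)
selected = lasso_select(X, y)
print(selected)  # the informative features (indices 0 and 1) survive the L1 penalty
```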
Further, since this embodiment uses a deep learning model, the parameters and structure of that model must be optimized. This embodiment adopts the following scheme:
s1: the learning rate and the network structure are encoded by an ECNN (evolution relational network) algorithm to create an initial chromosome cluster, and the network structure information includes information about a 2D Convolutional layer complete connection layer, a ResNet module, and an activation function, the number of filters, the size of the filters, the step size, the loss rate and the like in each layer.
S2: for the encoded initial chromosome, mutation operation is realized by randomly changing the learning rate of the chromosome or the structure of a neural network. When the variation operation is performed on the learning rate, the learning rate is randomly generated within a certain range; when mutation operation is executed on the network structure, the method is realized by adding, deleting and replacing the layer or ResNet module and randomly selecting the layer attribute information. The network layer is randomly selected from a 2D convolutional layer, a full-connection layer and a pooling layer, and layer attribute information such as an activation function, the number of filters, the size of the filters, the step length, the loss rate and the like in the layer is generated through random selection. Since the randomness of the mutation operations may lead to the generation of invalid neural network structures, fitness evaluation is performed after each mutation operation is applied to ensure the validity of the resulting network structure. And if the invalid structure with low fitness is generated through mutation, discarding the individual with the lowest fitness and executing mutation operation again, and selecting the effective structure with the highest fitness in the third generation as the optimal network model.
S3: and evaluating the value of good or poor environment adaptability of the chromosome through fitness function calculation. The selection operation is typically based on probability, where individuals with higher fitness values are selected with higher probability, thereby ensuring that the overall quality of the offspring population is improved generation by generation. The selection of the fitness function directly influences the convergence speed of finding the optimal network model, so that the accuracy of the ECNN is adopted as the fitness function to ensure the accuracy in selection.
Further, fig. 5 is a schematic structural diagram of the LocNet model described in this embodiment; fig. 6 is a schematic diagram of the ECNN network structure in this embodiment; fig. 7(a) is a schematic structural diagram of a ResNet module in this embodiment; and fig. 7(b) is a schematic flow chart of the evolutionary algorithm in this embodiment.
Example three:
This embodiment aims to provide an electronic device.
An electronic device comprising a memory, a processor, and a computer program stored on the memory for execution, wherein the processor, when executing the program, implements the following steps:
segmenting a prostate image to be diagnosed, wherein the segmentation method adopts the above prostate image segmentation method;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out lesion probability classification and boundary segmentation in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
Example four:
It is an object of the present embodiment to provide a non-transitory computer-readable storage medium.
A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts the prostate image segmentation method;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out probability classification and boundary segmentation on the focus in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
In further embodiments, there is also provided:
an electronic device comprising a memory, a processor, and computer instructions stored on the memory and run on the processor, wherein the computer instructions, when executed by the processor, perform the method of embodiment one. For brevity, no further description is provided herein.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), or it may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method of embodiment one.
The method in the first embodiment may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The prostate image segmentation method and intelligent prostate cancer auxiliary diagnosis system provided by the embodiments above can be implemented as described and have broad application prospects.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Claims (10)
1. A prostate image segmentation method, comprising:
acquiring a prostate mpMRI image sequence to be segmented;
respectively extracting a plurality of different-level feature maps of MRI images of different sequences based on a pre-trained feature extraction network model, and forming a series feature map based on a series connection mode;
based on the obtained series feature map and the attention mechanism, re-calibrating the feature channel weights of the series feature map to obtain a fusion feature map that fuses the effective information of the mpMRI image;
obtaining a candidate region feature map through a region proposal network and a region feature aggregation network based on the fusion feature map;
and obtaining the segmentation result of the detection target in the candidate region through a pre-trained image segmentation network based on the obtained candidate region characteristic graph.
2. The prostate image segmentation method according to claim 1, wherein the feature channel weights of the series feature map are recalibrated based on the obtained series feature map and the attention mechanism, and specifically:
the attention mechanism adopts an SE module, and different characteristic channels of each series characteristic diagram are subjected to correlation modeling through the SE module respectively, so as to calibrate the weights of the different characteristic channels; based on the global pooling layer, compressing the spatial information of different characteristic channels into a channel descriptor; modeling the interdependency among different characteristic channels by utilizing a full connection layer and a ReLU layer, and obtaining the weights of the different characteristic channels through a sigmoid activation function; and multiplying the obtained weight by the characteristic channel through scaling operation, and re-calibrating the weight of the characteristic channel to obtain a fusion characteristic diagram.
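Outside the claim language, the squeeze-and-excitation recalibration recited in claim 2 (global pooling, two fully connected layers with ReLU and sigmoid, then channel-wise scaling) can be sketched in NumPy; the weight matrices w1 and w2 stand in for learned parameters, and the bottleneck ratio r is illustrative:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """NumPy sketch of SE recalibration on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) play the roles of the two fully
    connected layers; in a trained network they would be learned."""
    z = feature_map.mean(axis=(1, 2))           # global average pool -> channel descriptor (C,)
    s = np.maximum(w1 @ z, 0.0)                 # FC + ReLU: model channel interdependence
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # FC + sigmoid: per-channel weights in (0, 1)
    return feature_map * s[:, None, None]       # scale: re-calibrate each channel

rng = np.random.default_rng(0)
C, r = 8, 2
fmap = rng.normal(size=(C, 16, 16))
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out = squeeze_excite(fmap, w1, w2)
print(out.shape)  # (8, 16, 16)
```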
3. The method for segmenting the prostate image according to claim 1, wherein the segmentation result of the detection target in the candidate region is obtained through a pre-trained image segmentation network based on the obtained feature map of the candidate region; the method comprises the following specific steps:
correcting the candidate region by using a bounding-box regression algorithm on the obtained candidate region feature map; and obtaining the class and the probability of the detection target contained in the candidate region based on the region classification branch network, and obtaining the segmentation result of the detection target in the candidate region through a pre-trained image segmentation network.
4. The prostate image segmentation method according to claim 1, wherein the candidate region feature map is obtained through a region proposal network and a region feature aggregation network, specifically: based on the fusion feature map, a candidate region is obtained through an RPN (region proposal network), and a candidate region feature map is obtained based on the RoIAlign method.
5. The prostate image segmentation method of claim 1, wherein the different sequences include an ADC image and a T2W image;
or,
the feature extraction network model adopts two SE-Resnet networks in parallel;
or,
and correcting the candidate region based on a bounding-box regression algorithm for the obtained candidate region feature map, and outputting the detection target class and probability value contained in the candidate region.
6. An intelligent diagnosis assisting system for prostate cancer, comprising:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts a prostate image segmentation method according to any one of claims 1 to 5;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out lesion probability classification and boundary segmentation in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
7. The system according to claim 6, wherein the segmented prostate images are gridded, and each gridded image is input into a pre-trained tumor determination network model, specifically: dividing the segmented prostate image into grids with the size of g × g, traversing in the grid image by adopting the step length of g/2, and inputting each traversal result into the tumor judgment network model to obtain the tumor judgment result of each grid image.
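As an illustrative sketch of the traversal recited in claim 7 (g × g windows at a step of g/2), the hypothetical helper below enumerates the overlapping windows that would each be fed to the tumor determination network:

```python
import numpy as np

def sliding_patches(image, g):
    """Traverse `image` with g-by-g windows at stride g/2, as in claim 7;
    each returned window would be input to the tumor determination model."""
    step = g // 2
    patches = []
    for r in range(0, image.shape[0] - g + 1, step):
        for c in range(0, image.shape[1] - g + 1, step):
            patches.append(image[r:r + g, c:c + g])
    return patches

image = np.zeros((8, 8))
patches = sliding_patches(image, 4)
print(len(patches))  # 9 overlapping 4x4 windows: 3 positions per axis
```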
8. The intelligent auxiliary diagnostic system for prostate cancer according to claim 6, wherein said tumor determination model employs a LocNet network model;
or
And based on the combined image, adopting a deep learning model to extract high-order features, and specifically adopting a CNN network model.
Or
And optimizing the network parameters and the structure of the deep learning model based on an evolutionary algorithm in the training process of the deep learning model.
9. An electronic device comprising a memory, a processor, and a computer program stored for execution on the memory, wherein the processor when executing the program performs the steps of:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts a prostate image segmentation method according to any one of claims 1 to 5;
gridding the prostate images obtained by segmentation, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with tumor tissues to obtain merged images; carrying out lesion probability classification and boundary segmentation in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
10. A non-transitory computer readable storage medium having a computer program stored thereon, wherein the program when executed by a processor implements the steps of:
segmenting the prostate image to be diagnosed, wherein the segmentation method adopts a prostate image segmentation method according to any one of claims 1 to 5;
performing gridding processing on the segmented prostate images, and inputting each grid image into a pre-trained tumor judgment network model to obtain a tumor judgment result of each grid image;
merging the grid image areas with the tumor tissues to obtain merged images; carrying out lesion probability classification and boundary segmentation in the region;
based on the combined image, a deep learning model is adopted to extract high-order features; meanwhile, performing image omics feature extraction on the prostate image to be diagnosed;
and performing secondary boundary segmentation on boundary regions where tissues with different probabilities are mutually fused based on the extracted high-order features and the image omics features to obtain a lesion probability distribution result of the prostate image to be diagnosed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210450439.9A CN114820520A (en) | 2022-04-24 | 2022-04-24 | Prostate image segmentation method and intelligent prostate cancer auxiliary diagnosis system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210450439.9A CN114820520A (en) | 2022-04-24 | 2022-04-24 | Prostate image segmentation method and intelligent prostate cancer auxiliary diagnosis system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114820520A true CN114820520A (en) | 2022-07-29 |
Family
ID=82506998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210450439.9A Pending CN114820520A (en) | 2022-04-24 | 2022-04-24 | Prostate image segmentation method and intelligent prostate cancer auxiliary diagnosis system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114820520A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115274099A (en) * | 2022-09-26 | 2022-11-01 | 之江实验室 | Human-intelligent interactive computer-aided diagnosis system and method |
CN115619810A (en) * | 2022-12-19 | 2023-01-17 | 中国医学科学院北京协和医院 | Prostate partition method, system and equipment |
CN116228786A (en) * | 2023-05-10 | 2023-06-06 | 青岛市中心医院 | Prostate MRI image enhancement segmentation method, device, electronic equipment and storage medium |
CN117636076A (en) * | 2024-01-25 | 2024-03-01 | 北京航空航天大学 | Prostate MRI image classification method based on deep learning image model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||