CN113850816A - Cervical cancer MRI image segmentation device and method - Google Patents

Cervical cancer MRI image segmentation device and method

Info

Publication number
CN113850816A
Authority
CN
China
Prior art keywords
image
mri image
module
mri
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010601807.6A
Other languages
Chinese (zh)
Inventor
赵丽娜
张晓鹏
黄陆光
张莹
杨华
刘波
缑水平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Air Force Medical University of PLA
Original Assignee
Xidian University
Air Force Medical University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University, Air Force Medical University of PLA filed Critical Xidian University
Priority to CN202010601807.6A
Publication of CN113850816A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The disclosure provides a cervical cancer MRI image segmentation method and device, relates to the technical field of electronic information, and addresses the poor performance of existing methods when segmenting images containing cervical cancer lesions. The technical scheme is as follows: when an MRI image including a cervical cancer lesion area is obtained, the MRI image is subjected to labeling, bias field correction, and normalization; the processed MRI image is input into a multi-view feature fused MRI image segmentation network model, which segments the MRI image by extracting both inter-layer and intra-layer image features, thereby obtaining the image area containing the cervical cancer lesion in the MRI image. The present disclosure is directed to image segmentation processing.

Description

Cervical cancer MRI image segmentation device and method
Technical Field
The disclosure relates to the technical field of electronic information, in particular to a segmentation device and a segmentation method for an MRI (magnetic resonance imaging) image of cervical cancer.
Background
Cervical cancer is one of the most common malignancies of the female reproductive system, with rising incidence and a trend toward younger patients. Magnetic resonance imaging offers high soft-tissue resolution, exposes patients to no ionizing radiation, can image in different orientations and with different sequences, and clearly displays the cervix, uterine body, vagina, and surrounding tissue structures; it is therefore commonly used in the clinical examination of cervical cancer. In actual clinical treatment, an imaging physician must manually and accurately delineate the region where the cervical cancer is located in an MRI imaging system and then formulate a detailed radiotherapy and chemotherapy plan according to the size and position of the delineated tumor. Because magnetic resonance examinations produce many sequences, manual segmentation is extremely time-consuming, and the delineation results of different physicians are inconsistent.
Prior-art 3-dimensional medical image segmentation models adopt convolution layers with 3 × 3 × 3 kernels as feature extractors. In cervical cancer MRI images, however, the axial (in-plane) scanning resolution is high while the inter-layer scanning resolution is low, which weakens the correlation of inter-layer information. If feature extraction is performed directly with a 3 × 3 × 3 convolution layer, inter-layer information is taken into account even though it is weakly correlated, so the segmentation effect on cervical cancer images is poor.
Disclosure of Invention
The embodiment of the disclosure provides a cervical cancer MRI image segmentation device and method, which can solve the problem of poor effect when segmentation processing is carried out on a cervical cancer image. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an MRI image segmentation method for cervical cancer. The method is based on a multi-view feature fused MRI image segmentation network model, which includes: a multi-view feature fusion module for extracting features of the input image from different views; and a channel attention module for performing adaptive weighted fusion on the extracted features and accurately segmenting the MRI image according to the adaptively fused features. The method specifically comprises the following steps:
step 1: the image labeling module labels the MRI image, and specifically comprises:
the image labeling module delineates a lesion area from the MRI image frame by frame, stores the delineated image, and takes the delineated image as a gold standard corresponding to each MRI image;
step 2: the bias field correction module performs bias field correction processing on the MRI image, and specifically comprises the following steps:
the bias field correction module extracts a bias field from each MRI image to correct the image;
and step 3: the image resampling module is used for resampling the MRI image, and specifically comprises the following steps:
the image resampling module uses the SimpleITK toolkit to resample the MRI images corresponding to different patients so that the resolutions of the MRI images are consistent;
and 4, step 4: the image normalization module normalizes the MRI image, and specifically includes:
the image normalization module normalizes the MRI image after resampling processing by using a normalization formula and maps the pixel value of the MRI image to the interval [0, 1];
and 5: the image cropping module crops the MRI image to obtain an image block, and specifically comprises:
the image cropping module randomly crops the MRI image of each patient to obtain a preset number of 3D image blocks of the same size; the same cropping operation is performed at the same positions in the label image corresponding to each patient's MRI image to obtain the label corresponding to each image block;
step 6: the generation sample module generates an MRI image training sample set;
randomly selecting a preset proportion of images from all labeled image data as the training set, taking the remaining images as the test set, and randomly selecting a certain proportion of images from the test set as the validation set used to select the final model;
and 7: a network model building module builds an MRI image segmentation network model with multi-view feature fusion; the MRI image segmentation network model with the multi-view feature fusion integrally follows an encoder-decoder structure;
and 8: the training module trains an MRI image segmentation network model with multi-view feature fusion;
and step 9: the image segmentation module segments the MRI image through an MRI image segmentation network model fused with multi-view features, and the image segmentation module specifically comprises:
the image segmentation module predicts the MRI image block by block in a sliding window mode, and splices and reconstructs the prediction result block by block to obtain the segmentation result of the lesion area in the MRI image;
step 10: and the image display module displays the image of the segmented lesion area.
In one embodiment, step 8 of the method: the training module trains the MRI image segmentation network model, and specifically comprises:
step 8 a: selecting 8 data pairs from an image training sample set to form a training batch;
and step 8 b: carrying out forward propagation on data of a training batch through the MRI image segmentation network model to obtain a prediction result of the model;
and step 8 c: calculating a Dice coefficient between each segmented lesion area image in the prediction result and the corresponding gold standard;
and step 8 d: judging whether the Dice loss on the validation set has failed to decrease for 5 epochs; if it is still decreasing, continuing to step 8 e; otherwise, stopping network training and proceeding to step 9;
step 8 e: and updating the weight parameters of each layer in the multi-view feature fused MRI image segmentation network model by using an Adam algorithm.
In one embodiment, the normalization formula in step 4 of the method is as follows:
$$\hat{X}(i,j,k) = \frac{X(i,j,k) - \min(X)}{\max(X) - \min(X)}$$

where $\hat{X}(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the normalized MRI image, $X(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the MRI image, $X$ denotes the matrix of the MRI image, $\min(X)$ denotes the minimum value over all pixel points in the MRI image, and $\max(X)$ denotes the maximum value over all pixel points in the MRI image.
In one embodiment, the clipping operation in step 5 of the method is performed as follows:
step 5a, randomly scattering 2500 seed points in the internal area of the whole MRI image;
step 5b, cropping out image blocks of size 128 × 128 × 8 centered on the seed points;
step 5c, processing the label of each image in the same way as the step 5a and the step 5 b;
step 5d, checking block by block the number of pixel points containing lesion features in the image blocks cropped from the labels; if the number is greater than 5, keeping the image block; otherwise, deleting the label block and the MRI image block corresponding to it.
In one embodiment, the multi-view feature fusion module is specifically configured to:
processing the image input into the multi-view feature fusion module with a convolution layer having a 1x1x1 kernel to obtain two data streams;
extracting three-dimensional features from the first data stream with a convolution layer having a 3x3x3 kernel to obtain a feature map;
passing the second data stream through a convolution layer having a 3x3x1 kernel, which performs a 2-dimensional convolution on the image block layer by layer, to obtain three new data streams: branch 1, branch 2, and branch 3; applying convolutions with 3x1x3 and 1x3x3 kernels to branch 1 and branch 3 respectively to obtain the corresponding feature maps;
and merging the feature map of the first data stream with the feature maps of branch 1, branch 2, and branch 3 along the channel dimension.
In one embodiment, the channel attention module is specifically configured to:
performing global max pooling on the input feature map along the channel direction, compressing the features through a fully connected layer, and using a ReLU activation function to enhance nonlinear fitting capability;
decompressing the features through a fully connected layer so that the feature dimension matches the number of channels of the input feature map, applying the ReLU activation again, and then using a Sigmoid activation function to limit the feature values to the interval [0, 1], where each feature value represents the importance of the corresponding channel;
multiplying each channel of the input features in turn by its feature value as a weight;
and adding the processed feature map to the input feature map as the output of the multi-view feature fusion module.
According to a second aspect of the embodiments of the present disclosure, there is provided an MRI image segmentation apparatus for cervical cancer. The apparatus is based on a multi-view feature fused MRI image segmentation network model, which includes: a multi-view feature fusion module for extracting features of the input image blocks from different views; and a channel attention module for performing adaptive weighted fusion on the extracted features and accurately segmenting the MRI image according to the adaptively fused features. The apparatus comprises an image labeling module, a bias field correction module, an image resampling module, an image normalization module, an image cropping module, an image segmentation module, and an image display module; wherein:
the image labeling module is used for delineating a lesion area in the MRI image and generating a mask image corresponding to the lesion area;
the bias field correction module is used for extracting a bias field from each MRI image to correct the image;
the image resampling module is used for resampling the MRI images corresponding to different patients to ensure that the resolutions of the MRI images are consistent;
the image normalization module is used for normalizing the MRI image after resampling processing by using a normalization formula and mapping the pixel value of the MRI image to the interval [0, 1];
the image cropping module is used for cropping the MRI image to obtain image blocks;
the image segmentation module is used for segmenting a lesion region in the MRI image through the multi-view feature fused MRI image segmentation network model;
and the image display module is used for displaying the image of the segmented lesion area.
In one embodiment, the normalization formula is as follows:
$$\hat{X}(i,j,k) = \frac{X(i,j,k) - \min(X)}{\max(X) - \min(X)}$$

where $\hat{X}(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the normalized MRI image, $X(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the MRI image, $X$ denotes the matrix of the MRI image, $\min(X)$ denotes the minimum value over all pixel points in the MRI image, and $\max(X)$ denotes the maximum value over all pixel points in the MRI image.
In one embodiment, the multi-view feature fusion module in the multi-view feature fused MRI image segmentation network model is specifically configured to:
processing the image input into the multi-view feature fusion module with a convolution layer having a 1x1x1 kernel to obtain two data streams;
extracting three-dimensional features from the first data stream with a convolution layer having a 3x3x3 kernel to obtain a feature map;
passing the second data stream through a convolution layer having a 3x3x1 kernel, which performs a 2-dimensional convolution on the image block layer by layer, to obtain three new data streams: branch 1, branch 2, and branch 3; applying convolutions with 3x1x3 and 1x3x3 kernels to branch 1 and branch 3 respectively to obtain the corresponding feature maps;
and merging the feature map of the first data stream with the feature maps of branch 1, branch 2, and branch 3 along the channel dimension.
In one embodiment, the channel attention module in the multi-view feature fused MRI image segmentation network model is specifically configured to:
performing global max pooling on the input feature map along the channel direction, compressing the features through a fully connected layer, and using a ReLU activation function to enhance nonlinear fitting capability;
decompressing the features through a fully connected layer so that the feature dimension matches the number of channels of the input feature map, applying the ReLU activation again, and then using a Sigmoid activation function to limit the feature values to the interval [0, 1], where each feature value represents the importance of the corresponding channel;
multiplying each channel of the input features in turn by its feature value as a weight;
and adding the processed feature map to the input feature map as the output of the multi-view feature fusion module.
When an MRI image including a cervical cancer lesion area is acquired, the MRI image is subjected to labeling, bias field correction, and normalization, and the processed MRI image is input into the multi-view feature fused MRI image segmentation network model, which segments the MRI image by extracting both inter-layer and intra-layer image features, thereby obtaining the image area containing the cervical cancer lesion in the MRI image. The multi-view feature fused MRI image segmentation network model of the present disclosure extracts features of the input image separately from the axial, coronal, and sagittal views, and performs adaptive weighted fusion on the features extracted from different views so as to extract the features that carry large weight in the segmentation result, thereby improving segmentation accuracy. Meanwhile, by using separable convolutions in its implementation, the multi-view feature fused MRI image segmentation network model reduces the number of model parameters while preserving segmentation quality, so that inference is faster in the application stage, the computational cost of the image segmentation method is reduced, and the applicability of the method is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an MRI image segmentation method for cervical cancer according to an embodiment of the present disclosure;
fig. 2 is a schematic logical layer structure diagram of an MRI image segmentation method for cervical cancer according to an embodiment of the present disclosure;
fig. 3a is an initial diagram of an MRI image segmentation method for cervical cancer according to an embodiment of the present disclosure;
fig. 3b is a target diagram of an MRI image segmentation method for cervical cancer according to an embodiment of the present disclosure;
FIG. 4 is a distance comparison table of an MRI image segmentation method for cervical cancer according to an embodiment of the present disclosure;
fig. 5 is a structural diagram of an MRI image segmentation device for cervical cancer provided by an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of systems and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Example one
The embodiment of the present disclosure provides a cervical cancer MRI image segmentation method. As shown in figs. 1 and 2, the method is based on a multi-view feature fused MRI image segmentation network model, which includes: a multi-view feature fusion module for extracting features of the input image from different views; and a channel attention module for performing adaptive weighted fusion on the extracted features and accurately segmenting the MRI image according to the adaptively fused features. The method specifically comprises the following steps:
101. the image labeling module labels the MRI image.
The method for labeling the MRI image by the image labeling module specifically comprises the following steps: the image labeling module delineates a lesion area frame by frame from the MRI images and stores the delineated images, wherein the delineated images serve as gold standards corresponding to each MRI image.
In particular, the lesion region may comprise a region of a human cervical organ and the MRI image may comprise a 3D magnetic resonance image. The delineation may be a frame-by-frame delineation of the cervical cancer region from the 3D magnetic resonance image.
102. The bias field correction module performs bias field correction processing on the MRI image.
in the method provided by the present disclosure, the bias field correction processing is performed on the MRI image through the bias field correction module, which specifically includes:
the bias field correction module extracts a bias field from each MRI image to correct the image. For example, the bias field correction may be by an N4 bias field correction technique.
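For illustration, a minimal sketch of N4 bias field correction with the SimpleITK toolkit (the same toolkit used for resampling below) is given here; the file names and the Otsu foreground mask are illustrative assumptions rather than settings fixed by this disclosure:

    import SimpleITK as sitk

    # Read the MRI volume as floating point (N4 operates on real-valued images).
    image = sitk.ReadImage("patient_mri.nii.gz", sitk.sitkFloat32)
    # A rough Otsu threshold restricts the correction to foreground voxels (assumed choice).
    mask = sitk.OtsuThreshold(image, 0, 1, 200)
    # Estimate the bias field and return the corrected volume.
    corrected = sitk.N4BiasFieldCorrection(image, mask)
    sitk.WriteImage(corrected, "patient_mri_n4.nii.gz")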
103. And the image resampling module is used for resampling the MRI image.
In the method provided by the disclosure, the MRI image can be resampled to a preset resolution using a preset toolkit so that the resolutions are consistent; the preset toolkit includes the SimpleITK tool, as sketched below.
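A minimal sketch of such resampling with SimpleITK follows; the target spacing of 0.5 × 0.5 × 6 mirrors the acquisition parameters reported later in this disclosure and is otherwise an assumption:

    import SimpleITK as sitk

    def resample_to_spacing(image, new_spacing=(0.5, 0.5, 6.0)):
        """Resample an MRI volume to a common voxel spacing with linear interpolation."""
        old_spacing, old_size = image.GetSpacing(), image.GetSize()
        # Keep the physical extent fixed while changing the voxel grid.
        new_size = [int(round(sz * sp / nsp))
                    for sz, sp, nsp in zip(old_size, old_spacing, new_spacing)]
        return sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                             image.GetOrigin(), new_spacing, image.GetDirection(),
                             0.0, image.GetPixelID())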
104. The image normalization module normalizes the MRI image.
Specifically, normalizing the MRI image includes:
normalizing the MRI image according to a normalization formula that maps the pixel values of the MRI image to the target interval [0, 1].
Specifically, the normalization formula includes:
$$\hat{X}(i,j,k) = \frac{X(i,j,k) - \min(X)}{\max(X) - \min(X)}$$

where $\hat{X}(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the normalized MRI image, $X(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the MRI image, $X$ denotes the matrix of the MRI image, $\min(X)$ denotes the minimum value over all pixel points in the MRI image, and $\max(X)$ denotes the maximum value over all pixel points in the MRI image.
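A direct NumPy sketch of this min-max normalization is given below; the small epsilon guarding against constant-valued volumes is an added assumption:

    import numpy as np

    def min_max_normalize(volume: np.ndarray) -> np.ndarray:
        """Map all voxel intensities of an MRI volume to the interval [0, 1]."""
        vmin, vmax = volume.min(), volume.max()
        return (volume - vmin) / (vmax - vmin + 1e-8)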
105. And the image cropping module crops the MRI image to obtain an image block.
In the method, the image cropping module randomly crops each patient's MRI image to obtain a preset number of 3D image blocks of the same size; the same cropping operation is performed at the same positions in the label image corresponding to each patient's MRI image to obtain the label corresponding to each image block;
the method provided by the present disclosure performs a cropping process on the MRI image to obtain an image block of a preset size corresponding to the MRI image, and includes:
randomly scattering 2500 seed points in the interior region of the MRI image;
cropping out image blocks of size 128 × 128 × 8 centered on the seed points;
cropping the label of each image according to the same image-block cropping rule;
checking, block by block, the number of cervical cancer pixel points in the image blocks cropped from the labels;
if the number of cervical cancer pixel points in an image block is greater than 5, the image block is retained;
if the number of cervical cancer pixel points in an image block is 5 or fewer, the image block is deleted (a code sketch follows below).
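The cropping and filtering just described can be sketched as follows; the (z, y, x) axis order and the uniform sampling of seed positions are assumptions, since the disclosure does not fix those details:

    import numpy as np

    def sample_patches(image, label, n_seeds=2500, patch=(8, 128, 128), min_fg=5):
        """Crop random 3D blocks around seed points and keep those whose label
        block contains more than min_fg lesion voxels."""
        pz, py, px = patch
        blocks, labels = [], []
        for _ in range(n_seeds):
            # Draw a seed far enough from the border that a full block fits.
            z = np.random.randint(pz // 2, image.shape[0] - pz // 2 + 1)
            y = np.random.randint(py // 2, image.shape[1] - py // 2 + 1)
            x = np.random.randint(px // 2, image.shape[2] - px // 2 + 1)
            sl = (slice(z - pz // 2, z + pz // 2),
                  slice(y - py // 2, y + py // 2),
                  slice(x - px // 2, x + px // 2))
            if np.count_nonzero(label[sl]) > min_fg:  # keep only blocks with >5 lesion voxels
                blocks.append(image[sl])
                labels.append(label[sl])
        return blocks, labels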
106. The generate samples module generates a training sample set of MRI images.
In the method provided by the disclosure, from all labeled image data, a preset proportion of images is randomly selected as the training set, the remaining images serve as the test set, and a certain proportion of images is randomly selected from the test set as the validation set used to select the final model. For example, of all labeled MRI images, 80% (the preset proportion) may be randomly selected as the training set and the remaining 20% as the test set.
107. And the network model building module builds a multi-view feature fused MRI image segmentation network model.
The multi-view feature fused MRI image segmentation network model provided by the present disclosure generally follows an encoder-decoder structure. In the method, a multi-view feature fused MRI image segmentation network model is constructed through a multi-view feature fusion module and a channel attention module.
The multi-view feature fusion module in the present disclosure is specifically configured to:
processing the image input into the multi-view feature fusion module with a convolution layer having a 1x1x1 kernel to obtain two data streams;
extracting three-dimensional features from the first data stream with a convolution layer having a 3x3x3 kernel to obtain a feature map;
passing the second data stream through a convolution layer having a 3x3x1 kernel, which performs a 2-dimensional convolution on the image block layer by layer, to obtain three new data streams: branch 1, branch 2, and branch 3; applying convolutions with 3x1x3 and 1x3x3 kernels to branch 1 and branch 3 respectively to obtain the corresponding feature maps;
and merging the feature map of the first data stream with the feature maps of branch 1, branch 2, and branch 3 along the channel dimension. A code sketch of this block follows.
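To make the data flow concrete, a PyTorch sketch of the block is given below; the channel widths and the mapping of the kernel notation (3x3x1, 3x1x3, 1x3x3) onto Conv3d's (depth, height, width) axis order are assumptions, with tensors assumed laid out as (N, C, z, y, x):

    import torch
    import torch.nn as nn

    class MultiViewFusion(nn.Module):
        def __init__(self, in_ch, mid_ch):
            super().__init__()
            self.reduce = nn.Conv3d(in_ch, mid_ch, kernel_size=1)                   # 1x1x1 stem
            self.full3d = nn.Conv3d(mid_ch, mid_ch, 3, padding=1)                   # 3x3x3, stream 1
            self.inplane = nn.Conv3d(mid_ch, mid_ch, (1, 3, 3), padding=(0, 1, 1))  # "3x3x1": slice-wise conv
            self.view1 = nn.Conv3d(mid_ch, mid_ch, (3, 1, 3), padding=(1, 0, 1))    # "3x1x3" view
            self.view3 = nn.Conv3d(mid_ch, mid_ch, (3, 3, 1), padding=(1, 1, 0))    # "1x3x3" view

        def forward(self, x):
            x = self.reduce(x)                      # shared 1x1x1 projection feeding both streams
            f3d = self.full3d(x)                    # stream 1: full 3-D features
            f2d = self.inplane(x)                   # stream 2: layer-by-layer in-plane features
            b1, b2, b3 = self.view1(f2d), f2d, self.view3(f2d)  # branch 2 passes through (assumption)
            return torch.cat([f3d, b1, b2, b3], dim=1)          # merge along the channel dimension

The concatenated output has 4 × mid_ch channels, which is what the channel attention module described next reweights.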
The multi-view feature fused MRI image segmentation network model is constructed around the characteristics of cervical cancer images, which have high axial resolution but low coronal and sagittal resolution; the multiple feature paths in the model are extracted from different dimensions and different views, so the features of each channel contribute differently to the improvement of the segmentation result.
To adaptively model the importance of each channel's features to the segmentation result, a channel attention module is added after the feature merging.
The channel attention module in this disclosure is particularly useful for:
performing global max pooling on the input feature map along the channel direction, compressing the features through a fully connected layer, and using a ReLU activation function to enhance nonlinear fitting capability;
decompressing the features through a fully connected layer so that the feature dimension matches the number of channels of the input feature map, applying the ReLU activation again, and then using a Sigmoid activation function to limit the feature values to the interval [0, 1], where each feature value represents the importance of the corresponding channel;
multiplying each channel of the input features in turn by its feature value as a weight;
and adding the processed feature map to the input feature map as the output of the multi-view feature fusion module. A code sketch of this module follows.
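A matching PyTorch sketch of this channel attention (squeeze-and-excitation style) is given below; the reduction ratio r is an assumption:

    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, r=4):
            super().__init__()
            self.pool = nn.AdaptiveMaxPool3d(1)          # global max pooling per channel
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // r),      # compress
                nn.ReLU(inplace=True),
                nn.Linear(channels // r, channels),      # decompress to the channel count
                nn.ReLU(inplace=True),
                nn.Sigmoid(),                            # per-channel importance in [0, 1]
            )

        def forward(self, x):
            n, c = x.shape[:2]
            w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
            return x * w + x                             # channel-wise reweighting plus residual add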
The multi-view feature fused MRI image segmentation network model can extract features of the MRI image both within image layers and between image layers, that is, from the axial, coronal, and sagittal views, and then adaptively weights and fuses the features extracted from the different views so as to extract the target features that have greater influence on the segmentation result.
Further, inspired by the residual module, the multi-view feature fused MRI image segmentation network model adds the feature map processed by the channel attention module to the input feature map as the output of the multi-view feature fusion module.
108. And the training module trains the multi-view feature fused MRI image segmentation network model.
After constructing the multi-view feature fused MRI image segmentation network model, the present disclosure can also train and adjust it on the training set to improve segmentation accuracy. The specific process includes:
step 8 a: selecting 8 data pairs from an image training sample set to form a training batch;
and step 8 b: carrying out forward propagation on data of a training batch through an MRI image segmentation network model with multi-view feature fusion to obtain a prediction result of the model;
and step 8 c: calculating a Dice coefficient between each segmented lesion area image in the prediction result and the corresponding gold standard;
and step 8 d: judging whether the Dice loss function (the loss based on the Dice coefficient) on the validation set has failed to decrease for 5 epochs (training cycles); if it is still decreasing, continuing to step 8 e; otherwise, stopping network training and proceeding to step 9;
step 8 e: updating the weight parameters of each layer in the multi-view feature fused MRI image segmentation network model using the Adam algorithm; a sketch of this training loop is given below.
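Steps 8a-8e can be sketched as the following training loop; the learning rate and the maximum epoch count are assumptions, while the Dice criterion, the 5-epoch early-stopping rule, and the Adam update come from the steps above:

    import torch

    def soft_dice_loss(pred, target, eps=1e-6):
        """1 - Dice coefficient, computed on predicted probabilities."""
        inter = (pred * target).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    def train(model, train_loader, val_loader, patience=5, max_epochs=300):
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # step 8e: Adam updates (lr assumed)
        best_val, stale = float("inf"), 0
        for epoch in range(max_epochs):
            model.train()
            for img, lbl in train_loader:                     # step 8a: batches of 8 data pairs
                opt.zero_grad()
                loss = soft_dice_loss(torch.sigmoid(model(img)), lbl)  # steps 8b-8c
                loss.backward()
                opt.step()
            model.eval()
            with torch.no_grad():                             # step 8d: monitor validation Dice loss
                val = sum(soft_dice_loss(torch.sigmoid(model(i)), l).item()
                          for i, l in val_loader) / len(val_loader)
            best_val, stale = (val, 0) if val < best_val else (best_val, stale + 1)
            if stale >= patience:                             # no improvement for 5 epochs: stop
                break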
109. The image segmentation module segments the MRI image through the multi-view feature fused MRI image segmentation network model.
Segmenting the MRI image with the multi-view feature fused MRI image segmentation network model in the present disclosure specifically includes:
the image segmentation module predicts the MRI image block by block in a sliding-window manner and stitches the block-by-block prediction results back together to obtain the segmentation result for the lesion area in the MRI image; a sketch of this procedure is given below.
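A sketch of this sliding-window prediction follows; the half-patch stride and the averaging of overlapping predictions are assumptions:

    import torch

    def sliding_window_predict(model, volume, patch=(8, 128, 128), stride=(4, 64, 64)):
        """Predict a volume block by block and stitch the blocks back together."""
        out, hits = torch.zeros_like(volume), torch.zeros_like(volume)
        (pz, py, px), (sz, sy, sx) = patch, stride
        Z, Y, X = volume.shape
        model.eval()
        with torch.no_grad():
            for z in range(0, Z - pz + 1, sz):
                for y in range(0, Y - py + 1, sy):
                    for x in range(0, X - px + 1, sx):
                        blk = volume[z:z+pz, y:y+py, x:x+px][None, None]   # add batch/channel axes
                        out[z:z+pz, y:y+py, x:x+px] += torch.sigmoid(model(blk))[0, 0]
                        hits[z:z+pz, y:y+py, x:x+px] += 1
        return out / hits.clamp(min=1)   # average overlaps; uncovered border voxels remain 0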
110. and the image display module displays the image of the segmented lesion area.
To facilitate understanding of the solution provided by the present disclosure, the present solution is illustrated herein by a specific simulation test procedure: here, as shown in fig. 3, fig. 3a is an MRI image without segmentation processing, and fig. 3b is an image processed by an MRI image segmentation network model with multi-view feature fusion provided by the present disclosure.
The specific simulation training image set of the simulation experiment is as follows: 3D MRI images of 44 cervical cancer patients are obtained, an MRI image segmentation network model is trained through the images, and the trained MRI image segmentation network model is used for carrying out simulation test on the 3D MRI images of 8 patients in the test set.
The cervical cancer MRI image data used in the present disclosure were acquired with a scanning layer thickness of 6 mm, an in-plane resolution of 0.5 x 0.5 per scanning layer, and a scanning magnetic field strength of 3 T. To ensure the reliability of the segmentation labels, the cervical cancer in all images was delineated by an experienced imaging physician, and the delineated results were used as reference images.
For network training, the MRI images of 44 patients were randomly assigned to the training and validation sets, and the MRI images of 8 patients were used as an independent test set. Fig. 3a is an axial view of a sample taken randomly from the test sample set. The image in fig. 3a is input into the trained multi-view feature fused MRI image segmentation network model, a segmentation result is obtained through one forward propagation of the model, and three-dimensional reconstruction of the segmentation result yields the segmentation result shown in fig. 3b.
The Dice similarity coefficient between the segmentation result produced by the multi-view feature fused MRI image segmentation network model and its corresponding gold standard is calculated using the following formula:
$$DC = \frac{2\,|X \cap Y|}{|X| + |Y|}$$

where $X$ denotes the prediction result of the model, $Y$ denotes the reference gold standard, and $DC$ denotes the Dice coefficient, whose value range is [0, 1]. The larger the Dice coefficient, the greater the overlap between the model's segmentation result and the gold standard, and the better the segmentation effect.
The maximum distance from the MF-U-Net segmentation result to the nearest point in the gold standard is calculated using the following Hausdorff distance formula:
$$H(X, Y) = \max\{\, d_H(X, Y),\ d_H(Y, X) \,\}, \qquad d_H(X, Y) = \max_{x \in X} \min_{y \in Y} d(x, y)$$

where $H(X, Y)$ denotes the two-way (bidirectional) Hausdorff distance, $d_H(X, Y)$ and $d_H(Y, X)$ denote the one-way Hausdorff distances, $d(x, y)$ denotes the Euclidean distance from a pixel point in the segmentation result to a pixel point in the gold standard, $x \in X$ means that $x$ is an element of $X$, and $y \in Y$ means that $y$ is an element of $Y$.
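Both metrics above can be computed as sketched below; using SciPy's directed_hausdorff on foreground voxel coordinates is one straightforward choice, not one mandated by the disclosure:

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
        """DC = 2|X ∩ Y| / (|X| + |Y|) on binary masks."""
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)   # epsilon avoids 0/0 on empty masks

    def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
        """Bidirectional Hausdorff distance between the two foreground voxel sets."""
        p, g = np.argwhere(pred), np.argwhere(gt)
        return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])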
The Dice similarity coefficients and Hausdorff distances obtained by the multi-view feature fused MRI image segmentation network model on the test set are shown in fig. 4. Specifically, fig. 4 is a table comparing the Dice similarity coefficient and Hausdorff distance obtained on the test samples according to the present disclosure.
The above results show that: the MRI image segmentation network model with multi-view feature fusion can extract image features of different views, the importance degree of the features extracted from different views is modeled by using the channel attention module, self-adaptive weighting fusion is carried out according to the difference of the importance degree, the network model can extract richer and effective features, and the segmentation effect in the MRI image segmentation task of cervical cancer is improved.
FIG. 2 is a schematic diagram of the processing logic for segmenting an image with the disclosed method in practice: an MRI image is acquired, preprocessed, and processed by the multi-view feature fused MRI image segmentation network model to segment the lesion region in the MRI image, where the preprocessing comprises the normalization and bias field correction of the steps above, ensuring that the image meets the input requirements of the network model.
The multi-view feature fused MRI image segmentation network model provided by the embodiment of the disclosure mainly comprises a multi-view feature fusion module and a residual adjustment module. The multi-view feature fusion module extracts features from the input image blocks from different views; the channel attention module then adaptively weights and fuses the extracted features so as to extract features more favorable to the segmentation result; and the residual adjustment module further mines spatial information and refines the features extracted by the multi-view feature fusion module. The model is trained with the normalized image blocks, and the trained multi-view feature fused MRI image segmentation network model is used to segment the cervical cancer region in the cervical cancer MRI image.
According to the cervical cancer MRI image segmentation method provided by the embodiment of the disclosure, when an MRI image including a cervical cancer lesion area is acquired, the MRI image is subjected to labeling, bias field correction, and normalization, and the processed MRI image is input into the multi-view feature fused MRI image segmentation network model, which segments the MRI image by extracting both inter-layer and intra-layer image features, thereby obtaining the image area containing the cervical cancer lesion in the MRI image. The multi-view feature fused MRI image segmentation network model of the present disclosure extracts features of the input image separately from the axial, coronal, and sagittal views and performs adaptive weighted fusion on the features extracted from different views so as to extract the features that carry large weight in the segmentation result, thereby improving segmentation accuracy. Meanwhile, by using separable convolutions in its implementation, the model reduces the number of parameters while preserving segmentation quality, so that inference is faster in the application stage, the computational cost of the image segmentation method is reduced, and the applicability of the method is improved.
Example two
The embodiment of the present disclosure provides an MRI image segmentation apparatus for cervical cancer. As shown in fig. 5, the cervical cancer MRI image segmentation apparatus 50 is based on a multi-view feature fused MRI image segmentation network model, which includes: a multi-view feature fusion module for extracting features of the input image blocks from different views; and a channel attention module for performing adaptive weighted fusion on the extracted features and accurately segmenting the MRI image according to the adaptively fused features.
The apparatus 50 comprises an image labeling module 501, a bias field correction module 502, an image resampling module 503, an image normalization module 504, an image cropping module 505, an image segmentation module 506, and an image display module 507; wherein:
an image labeling module 501, configured to delineate a lesion region in an MRI image and generate a mask image corresponding to the lesion region;
a bias field correction module 502 for extracting a bias field from each MRI image to correct the image;
the image resampling module 503 is configured to resample MRI images corresponding to different patients, so that resolutions of the MRI images are consistent;
an image normalization module 504, configured to normalize the MRI image after resampling processing by using a normalization formula, and map the pixel value of the MRI image to the interval [0, 1];
an image cropping module 505, configured to crop an MRI image to obtain an image block;
an image segmentation module 506, configured to segment the lesion region in the MRI image through the multi-view feature fused MRI image segmentation network model;
and an image display module 507, configured to display an image of the segmented lesion region.
In one embodiment, the normalization formula is as follows:
$$\hat{X}(i,j,k) = \frac{X(i,j,k) - \min(X)}{\max(X) - \min(X)}$$

where $\hat{X}(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the normalized MRI image, $X(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the MRI image, $X$ denotes the matrix of the MRI image, $\min(X)$ denotes the minimum value over all pixel points in the MRI image, and $\max(X)$ denotes the maximum value over all pixel points in the MRI image.
In one embodiment, the multi-view feature fusion module in the multi-view feature fused MRI image segmentation network model is specifically configured to:
processing the image input into the multi-view feature fusion module with a convolution layer having a 1x1x1 kernel to obtain two data streams;
extracting three-dimensional features from the first data stream with a convolution layer having a 3x3x3 kernel to obtain a feature map;
passing the second data stream through a convolution layer having a 3x3x1 kernel, which performs a 2-dimensional convolution on the image block layer by layer, to obtain three new data streams: branch 1, branch 2, and branch 3; applying convolutions with 3x1x3 and 1x3x3 kernels to branch 1 and branch 3 respectively to obtain the corresponding feature maps;
and merging the feature map of the first data stream with the feature maps of branch 1, branch 2, and branch 3 along the channel dimension.
In one embodiment, the channel attention module in the multi-view feature fused MRI image segmentation network model is specifically configured to:
performing global max pooling on the input feature map along the channel direction, compressing the features through a fully connected layer, and using a ReLU activation function to enhance nonlinear fitting capability;
decompressing the features through a fully connected layer so that the feature dimension matches the number of channels of the input feature map, applying the ReLU activation again, and then using a Sigmoid activation function to limit the feature values to the interval [0, 1], where each feature value represents the importance of the corresponding channel;
multiplying each channel of the input features in turn by its feature value as a weight;
and adding the processed feature map to the input feature map as the output of the multi-view feature fusion module.
When an MRI image including a cervical cancer lesion area is acquired, the MRI image is subjected to labeling, bias field correction, and normalization, and the processed MRI image is input into the multi-view feature fused MRI image segmentation network model, which segments the MRI image by extracting both inter-layer and intra-layer image features, thereby obtaining the image area containing the cervical cancer lesion in the MRI image. The multi-view feature fused MRI image segmentation network model of the present disclosure extracts features of the input image separately from the axial, coronal, and sagittal views, and performs adaptive weighted fusion on the features extracted from different views so as to extract the features that carry large weight in the segmentation result, thereby improving segmentation accuracy. Meanwhile, by using separable convolutions in its implementation, the multi-view feature fused MRI image segmentation network model reduces the number of model parameters while preserving segmentation quality, so that inference is faster in the application stage, the computational cost of the image segmentation method is reduced, and the applicability of the method is improved.
Based on the cervical cancer MRI image segmentation method described in the above embodiment corresponding to fig. 1 and fig. 2, the embodiment of the present disclosure further provides a computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a Read Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage system, and the like. The storage medium has stored thereon computer instructions for executing the cervical cancer MRI image segmentation method described in the above embodiment corresponding to fig. 1 and 2, which will not be described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. An MRI image segmentation method for cervical cancer, characterized in that the method is based on a multi-view feature fused MRI image segmentation network model, the MRI image segmentation network model comprising: a multi-view feature fusion module for extracting features of the input image from different views; and a channel attention module for performing adaptive weighted fusion on the extracted features and accurately segmenting the MRI image according to the adaptively fused features; the method specifically comprising the following steps:
step 1: the image labeling module labels the MRI image, and specifically comprises:
the image labeling module delineates a lesion area frame by frame from the MRI images and stores the delineated images, wherein the delineated images serve as gold standards corresponding to each MRI image, and the lesion area comprises a cervical cancer area;
step 2: the bias field correction module performs bias field correction processing on the MRI image, and specifically comprises the following steps:
the bias field correction module extracts a bias field from each MRI image to correct the image;
and step 3: the image resampling module is used for resampling the MRI image, and specifically comprises the following steps:
the image resampling module uses the SimpleITK toolkit to resample the MRI images corresponding to different patients so that the resolutions of the MRI images are consistent;
and 4, step 4: the image normalization module normalizes the MRI image, and specifically includes:
the image normalization module normalizes the MRI image after resampling processing by using a normalization formula and maps the pixel value of the MRI image to the interval [0, 1];
and 5: the image cropping module crops the MRI image to obtain an image block, and specifically comprises:
the image cropping module randomly crops the MRI image of each patient to obtain a preset number of 3D image blocks of the same size; the same cropping operation is performed at the same positions in the label image corresponding to each patient's MRI image to obtain the label corresponding to each image block;
step 6: the generation sample module generates an MRI image training sample set;
randomly selecting a preset proportion of images from all labeled image data as the training set, taking the remaining images as the test set, and randomly selecting a certain proportion of images from the test set as the validation set used to select the final model;
and 7: a constructing network model module constructs the MRI image segmentation network model; wherein the MRI image segmentation network model as a whole follows an encoder-decoder structure;
and 8: a training module trains the MRI image segmentation network model;
and step 9: the image segmentation module segments the MRI image through the MRI image segmentation network model, and specifically includes:
the image segmentation module predicts the MRI image block by block in a sliding window mode, and splices and reconstructs the prediction result block by block to obtain the segmentation result of the lesion area in the MRI image;
step 10: and the image display module displays the image of the segmented lesion area.
2. The method according to claim 1, characterized in that said step 8: the training module trains the MRI image segmentation network model, and specifically comprises:
step 8 a: selecting 8 data pairs from an image training sample set to form a training batch;
and step 8 b: carrying out forward propagation on data of a training batch through the MRI image segmentation network model to obtain a prediction result of the model;
and step 8 c: calculating a Dice coefficient between each segmented lesion area image in the prediction result and the corresponding gold standard;
and step 8 d: judging whether the Dice loss on the validation set has failed to decrease for 5 epochs; if it is still decreasing, continuing to step 8 e; otherwise, stopping network training and proceeding to step 9;
step 8 e: and updating the weight parameters of each layer in the MRI image segmentation network model by using an Adam algorithm.
3. The method according to claim 1 or 2, wherein the normalization formula in step 4 is as follows:
$$\hat{X}(i,j,k) = \frac{X(i,j,k) - \min(X)}{\max(X) - \min(X)}$$

where $\hat{X}(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the normalized MRI image, $X(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the MRI image, $X$ denotes the matrix of the MRI image, $\min(X)$ denotes the minimum value over all pixel points in the MRI image, and $\max(X)$ denotes the maximum value over all pixel points in the MRI image.
4. The method according to claim 1 or 2, wherein the clipping operation in step 5 is performed as follows:
step 5a, randomly scattering 2500 seed points in the internal area of the whole MRI image;
step 5b, cropping out image blocks of size 128 × 128 × 8 centered on the seed points;
step 5c, processing the label of each image in the same way as the step 5a and the step 5 b;
step 5d, checking block by block the number of pixel points containing lesion features in the image blocks cropped from the labels; if the number is greater than 5, keeping the image block; otherwise, deleting the label block and the MRI image block corresponding to it.
5. The method according to claim 1 or 2, wherein the multi-perspective feature fusion module is specifically configured to:
processing the image input into the multi-view feature fusion module with a convolution layer having a 1x1x1 kernel to obtain two data streams;
extracting three-dimensional features from the first data stream with a convolution layer having a 3x3x3 kernel to obtain a feature map;
passing the second data stream through a convolution layer having a 3x3x1 kernel, which performs a 2-dimensional convolution on the image block layer by layer, to obtain three new data streams: branch 1, branch 2, and branch 3; applying convolutions with 3x1x3 and 1x3x3 kernels to branch 1 and branch 3 respectively to obtain the corresponding feature maps;
and merging the feature map of the first data stream with the feature maps of branch 1, branch 2, and branch 3 along the channel dimension.
6. The method of claim 5, wherein the channel attention module is specifically configured to:
performing global max pooling on the input feature map along the channel direction, compressing the features through a fully connected layer, and using a ReLU activation function to enhance nonlinear fitting capability;
decompressing the features through a fully connected layer so that the feature dimension matches the number of channels of the input feature map, applying the ReLU activation again, and then using a Sigmoid activation function to limit the feature values to the interval [0, 1], where each feature value represents the importance of the corresponding channel;
multiplying each channel of the input features in turn by its feature value as a weight;
and adding the processed feature map to the input feature map as the output of the multi-view feature fusion module.
7. An MRI image segmentation apparatus for cervical cancer, characterized in that the apparatus is based on a multi-view feature fused MRI image segmentation network model, the MRI image segmentation network model comprising: a multi-view feature fusion module for extracting features of the input image blocks from different views; and a channel attention module for performing adaptive weighted fusion on the extracted features and accurately segmenting the MRI image according to the adaptively fused features; the apparatus comprising an image labeling module, a bias field correction module, an image resampling module, an image normalization module, an image cropping module, an image segmentation module, and an image display module; wherein:
the image labeling module is used for delineating a lesion area in the MRI image and generating a mask image corresponding to the lesion area;
the bias field correction module is used for extracting a bias field from each MRI image to correct the image;
the image resampling module is used for resampling the MRI images corresponding to different patients so that the resolutions of the MRI images are consistent;
the image normalization module is used for normalizing the MRI image after resampling processing by using a normalization formula and mapping the pixel value of the MRI image to the interval [0, 1];
the image cropping module is used for cropping the MRI image to obtain image blocks;
the image segmentation module is used for segmenting a lesion region in the MRI image through the MRI image segmentation network model;
and the image display module is used for displaying the image of the segmented lesion area.
8. The apparatus of claim 7, wherein the normalization formula is as follows:
$$\hat{X}(i,j,k) = \frac{X(i,j,k) - \min(X)}{\max(X) - \min(X)}$$

where $\hat{X}(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the normalized MRI image, $X(i,j,k)$ denotes the gray value of the pixel point at coordinate position (i, j, k) in the MRI image, $X$ denotes the matrix of the MRI image, $\min(X)$ denotes the minimum value over all pixel points in the MRI image, and $\max(X)$ denotes the maximum value over all pixel points in the MRI image.
9. The apparatus according to claim 7 or 8, wherein the multi-perspective feature fusion module is specifically configured to:
processing the image input into the multi-view feature fusion module with a convolution layer having a 1x1x1 kernel to obtain two data streams;
extracting three-dimensional features from the first data stream with a convolution layer having a 3x3x3 kernel to obtain a feature map;
passing the second data stream through a convolution layer having a 3x3x1 kernel, which performs a 2-dimensional convolution on the image block layer by layer, to obtain three new data streams: branch 1, branch 2, and branch 3; applying convolutions with 3x1x3 and 1x3x3 kernels to branch 1 and branch 3 respectively to obtain the corresponding feature maps;
and merging the feature map of the first data stream with the feature maps of branch 1, branch 2, and branch 3 along the channel dimension.
10. The apparatus of claim 9, wherein the channel attention module is specifically configured to:
performing global max pooling on the input feature map along the channel direction, compressing the features through a fully connected layer, and using a ReLU activation function to enhance nonlinear fitting capability;
decompressing the features through a fully connected layer so that the feature dimension matches the number of channels of the input feature map, applying the ReLU activation again, and then using a Sigmoid activation function to limit the feature values to the interval [0, 1], where each feature value represents the importance of the corresponding channel;
multiplying each channel of the input features in turn by its feature value as a weight;
and adding the processed feature map to the input feature map as the output of the multi-view feature fusion module.
CN202010601807.6A 2020-06-28 2020-06-28 Cervical cancer MRI image segmentation device and method Pending CN113850816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601807.6A CN113850816A (en) 2020-06-28 2020-06-28 Cervical cancer MRI image segmentation device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010601807.6A CN113850816A (en) 2020-06-28 2020-06-28 Cervical cancer MRI image segmentation device and method

Publications (1)

Publication Number Publication Date
CN113850816A (en) 2021-12-28

Family

ID=78972754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601807.6A Pending CN113850816A (en) 2020-06-28 2020-06-28 Cervical cancer MRI image segmentation device and method

Country Status (1)

Country Link
CN (1) CN113850816A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330759A (en) * 2022-10-12 2022-11-11 浙江霖研精密科技有限公司 Method and device for calculating distance loss based on Hausdorff distance
CN115330759B (en) * 2022-10-12 2023-03-10 浙江霖研精密科技有限公司 Method and device for calculating distance loss based on Hausdorff distance

Similar Documents

Publication Publication Date Title
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN110310281B (en) Mask-RCNN deep learning-based pulmonary nodule detection and segmentation method in virtual medical treatment
US11430140B2 (en) Medical image generation, localizaton, registration system
WO2021238438A1 (en) Tumor image processing method and apparatus, electronic device, and storage medium
CN109598722B (en) Image analysis method based on recurrent neural network
CN114240962B (en) CT image liver tumor region automatic segmentation method based on deep learning
CN103249358B (en) Medical image-processing apparatus
CN111179237B (en) Liver and liver tumor image segmentation method and device
US20070031020A1 (en) Method and apparatus for intracerebral hemorrhage lesion segmentation
CN111340825B (en) Method and system for generating mediastinum lymph node segmentation model
CN111008984A (en) Method and system for automatically drawing contour line of normal organ in medical image
CN111696126B (en) Multi-view-angle-based multi-task liver tumor image segmentation method
CN113159040B (en) Method, device and system for generating medical image segmentation model
CN110728239A (en) Gastric cancer enhanced CT image automatic identification system utilizing deep learning
CN110751187A (en) Training method of abnormal area image generation network and related product
Lindner et al. Using synthetic training data for deep learning-based GBM segmentation
CN113706486A (en) Pancreas tumor image segmentation method based on dense connection network migration learning
CN114881914A (en) System and method for determining three-dimensional functional liver segment based on medical image
CN113850816A (en) Cervical cancer MRI image segmentation device and method
CN115841457A (en) Three-dimensional medical image segmentation method fusing multi-view information
CN114708283A (en) Image object segmentation method and device, electronic equipment and storage medium
CN115330732A (en) Method and device for determining pancreatic cancer
CN114820483A (en) Image detection method and device and computer equipment
EP3905192A1 (en) Region identification device, method, and program
KR102311472B1 (en) Method for predicting regions of normal tissue and device for predicting regions of normal tissue using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination