CN116524178A - MRI image tissue segmentation method and imaging method based on semi-supervision - Google Patents

MRI image tissue segmentation method and imaging method based on semi-supervision

Info

Publication number
CN116524178A
Authority
CN
China
Prior art keywords: image, loss, MRI, tissue segmentation, semi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310224096.9A
Other languages
Chinese (zh)
Inventor
陈再良
侯雅筝
沈海澜
张健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202310224096.9A
Publication of CN116524178A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/7753 Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a semi-supervised MRI image tissue segmentation method, which comprises: acquiring image data and labeling part of it to obtain labeled and unlabeled data images; constructing a preliminary image tissue segmentation model; randomly selecting a number of images and augmenting them to obtain strongly augmented and weakly augmented images; inputting selected labeled images into the current segmentation model and computing the supervised loss and the boundary loss; inputting selected unlabeled images into the current segmentation model and computing the unsupervised loss; combining all losses into a total loss function and back-propagating through a gradient descent algorithm to update the parameters of the current segmentation model; repeating these steps until the final image tissue segmentation model is obtained; and performing tissue segmentation of actual MRI images with the obtained model. The invention also discloses an imaging method comprising the semi-supervised MRI image tissue segmentation method. The invention has high reliability, good accuracy and a good segmentation effect.

Description

MRI image tissue segmentation method and imaging method based on semi-supervision
Technical Field
The invention belongs to the field of image processing, and particularly relates to a semi-supervised MRI (magnetic resonance imaging) image tissue segmentation method and an imaging method.
Background
With the development of economic technology and the improvement of living standards, people pay increasing attention to medical treatment and health. With the advent of the information and intelligence age, the fusion of deep learning technology with computer-aided medical image processing has become a research focus.
MRI images are an important component of the medical field; the accuracy of tissue segmentation of MRI images therefore greatly affects the subsequent use and analysis of the image data. Currently, tissue segmentation of MRI images still faces two problems. First, although MRI supports multi-planar imaging and has high resolution, it remains difficult to construct a general MRI tissue segmentation framework because of the different parameter settings of MRI scanning devices and the morphological differences between tissues. Second, label acquisition in medical image segmentation is difficult, and high-quality labels often require extensive effort from experienced physicians and experts. Semi-supervised medical image segmentation methods have therefore emerged.
Existing semi-supervised medical image segmentation methods can be roughly divided into methods based on pseudo-label refinement and methods based on consistency regularization. Pseudo-label refinement methods focus excessively on the quality of the pseudo labels and ignore the semantic correlation between different images or views. Consistency regularization methods focus only on uncertainty across different scales, and their fusion mechanisms are too simple. Researchers have also proposed semi-supervised medical image segmentation methods that combine pseudo-label refinement and consistency regularization. However, these methods tend to impose consistency constraints too directly and do not filter out uncertain pixels, so the network may learn incorrect pixels and ignore correct ones, losing its learning target and degrading performance.
Disclosure of Invention
The invention aims to provide a semi-supervised MRI image tissue segmentation method with high reliability, good accuracy and good segmentation effect.
It is a second object of the present invention to provide an imaging method comprising the semi-supervised MRI image tissue segmentation method.
The MRI image tissue segmentation method based on semi-supervision provided by the invention comprises the following steps:
s1, acquiring existing MRI image data and marking the same, so as to obtain a tagged data image and a non-tagged data image;
s2, constructing a semi-supervised MRI image tissue segmentation preliminary model;
s3, randomly selecting a plurality of images from the data acquired in the step S1;
s4, performing image augmentation operation on the image obtained in the step S3, so as to obtain a strong augmentation image and a weak augmentation image;
s5, selecting a labeled image from the images obtained in the step S4, inputting the labeled image into a current segmentation model, and calculating to obtain the loss of a supervision part and the boundary loss of the labeled image;
s6, selecting an unlabeled image from the images obtained in the step S4, inputting the unlabeled image into a current segmentation model, and calculating to obtain an unsupervised loss;
s7, integrating the supervised partial loss and the boundary loss obtained in the step S5 and the unsupervised loss obtained in the step S6 to form a total loss function, and carrying out back propagation through a gradient descent algorithm to update the parameters of the current segmentation model;
s8, repeating the steps S3 to S7 until the set conditions are met, and obtaining a final MRI image tissue segmentation model based on semi-supervision;
s9, performing actual tissue segmentation of the MRI image by adopting the semi-supervised MRI image tissue segmentation model obtained in the step S8.
The step S1 of acquiring and marking the existing MRI image data to obtain a tagged data image and a non-tagged data image specifically includes the steps of:
acquiring existing MRI image data;
marking part of the data, so as to obtain the labeled data image set $D_l=\{(x_i^l,y_i)\}_{i=1}^{N}$ and the unlabeled data image set $D_u=\{x_i^u\}_{i=1}^{M}$, wherein $x_i^l$ is the i-th labeled data image, $y_i$ is the label of the i-th labeled data image, $N$ is the total number of labeled data images, $x_i^u$ is the i-th unlabeled data image, and $M$ is the total number of unlabeled data images.
The step S2 of constructing a semi-supervised MRI image tissue segmentation preliminary model specifically comprises the following steps:
the model comprises a shared encoder, two decoders and two convolution attention modules;
the shared encoder comprises five convolution modules, each comprising a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU, and each followed by a 2 x 2 pooling layer;
the first decoder comprises five upsampling modules, each upsampling module comprising an upsampling layer based on bilinear interpolation, a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU;
the second decoder comprises five upsampling modules, each upsampling module comprising an upsampling layer based on bilinear interpolation, a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU;
each layer of output of the shared encoder and the same-size characteristic output of the corresponding layer of the first decoder are cascaded to obtain a first cascade characteristic, and the first cascade characteristic is input into a first convolution attention module;
the same-size characteristics of each layer output of the shared encoder and the corresponding layer output of the second decoder are cascaded to obtain second cascading characteristics, and the second cascading characteristics are input into a second convolution attention module;
the convolution attention module comprises two parallel submodules: a channel attention submodule and a spatial attention submodule; the channel attention submodule comprises a maximum pooling layer, an average pooling layer and a multi-layer perceptron; the spatial attention submodule comprises a maximum pooling layer, an average pooling layer and a convolution module layer, wherein the convolution module layer comprises a 3 × 3 convolution block, a 2 × 2 pooling layer, a batch normalization layer and an activation function ReLU;
the output feature of the channel attention submodule and the output feature of the spatial attention submodule are each multiplied by the cascade feature; the two products are added, and the result is input into a convolution module to obtain the attention-activated feature; finally, the attention-activated feature is input into the corresponding network layer of the first decoder or the second decoder for subsequent decoding; the convolution module comprises a 3 × 3 convolution block, a 2 × 2 pooling layer, a batch normalization layer and an activation function ReLU.
Performing the image augmentation operation of step S4 on the images obtained in step S3, so as to obtain strongly augmented and weakly augmented images, specifically comprises the following steps:
dividing the image acquired in the step S3 into two parts:
performing the weak augmentation operation on the first part of the images, so as to obtain weakly augmented images; the weak augmentation operations comprise random resizing, cropping, flipping and rotation;
performing the strong augmentation operation on the second part of the images, so as to obtain strongly augmented images; the strong augmentation operations comprise adjusting image brightness, adjusting image contrast, adjusting image blur and adding Gaussian noise;
both the first part and the second part of the images comprise labeled and unlabeled data images.
Selecting a labeled image from the images obtained in step S4, inputting it into the current segmentation model, and calculating the supervised loss and the boundary loss of the labeled image in step S5 specifically includes the following steps:
inputting the labeled image into a current segmentation model to obtain a tissue segmentation result of the MRI image, and calculating the loss of a supervision part;
and extracting the boundary of the MRI image by adopting a Sobel operator according to the tissue segmentation result of the obtained MRI image, then calculating a distance map of the image, and calculating to obtain the boundary loss.
The method for calculating the loss of the supervision part specifically comprises the following steps:
the supervised loss $L_{sup}$ is calculated using the following formula:

$$L_{sup}=\frac{1}{|D_l|}\sum_{x_l\in D_l}\Big[L_{ce}\big(h_1(f(A_1(x_l))),y\big)+L_{ce}\big(h_2(f(A_2(x_l))),y\big)\Big]$$

wherein $D_l$ is the labeled image set; $|D_l|$ is the number of labeled images; $x_l$ is a labeled image; $y$ is the label corresponding to $x_l$; $L_{ce}(\cdot)$ is the cross-entropy loss function; $A_1(\cdot)$ is the weak augmentation function; $A_2(\cdot)$ is the strong augmentation function; $f(\cdot)$ is the encoder; $f(A_1(x_l))$ is the feature map obtained after the weakly augmented image passes through the encoder; $f(A_2(x_l))$ is the feature map obtained after the strongly augmented image passes through the encoder; $h_1(\cdot)$ is the first decoder; $h_2(\cdot)$ is the second decoder.
The method for calculating the distance map of the image specifically comprises the following steps:
the distance map $Dist(x,y)$ is calculated using the following formula:

$$Dist(x,y)=\min_{(x_0,y_0)\in B_1}Euclid\big((x,y),(x_0,y_0)\big)$$

wherein $B_1$ is the boundary map of the label; $(x_0,y_0)$ is the location of a boundary pixel in the label; and $Euclid(a,b)$ is the Euclidean distance between position $a$ and position $b$.
The calculation to obtain the boundary loss specifically comprises the following steps:
the boundary loss $L_{boundary}$ is calculated using the following formula:

$$L_{boundary}=\frac{1}{C}\sum_{c=1}^{C}\sum_{(x,y)\in B_2}Dist(x,y)\,p(x,y)$$

wherein $C$ is the number of categories; $p(x,y)$ is the probability map; and $B_2$ is the boundary map extracted from the MRI image.
Selecting an unlabeled image from the images obtained in step S4, inputting it into the current segmentation model, and calculating the unsupervised loss in step S6 specifically includes the following steps:
inputting the selected unlabeled image into the current segmentation model to obtain the first view-level MRI tissue segmentation probability map $p_1$ output by the first decoder and the second view-level MRI tissue segmentation probability map $p_2$ output by the second decoder;
calculating the corresponding confidence maps $c_1$ and $c_2$ from the obtained probability maps $p_1$ and $p_2$;
generating an image-level pseudo label from the obtained probability maps $p_1$ and $p_2$ and the confidence maps $c_1$ and $c_2$;
and calculating the unsupervised loss from the obtained image-level pseudo label and the probability maps.
The generation of the image-level pseudo label specifically comprises the following steps:
calculating the image-level pseudo label $\hat{y}_u$ by fusing the two view-level probability maps $p_1$ and $p_2$ weighted by their confidence maps $c_1$ and $c_2$.
The calculation of the unsupervised loss specifically comprises the following steps:
the unsupervised loss $L_u$ is calculated using the following formula:

$$L_u=\frac{1}{|D_u|}\sum_{x_u\in D_u}\Big[L_{ce}\big(p_1,\hat{y}_u\big)+L_{ce}\big(p_2,\hat{y}_u\big)\Big]$$

wherein $D_u$ is the unlabeled image set; $|D_u|$ is the number of unlabeled images; $x_u$ is an unlabeled image; $p_1$ and $p_2$ are the view-level probability maps of $x_u$; and $\hat{y}_u$ is the image-level pseudo label.
Synthesizing the supervised loss and the boundary loss obtained in step S5 and the unsupervised loss obtained in step S6 into a total loss function in step S7 specifically comprises the following steps:
the constructed total loss function $L_{loss}$ is $L_{loss}=L_{sup}+L_{boundary}+L_{u}$.
The invention also discloses an imaging method comprising the MRI image tissue segmentation method based on semi-supervision, which comprises the following steps:
A. acquiring actual MRI image data;
B. adopting the MRI image tissue segmentation method based on semi-supervision to segment the MRI image data obtained in the step A into tissues to obtain a tissue segmentation result;
C. and C, marking and secondarily imaging the tissue segmentation result obtained in the step B on the MRI image obtained in the step A, thereby completing corresponding MRI image imaging.
According to the semi-supervised MRI image tissue segmentation method and imaging method of the invention, the constructed MRI image tissue segmentation model performs view-level prediction, image-level pseudo labels are then generated to establish bidirectional consistency, and boundary optimization is performed at the same time to improve boundary segmentation quality; an accurate and robust tissue segmentation result can thus be obtained on multi-task MRI images, and the method has high reliability, good accuracy and a good segmentation effect.
Drawings
Fig. 1 is a flow chart of the segmentation method according to the present invention.
Fig. 2 is a general flow chart of the segmentation method of the present invention.
Fig. 3 is a schematic diagram of a convolution attention module in the segmentation method of the present invention.
Fig. 4 is a schematic diagram showing the effect of the segmentation method of the present invention.
Fig. 5 is a flow chart of the imaging method of the present invention.
Detailed Description
Fig. 1 is a flow chart of the segmentation method according to the present invention, and the segmentation process is shown in fig. 2:
the MRI image tissue segmentation method based on semi-supervision provided by the invention comprises the following steps:
s1, acquiring existing MRI image data and marking the same, so as to obtain a tagged data image and a non-tagged data image; the method specifically comprises the following steps:
acquiring existing MRI image data;
marking part of the data, so as to obtain the labeled data image set $D_l=\{(x_i^l,y_i)\}_{i=1}^{N}$ and the unlabeled data image set $D_u=\{x_i^u\}_{i=1}^{M}$, wherein $x_i^l$ is the i-th labeled data image, $y_i$ is the label of the i-th labeled data image, $N$ is the total number of labeled data images, $x_i^u$ is the i-th unlabeled data image, and $M$ is the total number of unlabeled data images;
in practice, the acquired MRI images may be mixed images or single images: a single image is an MRI image of only one tissue site, such as a female pelvic MRI image, a male prostate MRI image or a cardiac MRI image; a mixed image may simultaneously comprise images of several tissue sites, for example female pelvic, male prostate and cardiac MRI images;
s2, constructing a semi-supervised MRI image tissue segmentation preliminary model; the method specifically comprises the following steps:
the model comprises a shared encoder, two decoders and two convolution attention modules;
the shared encoder comprises five convolution modules, each comprising a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU, and each followed by a 2 x 2 pooling layer;
the first decoder comprises five upsampling modules, each upsampling module comprising an upsampling layer based on bilinear interpolation, a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU;
the second decoder comprises five upsampling modules, each upsampling module comprising an upsampling layer based on bilinear interpolation, a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU;
each layer of output of the shared encoder and the same-size characteristic output of the corresponding layer of the first decoder are cascaded to obtain a first cascade characteristic, and the first cascade characteristic is input into a first convolution attention module; in the implementation, the output of a first convolution module of a shared encoder and the output of a first up-sampling module of a first decoder are cascaded, and the cascaded characteristics are input to a first convolution attention module; likewise, the output of the second convolution module of the shared encoder and the output of the second up-sampling module of the first decoder are cascaded, and the cascaded features are input to the first convolution attention module; and so on;
the same-size characteristics of each layer output of the shared encoder and the corresponding layer output of the second decoder are cascaded to obtain second cascading characteristics, and the second cascading characteristics are input into a second convolution attention module; in the implementation, the output of a first convolution module of a shared encoder and the output of a first up-sampling module of a second decoder are cascaded, and the cascaded characteristics are input to a second convolution attention module; likewise, cascading the output of the second convolution module of the shared encoder with the output of the second upsampling module of the second decoder, and inputting the cascaded characteristic to the second convolution attention module; and so on;
the convolution attention module comprises two parallel submodules: a channel attention submodule and a spatial attention submodule; the channel attention submodule comprises a maximum pooling layer, an average pooling layer and a multi-layer perceptron; the spatial attention submodule comprises a maximum pooling layer, an average pooling layer and a convolution module layer, wherein the convolution module layer comprises a 3 × 3 convolution block, a 2 × 2 pooling layer, a batch normalization layer and an activation function ReLU;
the output feature of the channel attention submodule and the output feature of the spatial attention submodule are each multiplied by the cascade feature; the two products are added, and the result is input into a convolution module to obtain the attention-activated feature; the convolution module comprises a 3 × 3 convolution block, a 2 × 2 pooling layer, a batch normalization layer and an activation function ReLU; finally, the attention-activated feature (namely the output feature of the convolution attention module) is input into the corresponding network layer of the first decoder or the second decoder for subsequent decoding;
in specific implementation, a first convolution attention module is described as an example: if the input characteristic is a characteristic obtained by cascading the output of the first convolution module of the encoder and the output of the first up-sampling module of the first decoder, the output characteristic of the first convolution attention module is input to the second up-sampling module of the first decoder at the moment and is used as the input of the second up-sampling module of the first decoder; if the input characteristic is a characteristic obtained by cascading the output of the second convolution module of the encoder and the output of the second up-sampling module of the first decoder, the output characteristic of the first convolution attention module is input to the third up-sampling module of the first decoder at the moment and is used as the input of the third up-sampling module of the first decoder; and so on; meanwhile, the processing flow of the second convolution attention module is the same as that of the first convolution attention module;
the encoder can extract high-level characteristics of the original data and reduce the dimension of the data;
the first decoder and the second decoder can repair the detail and the space dimension of the data step by step so as to obtain a target output result;
the first convolution attention module and the second convolution attention module can enable the network to concentrate on training key features, neglect non-key features and improve network segmentation accuracy;
the channel attention submodule is used for generating a channel attention map, and can highlight key channels and inhibit non-key channels, so that the network effectively learns global features and the network representation capacity is improved;
the space attention submodule is used for generating a space attention map, and is mutually complemented with the channel attention submodule, so that the important space position can be highlighted, the non-important space position can be restrained, the network can effectively learn local characteristics, and the network performance is improved;
wherein the structure of the adopted convolution attention module is shown in fig. 3;
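By way of illustration only, the following is a minimal PyTorch sketch of such a parallel convolutional attention module. The class name, the channel-reduction ratio, the sigmoid gating and the 7×7 spatial convolution are assumptions made for the sketch rather than details prescribed by the method, and the fuller spatial convolution-module layer described above (with pooling, batch normalization and ReLU) is simplified to a single convolution:

```python
import torch
import torch.nn as nn

class ParallelConvAttention(nn.Module):
    """Sketch of the convolutional attention module: a channel-attention
    branch and a spatial-attention branch run in parallel (not sequentially
    as in standard CBAM); each branch re-weights the cascade
    (skip-concatenated) feature, the two re-weighted features are added,
    and a 3x3 conv + BN + ReLU module yields the attention-activated feature."""

    def __init__(self, channels, reduction=8):  # reduction ratio is an assumed hyper-parameter
        super().__init__()
        # channel attention: global max/avg pooling followed by a shared MLP (1x1 convs)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        # spatial attention: convolution over stacked channel-wise max/avg maps
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # output convolution module: 3x3 conv + BN + ReLU
        self.out_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: cascade feature of shape (B, C, H, W)
        # channel attention map, shape (B, C, 1, 1)
        ca = torch.sigmoid(self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
                           + self.mlp(x.mean(dim=(2, 3), keepdim=True)))
        # spatial attention map, shape (B, 1, H, W)
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.amax(dim=1, keepdim=True), x.mean(dim=1, keepdim=True)], dim=1)))
        # multiply each attention map with the cascade feature, add, then convolve
        return self.out_conv(ca * x + sa * x)
```

The module maps a cascade feature of shape (B, C, H, W) to an attention-activated feature of the same shape, which is then fed to the next upsampling module of the corresponding decoder.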
s3, randomly selecting a plurality of images from the data acquired in the step S1;
s4, performing image augmentation operation on the image obtained in the step S3, so as to obtain a strong augmentation image and a weak augmentation image; the method specifically comprises the following steps:
dividing the image acquired in the step S3 into two parts:
performing the weak augmentation operation on the first part of the images, so as to obtain weakly augmented images; the weak augmentation operations comprise random resizing, cropping, flipping and rotation;
performing the strong augmentation operation on the second part of the images, so as to obtain strongly augmented images; the strong augmentation operations comprise adjusting image brightness, adjusting image contrast, adjusting image blur and adding Gaussian noise;
both the first part and the second part of the images comprise labeled and unlabeled data images;
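For illustration, a minimal torchvision sketch of the two augmentation pipelines, operating on float tensor images, is given below; the crop size, probabilities, jitter strengths, blur kernel and noise level are assumed values rather than ones specified by the method:

```python
import torch
import torchvision.transforms as T

# Weak augmentation A1: random resizing/cropping, flipping and rotation
# (geometric perturbations; parameters below are illustrative assumptions).
weak_aug = T.Compose([
    T.RandomResizedCrop(256, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=15),
])

# Strong augmentation A2: brightness/contrast adjustment, blurring and
# additive Gaussian noise (photometric perturbations; parameter ranges
# are illustrative assumptions).
strong_aug = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4),
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    T.Lambda(lambda img: img + 0.05 * torch.randn_like(img)),
])
```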
s5, selecting a labeled image from the images obtained in the step S4, inputting the labeled image into a current segmentation model, and calculating to obtain the loss of a supervision part and the boundary loss of the labeled image; the method specifically comprises the following steps:
inputting the labeled image into a current segmentation model to obtain a tissue segmentation result of the MRI image, and calculating the loss of a supervision part;
in specific implementation, the method for calculating the supervision part loss specifically comprises the following steps:
the supervised loss $L_{sup}$ is calculated using the following formula:

$$L_{sup}=\frac{1}{|D_l|}\sum_{x_l\in D_l}\Big[L_{ce}\big(h_1(f(A_1(x_l))),y\big)+L_{ce}\big(h_2(f(A_2(x_l))),y\big)\Big]$$

wherein $D_l$ is the labeled image set; $|D_l|$ is the number of labeled images; $x_l$ is a labeled image; $y$ is the label corresponding to $x_l$; $L_{ce}(\cdot)$ is the cross-entropy loss function; $A_1(\cdot)$ is the weak augmentation function; $A_2(\cdot)$ is the strong augmentation function; $f(\cdot)$ is the encoder; $f(A_1(x_l))$ is the feature map obtained after the weakly augmented image passes through the encoder; $f(A_2(x_l))$ is the feature map obtained after the strongly augmented image passes through the encoder; $h_1(\cdot)$ is the first decoder; $h_2(\cdot)$ is the second decoder;
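A minimal PyTorch sketch of this supervised loss follows; `model.f`, `model.h1`, `model.h2`, the batch layout and the `weak_aug`/`strong_aug` callables are interface assumptions, and geometric weak augmentations are assumed to be applied to the label as well so that prediction and label remain aligned:

```python
import torch.nn.functional as F

def supervised_loss(model, labeled_batch):
    """Sketch of L_sup: the weakly augmented labeled image passes through
    the shared encoder f and the first decoder h1, the strongly augmented
    one through f and the second decoder h2, and both predictions are
    scored against the label y with cross-entropy."""
    loss = 0.0
    for x_l, y in labeled_batch:                      # (image, label) pairs
        logits1 = model.h1(model.f(weak_aug(x_l)))    # h1(f(A1(x_l)))
        logits2 = model.h2(model.f(strong_aug(x_l)))  # h2(f(A2(x_l)))
        loss = loss + F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y)
    return loss / len(labeled_batch)
```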
extracting the boundary of the MRI image by adopting a Sobel operator according to the tissue segmentation result of the obtained MRI image, then calculating a distance map of the image, and calculating to obtain boundary loss;
in specific implementation, the calculating the distance map of the image specifically includes the following steps:
the distance map $Dist(x,y)$ is calculated using the following formula:

$$Dist(x,y)=\min_{(x_0,y_0)\in B_1}Euclid\big((x,y),(x_0,y_0)\big)$$

wherein $B_1$ is the boundary map of the label; $(x_0,y_0)$ is the location of a boundary pixel in the label; and $Euclid(a,b)$ is the Euclidean distance between position $a$ and position $b$;
the calculating of the boundary loss specifically comprises the following steps:
the boundary loss $L_{boundary}$ is calculated using the following formula:

$$L_{boundary}=\frac{1}{C}\sum_{c=1}^{C}\sum_{(x,y)\in B_2}Dist(x,y)\,p(x,y)$$

wherein $C$ is the number of categories; $p(x,y)$ is the probability map; and $B_2$ is the boundary map extracted from the MRI image;
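The following NumPy/SciPy sketch illustrates the three quantities involved: Sobel boundary extraction, the label distance map, and the resulting boundary loss. It is a non-differentiable illustration of the computation (a training implementation would use tensor operations), and the gradient threshold and the exact distance weighting are assumptions:

```python
import numpy as np
from scipy.ndimage import sobel, distance_transform_edt

def extract_boundary(mask, thresh=0.5):
    """Boundary extraction with the Sobel operator: the gradient magnitude
    of the mask is thresholded to obtain a binary boundary map."""
    m = mask.astype(float)
    grad = np.hypot(sobel(m, axis=0), sobel(m, axis=1))
    return (grad > thresh).astype(np.uint8)

def label_distance_map(label_mask):
    """Distance map Dist(x, y): Euclidean distance from each pixel to the
    nearest boundary pixel of the label (distance_transform_edt measures
    the distance to the nearest zero, hence the inverted boundary map)."""
    boundary = extract_boundary(label_mask)          # B1: boundary of the label
    return distance_transform_edt(1 - boundary)

def boundary_loss(prob, pred_boundary, dist_map, num_classes):
    """Sketch of L_boundary with the distance map as a soft label:
    predicted boundary probability mass lying far from the true boundary
    is penalised, averaged over classes."""
    loss = 0.0
    for c in range(num_classes):
        loss += np.sum(dist_map * prob[c] * pred_boundary)
    return loss / num_classes
```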
s6, selecting an unlabeled image from the images obtained in the step S4, inputting the unlabeled image into a current segmentation model, and calculating to obtain an unsupervised loss; the method specifically comprises the following steps:
inputting the selected unlabeled image into the current segmentation model to obtain the first view-level MRI tissue segmentation probability map $p_1$ output by the first decoder and the second view-level MRI tissue segmentation probability map $p_2$ output by the second decoder;
calculating the corresponding confidence maps $c_1$ and $c_2$ from the obtained probability maps $p_1$ and $p_2$;
generating an image-level pseudo label from the obtained probability maps $p_1$ and $p_2$ and the confidence maps $c_1$ and $c_2$; the method specifically comprises the following steps:
calculating the image-level pseudo label $\hat{y}_u$ by fusing the two view-level probability maps $p_1$ and $p_2$ weighted by their confidence maps $c_1$ and $c_2$;
calculating the unsupervised loss from the obtained image-level pseudo label and the probability maps; the method specifically comprises the following steps:
the unsupervised loss $L_u$ is calculated using the following formula:

$$L_u=\frac{1}{|D_u|}\sum_{x_u\in D_u}\Big[L_{ce}\big(p_1,\hat{y}_u\big)+L_{ce}\big(p_2,\hat{y}_u\big)\Big]$$

wherein $D_u$ is the unlabeled image set; $|D_u|$ is the number of unlabeled images; $x_u$ is an unlabeled image; $p_1$ and $p_2$ are the view-level probability maps of $x_u$; and $\hat{y}_u$ is the image-level pseudo label;
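A minimal PyTorch sketch of the unsupervised branch follows; taking the per-pixel confidence as the maximum class probability and fusing the two views by confidence weighting are assumptions consistent with, but not dictated by, the description above:

```python
import torch
import torch.nn.functional as F

def unsupervised_loss(model, unlabeled_batch):
    """Sketch of L_u: the two decoders give view-level probability maps
    p1, p2; confidence maps c1, c2 weight their fusion into an image-level
    pseudo label, towards which both decoders are then trained."""
    loss = 0.0
    for x_u in unlabeled_batch:
        logits1 = model.h1(model.f(weak_aug(x_u)))
        logits2 = model.h2(model.f(strong_aug(x_u)))
        p1 = torch.softmax(logits1, dim=1)           # view-level probability maps
        p2 = torch.softmax(logits2, dim=1)
        c1, _ = p1.max(dim=1, keepdim=True)          # confidence map of view 1
        c2, _ = p2.max(dim=1, keepdim=True)          # confidence map of view 2
        # confidence-weighted fusion of the two views -> image-level pseudo label
        fused = (c1 * p1 + c2 * p2) / (c1 + c2 + 1e-8)
        pseudo = fused.argmax(dim=1).detach()
        loss = loss + F.cross_entropy(logits1, pseudo) + F.cross_entropy(logits2, pseudo)
    return loss / len(unlabeled_batch)
```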
s7, integrating the supervised partial loss and the boundary loss obtained in the step S5 and the unsupervised loss obtained in the step S6 to form a total loss function, and carrying out back propagation through a gradient descent algorithm to update the parameters of the current segmentation model;
in specific implementation, the method for constructing the total loss function specifically comprises the following steps:
the constructed total loss function $L_{loss}$ is $L_{loss}=L_{sup}+L_{boundary}+L_{u}$;
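For illustration, one training iteration covering steps S5 to S7 might look as follows; the optimizer, the learning rate and the hypothetical `boundary_loss_for_batch` wrapper (assumed to return a differentiable scalar) are placeholders, not details fixed by the method:

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)  # assumed optimizer and learning rate

def training_step(labeled_batch, unlabeled_batch):
    """One iteration of steps S5-S7: compute the supervised, boundary and
    unsupervised losses, sum them into L_loss = L_sup + L_boundary + L_u,
    and update the model parameters by back-propagation."""
    l_sup = supervised_loss(model, labeled_batch)
    l_bnd = boundary_loss_for_batch(model, labeled_batch)  # hypothetical wrapper
    l_u = unsupervised_loss(model, unlabeled_batch)
    total = l_sup + l_bnd + l_u
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```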
S8, repeating the steps S3 to S7 until the set conditions are met, and obtaining a final MRI image tissue segmentation model based on semi-supervision;
s9, performing actual tissue segmentation of the MRI image by adopting the semi-supervised MRI image tissue segmentation model obtained in the step S8.
The invention provides an image tissue segmentation method in which dual views are fed through a single shared encoder into dual decoders to obtain view-level predictions, and the confidence maps and probability maps are then fused to generate better image-level pseudo labels, thereby producing robust and accurate segmentation results;
the invention improves the convolutional attention module by using the channel attention submodule and the spatial attention submodule in parallel, so that the network learns channel features and spatial features at the same time, enhancing its generalization ability;
in addition, the invention provides a boundary optimization module, which extracts the boundary of the MRI image with a Sobel operator and then computes the distance map of the image as a boundary soft label, improving the quality of boundary segmentation.
The following sets of embodiments further illustrate the segmentation effect of the present invention:
comparing the segmentation method provided by the invention with the existing segmentation method, the female pelvic cavity MRI data set with 134 training sets and 37 testing sets, the prostate MRI data set with 497 training sets and 125 testing sets and the heart MRI data set with 690 training sets and 173 testing sets are adopted during comparison. The full supervision method uses all labels of a training set for training, and other methods use 10% of labeled data, and take the average cross-over ratio of a segmentation result and a group trunk, a Dice coefficient and a Hausdorff distance as evaluation criteria.
Specific results are shown in tables 1 to 3:
TABLE 1  Female pelvic MRI tissue segmentation results

Segmentation method            mIoU (%)   Dice coefficient (%)   Hausdorff distance (mm)
Fully supervised method        74.18      81.77                  4.28
2017 semi-supervised method    76.45      83.81                  3.72
2021 semi-supervised method    77.17      84.50                  3.81
Method of the invention        78.28      85.21                  3.53
TABLE 2  Prostate MRI tissue segmentation results

Segmentation method            mIoU (%)   Dice coefficient (%)   Hausdorff distance (mm)
Fully supervised method        84.58      89.13                  4.71
2017 semi-supervised method    85.14      89.61                  4.45
2021 semi-supervised method    87.07      91.50                  4.55
Method of the invention        87.78      92.17                  4.26
TABLE 3  Cardiac MRI tissue segmentation results

Segmentation method            mIoU (%)   Dice coefficient (%)   Hausdorff distance (mm)
Fully supervised method        63.14      69.07                  3.63
2017 semi-supervised method    72.59      78.53                  3.05
2021 semi-supervised method    73.63      79.70                  3.00
Method of the invention        74.46      79.84                  2.87
It can be seen from Tables 1 to 3 that the segmentation method provided by the invention outperforms the fully supervised method, the classical semi-supervised method and the latest semi-supervised method on all three metrics, and can therefore obtain more accurate segmentation results.
Fig. 4 is a schematic diagram showing the effect of the segmentation method of the present invention: Fig. 4 compares the results of the semi-supervised methods and of the segmentation method of the invention; from left to right are the original image, the ground truth, the 2017 semi-supervised method, the 2021 semi-supervised method, and the segmentation result of the method of the invention. As can be seen from Fig. 4, the segmentation method of the invention performs better on the integrity of the MRI tissue and the continuity of narrow structures.
Fig. 5 is a flow chart of the imaging method of the present invention: the imaging method comprising the semi-supervised MRI image tissue segmentation method disclosed by the invention comprises the following steps of:
A. acquiring actual MRI image data;
B. adopting the MRI image tissue segmentation method based on semi-supervision to segment the MRI image data obtained in the step A into tissues to obtain a tissue segmentation result;
C. and C, marking and secondarily imaging the tissue segmentation result obtained in the step B on the MRI image obtained in the step A, thereby completing corresponding MRI image imaging.
In particular, the imaging method of the invention can be used with existing MRI image acquisition devices, such as MRI machines. In use, the imaging method is integrated into the existing MRI image acquisition device; the original MRI image is acquired with the prior art, the imaging method then performs secondary imaging on the original MRI image, and the MRI image carrying the tissue segmentation result is obtained and output directly. In this way, medical practitioners (including clinicians, radiologists and laboratory staff) can directly acquire MRI images with tissue segmentation results, which greatly facilitates existing users.
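As an illustration of the secondary-imaging step, the following sketch overlays a tissue segmentation mask on the original MRI slice so that the output image directly carries the segmentation result; the colour palette, the number of classes and the blending factor are arbitrary assumptions:

```python
import numpy as np

def render_overlay(mri_slice, seg_mask, alpha=0.4):
    """Blend a colour-coded segmentation mask (integer class indices)
    into a greyscale MRI slice (uint8, shape (H, W))."""
    # one colour per class (4 classes assumed); background (class 0) stays unmodified
    palette = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255]])
    rgb = np.stack([mri_slice] * 3, axis=-1).astype(float)  # grey -> RGB
    colour = palette[seg_mask]                              # per-pixel class colour
    blended = np.where(seg_mask[..., None] > 0,
                       (1 - alpha) * rgb + alpha * colour, rgb)
    return blended.astype(np.uint8)
```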

Claims (10)

1. A semi-supervised MRI image tissue segmentation method comprises the following steps:
s1, acquiring existing MRI image data and marking the same, so as to obtain a tagged data image and a non-tagged data image;
s2, constructing a semi-supervised MRI image tissue segmentation preliminary model;
s3, randomly selecting a plurality of images from the data acquired in the step S1;
s4, performing image augmentation operation on the image obtained in the step S3, so as to obtain a strong augmentation image and a weak augmentation image;
s5, selecting a labeled image from the images obtained in the step S4, inputting the labeled image into a current segmentation model, and calculating to obtain the loss of a supervision part and the boundary loss of the labeled image;
s6, selecting an unlabeled image from the images obtained in the step S4, inputting the unlabeled image into a current segmentation model, and calculating to obtain an unsupervised loss;
s7, integrating the supervised partial loss and the boundary loss obtained in the step S5 and the unsupervised loss obtained in the step S6 to form a total loss function, and carrying out back propagation through a gradient descent algorithm to update the parameters of the current segmentation model;
s8, repeating the steps S3 to S7 until the set conditions are met, and obtaining a final MRI image tissue segmentation model based on semi-supervision;
s9, performing actual tissue segmentation of the MRI image by adopting the semi-supervised MRI image tissue segmentation model obtained in the step S8.
2. The semi-supervised MRI image tissue segmentation method according to claim 1, wherein the acquiring and marking of the existing MRI image data in step S1 to obtain a tagged data image and a non-tagged data image comprises the steps of:
acquiring existing MRI image data;
marking part of the data, so as to obtain the labeled data image set $D_l=\{(x_i^l,y_i)\}_{i=1}^{N}$ and the unlabeled data image set $D_u=\{x_i^u\}_{i=1}^{M}$, wherein $x_i^l$ is the i-th labeled data image, $y_i$ is the label of the i-th labeled data image, $N$ is the total number of labeled data images, $x_i^u$ is the i-th unlabeled data image, and $M$ is the total number of unlabeled data images.
3. The semi-supervised MRI image tissue segmentation method as set forth in claim 2, wherein the constructing the semi-supervised MRI image tissue segmentation preliminary model in step S2 specifically includes the steps of:
the model comprises a shared encoder, two decoders and two convolution attention modules;
the shared encoder comprises five convolution modules, each comprising a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU, and each followed by a 2 x 2 pooling layer;
the first decoder comprises five upsampling modules, each upsampling module comprising an upsampling layer based on bilinear interpolation, a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU;
the second decoder comprises five upsampling modules, each upsampling module comprising an upsampling layer based on bilinear interpolation, a 3 x 3 convolution block, a batch normalization layer and an activation function ReLU;
each layer of output of the shared encoder and the same-size characteristic output of the corresponding layer of the first decoder are cascaded to obtain a first cascade characteristic, and the first cascade characteristic is input into a first convolution attention module;
the same-size characteristics of each layer output of the shared encoder and the corresponding layer output of the second decoder are cascaded to obtain second cascading characteristics, and the second cascading characteristics are input into a second convolution attention module;
the convolution attention module comprises two parallel submodules: a channel attention submodule and a spatial attention submodule; the channel attention submodule comprises a maximum pooling layer, an average pooling layer and a multi-layer perceptron; the spatial attention submodule comprises a maximum pooling layer, an average pooling layer and a convolution module layer, wherein the convolution module layer comprises a 3 × 3 convolution block, a 2 × 2 pooling layer, a batch normalization layer and an activation function ReLU;
the output feature of the channel attention submodule and the output feature of the spatial attention submodule are each multiplied by the cascade feature; the two products are added, and the result is input into a convolution module to obtain the attention-activated feature; finally, the attention-activated feature is input into the corresponding network layer of the first decoder or the second decoder for subsequent decoding; the convolution module comprises a 3 × 3 convolution block, a 2 × 2 pooling layer, a batch normalization layer and an activation function ReLU.
4. The semi-supervised MRI image tissue segmentation method according to claim 3, wherein the image enhancement operation is performed on the image obtained in step S3 in step S4 to obtain a strong enhanced image and a weak enhanced image, and specifically comprising the steps of:
dividing the image acquired in the step S3 into two parts:
performing the weak augmentation operation on the first part of the images, so as to obtain weakly augmented images; the weak augmentation operations comprise random resizing, cropping, flipping and rotation;
performing the strong augmentation operation on the second part of the images, so as to obtain strongly augmented images; the strong augmentation operations comprise adjusting image brightness, adjusting image contrast, adjusting image blur and adding Gaussian noise;
both the first part and the second part of the images comprise labeled and unlabeled data images.
5. The semi-supervised MRI image tissue segmentation method according to claim 4, wherein in the step S5, the labeled image is selected and input into the current segmentation model, and the supervised portion loss and the boundary loss of the labeled image are calculated, and the method specifically comprises the following steps:
inputting the labeled image into a current segmentation model to obtain a tissue segmentation result of the MRI image, and calculating the loss of a supervision part;
and extracting the boundary of the MRI image by adopting a Sobel operator according to the tissue segmentation result of the obtained MRI image, then calculating a distance map of the image, and calculating to obtain the boundary loss.
6. The semi-supervised MRI image tissue segmentation method according to claim 5, wherein said calculating the supervised partial loss comprises the steps of:
the supervised loss $L_{sup}$ is calculated using the following formula:

$$L_{sup}=\frac{1}{|D_l|}\sum_{x_l\in D_l}\Big[L_{ce}\big(h_1(f(A_1(x_l))),y\big)+L_{ce}\big(h_2(f(A_2(x_l))),y\big)\Big]$$

wherein $D_l$ is the labeled image set; $|D_l|$ is the number of labeled images; $x_l$ is a labeled image; $y$ is the label corresponding to $x_l$; $L_{ce}(\cdot)$ is the cross-entropy loss function; $A_1(\cdot)$ is the weak augmentation function; $A_2(\cdot)$ is the strong augmentation function; $f(\cdot)$ is the encoder; $f(A_1(x_l))$ is the feature map obtained after the weakly augmented image passes through the encoder; $f(A_2(x_l))$ is the feature map obtained after the strongly augmented image passes through the encoder; $h_1(\cdot)$ is the first decoder; $h_2(\cdot)$ is the second decoder;
the method for calculating the distance map of the image specifically comprises the following steps:
the distance map $Dist(x,y)$ is calculated using the following formula:

$$Dist(x,y)=\min_{(x_0,y_0)\in B_1}Euclid\big((x,y),(x_0,y_0)\big)$$

wherein $B_1$ is the boundary map of the label; $(x_0,y_0)$ is the location of a boundary pixel in the label; and $Euclid(a,b)$ is the Euclidean distance between position $a$ and position $b$;
the calculation to obtain the boundary loss specifically comprises the following steps:
the boundary loss $L_{boundary}$ is calculated using the following formula:

$$L_{boundary}=\frac{1}{C}\sum_{c=1}^{C}\sum_{(x,y)\in B_2}Dist(x,y)\,p(x,y)$$

wherein $C$ is the number of categories; $p(x,y)$ is the probability map; and $B_2$ is the boundary map extracted from the MRI image.
7. The semi-supervised MRI image tissue segmentation method according to claim 6, wherein in the image obtained in step S4 in step S6, an unlabeled image is selected and input into a current segmentation model, and an unsupervised loss is calculated, and the method specifically comprises the following steps:
inputting the selected unlabeled image into the current segmentation model to obtain the first view-level MRI tissue segmentation probability map $p_1$ output by the first decoder and the second view-level MRI tissue segmentation probability map $p_2$ output by the second decoder;
calculating the corresponding confidence maps $c_1$ and $c_2$ from the obtained probability maps $p_1$ and $p_2$;
generating an image-level pseudo label from the obtained probability maps $p_1$ and $p_2$ and the confidence maps $c_1$ and $c_2$;
and calculating the unsupervised loss from the obtained image-level pseudo label and the probability maps.
8. The semi-supervised MRI image tissue segmentation method as set forth in claim 7, wherein the generating of the image-level pseudo labels comprises the steps of:
calculating the image-level pseudo label $\hat{y}_u$ by fusing the two view-level probability maps $p_1$ and $p_2$ weighted by their confidence maps $c_1$ and $c_2$;
the calculation of the unsupervised loss specifically comprises the following steps:
the unsupervised loss $L_u$ is calculated using the following formula:

$$L_u=\frac{1}{|D_u|}\sum_{x_u\in D_u}\Big[L_{ce}\big(p_1,\hat{y}_u\big)+L_{ce}\big(p_2,\hat{y}_u\big)\Big]$$

wherein $D_u$ is the unlabeled image set; $|D_u|$ is the number of unlabeled images; $x_u$ is an unlabeled image; $p_1$ and $p_2$ are the view-level probability maps of $x_u$; and $\hat{y}_u$ is the image-level pseudo label.
9. The semi-supervised MRI image tissue segmentation method according to claim 8, wherein the step S7 is characterized by integrating the supervised partial loss and the boundary loss obtained in the step S5 and the unsupervised loss obtained in the step S6 to form a total loss function, and specifically comprises the following steps:
the constructed total loss function $L_{loss}$ is $L_{loss}=L_{sup}+L_{boundary}+L_{u}$.
10. An imaging method comprising the semi-supervised MRI image tissue segmentation method of any one of claims 1-9, comprising in particular the steps of:
A. acquiring actual MRI image data;
B. performing tissue segmentation on the MRI image data acquired in the step A by adopting the semi-supervised MRI image tissue segmentation method according to one of claims 1-9 to obtain a tissue segmentation result;
C. and C, marking and secondarily imaging the tissue segmentation result obtained in the step B on the MRI image obtained in the step A, thereby completing corresponding MRI image imaging.
CN202310224096.9A 2023-03-10 2023-03-10 MRI image tissue segmentation method and imaging method based on semi-supervision Pending CN116524178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310224096.9A CN116524178A (en) 2023-03-10 2023-03-10 MRI image tissue segmentation method and imaging method based on semi-supervision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310224096.9A CN116524178A (en) 2023-03-10 2023-03-10 MRI image tissue segmentation method and imaging method based on semi-supervision

Publications (1)

Publication Number Publication Date
CN116524178A true CN116524178A (en) 2023-08-01

Family

ID=87398274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310224096.9A Pending CN116524178A (en) 2023-03-10 2023-03-10 MRI image tissue segmentation method and imaging method based on semi-supervision

Country Status (1)

Country Link
CN (1) CN116524178A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197166A (en) * 2023-11-06 2023-12-08 中南大学 Polyp image segmentation method and imaging method based on edge and neighborhood information
CN117197166B (en) * 2023-11-06 2024-02-06 中南大学 Polyp image segmentation method and imaging method based on edge and neighborhood information


Legal Events

Code    Event
PB01    Publication
SE01    Entry into force of request for substantive examination