WO2022041307A1 - Method and system for constructing semi-supervised image segmentation framework - Google Patents

Method and system for constructing semi-supervised image segmentation framework Download PDF

Info

Publication number
WO2022041307A1
Authority
WO
WIPO (PCT)
Prior art keywords
segmentation
image
loss
student
supervised
Prior art date
Application number
PCT/CN2020/113496
Other languages
French (fr)
Chinese (zh)
Inventor
潘志方
陈高翔
茹劲涛
Original Assignee
温州医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 温州医科大学 filed Critical 温州医科大学
Publication of WO2022041307A1 publication Critical patent/WO2022041307A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/143: Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]

Definitions

  • the invention relates to the technical field of image processing, in particular to a method and system for constructing a semi-supervised image segmentation framework.
  • Medical image segmentation plays a vital role in clinical applications and scientific research. Accurate medical image segmentation can provide important quantitative measures for lesion grading, classification, and disease diagnosis, and further help clinicians evaluate treatment response to related diseases and provide a reliable basis for surgical planning and rehabilitation strategies.
  • weakly supervised learning does not require voxel-level labeled data, but instead uses image-level labeled data as a weak supervision signal in network training. Nonetheless, image-level labels or bounding boxes for medical images also require domain knowledge and are expensive to acquire, so the application of weakly supervised learning models in medical imaging is still limited, as simple labels such as image-level labels and bounding boxes are still required.
  • This semi-supervised learning method utilizes both labeled and unlabeled data, striking a balance between tedious full supervision and unsupervised learning, so that a model can be trained to accurately segment medical images using only a small number of labeled samples; this may be a more meaningful choice for designing image segmentation frameworks in medicine.
  • the existing semi-supervised segmentation methods do not only utilize unlabeled data; they also require image-level labels (such as bounding-box labels) to assist the training and learning of the semi-supervised network, and are therefore not semi-supervised in the true sense. Their effectiveness on 3D medical images has also not been fully verified, and the mean teacher model adopted in existing semi-supervised methods has almost only been used for image classification rather than image segmentation.
  • the technical problem to be solved by the embodiments of the present invention is to provide a method and system for constructing a semi-supervised image segmentation framework, which improves the mean teacher model to establish a general semi-supervised segmentation framework that can be used for 3D medical images without requiring additional image-level labels.
  • an embodiment of the present invention provides a method for constructing a semi-supervised image segmentation framework, including the following steps:
  • Step S1 constructing a semi-supervised image segmentation framework including a student model, a teacher model and a discriminator;
  • Step S2 obtaining a labeled MRI image and its corresponding gold standard, and importing the labeled MRI image as a first training set image into the student model for training to obtain a segmentation probability map, which is further combined with the gold standard to calculate a supervised segmentation loss;
  • Step S3 obtain the original unlabeled MRI image and the noisy unlabeled MRI image formed by combining it with noise of a preset Gaussian distribution to obtain a second training set image, and import the second training set image into the student model and the teacher model for training respectively to obtain the corresponding student segmentation probability result map and teacher segmentation probability result map; after the student segmentation probability result map and the teacher segmentation probability result map are each overlaid on the original unlabeled MRI image, the corresponding student segmentation area and teacher segmentation area are generated and passed together to the discriminator for similarity comparison to calculate the consistency loss; wherein the teacher model updates its model parameters during training using an exponential moving average of the student model's weights;
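The exponential-moving-average update of the teacher from the student weights can be sketched as follows. This is a minimal illustration, not the patent's implementation; the decay value is an assumed placeholder.

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Update each teacher parameter as an exponential moving average
    of the corresponding student parameter: t <- alpha*t + (1-alpha)*s."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# Toy usage: after one update the teacher moves slightly toward the student.
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, alpha=0.9)
```

With alpha close to 1 the teacher changes slowly, which is what makes its predictions a stable target for the consistency loss.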
  • Step S4 Obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • step S3 specifically includes:
  • Training is performed in the teacher model, and the teacher model uses the exponential moving average strategy to update the model parameters based on the weight of the student model in the training process to obtain a teacher segmentation probability result map;
  • the student segmentation area and the teacher segmentation area are passed to the discriminator for similarity comparison, the student multi-scale features and the teacher multi-scale features are extracted respectively, and the consistency loss is calculated according to the student multi-scale features and the teacher multi-scale features.
  • X_u is the original unlabeled MRI image
  • S(X_u) is the student segmentation probability result map
  • R(X_u) is the teacher segmentation probability result map
  • f(·) is the hierarchical feature map extracted from the corresponding segmentation area
  • h, w, d are the height, width, and depth of each image
  • ℒ_mae is the mean absolute error loss; K is the number of network layers in the discriminator
  • f(x_i) is the feature vector output by the i-th layer.
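As a hedged illustration of this multi-scale consistency term, the mean absolute error between the student's and teacher's layer-wise discriminator features f(x_i), averaged over the K layers, might look like the sketch below (the exact formula in the patent is not reproduced in this text):

```python
import numpy as np

def consistency_loss(student_feats, teacher_feats):
    """Mean absolute error between corresponding hierarchical feature
    vectors, averaged over the K discriminator layers.

    student_feats, teacher_feats: lists of K numpy arrays (one per layer).
    """
    K = len(student_feats)
    return sum(float(np.mean(np.abs(fs - ft)))
               for fs, ft in zip(student_feats, teacher_feats)) / K
```

Identical student and teacher features give a loss of zero, so minimizing this term pushes the two segmentation regions to agree at every feature scale.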
  • Y_l is the gold standard for the labeled image
  • h, w, d are the height, width, and depth of each image
  • C is the number of label categories
  • c is a certain class among the C label categories
  • X_l is the labeled MRI image
  • S(X_l) is the segmentation probability map.
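The patent's supervised loss formula is not reproduced in this text; a common cross-entropy instantiation over the h×w×d voxels and C classes, offered only as an assumed sketch of how S(X_l) and Y_l would be combined, is:

```python
import numpy as np

def supervised_ce_loss(prob_map, gold):
    """Voxel-wise cross-entropy between a segmentation probability map
    S(X_l) of shape (h, w, d, C) and a gold-standard label volume Y_l
    of shape (h, w, d) holding integer class indices."""
    num_classes = prob_map.shape[-1]
    one_hot = np.eye(num_classes)[gold]  # (h, w, d, C) one-hot encoding
    return float(-np.mean(np.sum(one_hot * np.log(prob_map + 1e-8), axis=-1)))
```

A perfect prediction yields a loss near zero, while assigning all probability to the wrong class yields a large loss.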
  • the method further includes:
  • the self-training loss of the discriminator is calculated and the adversarial loss of the discriminator is obtained; the discriminator's self-training loss and its adversarial loss are then combined with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and the semi-supervised image segmentation framework is optimized according to the updated total segmentation loss.
  • the embodiment of the present invention also provides a system for constructing a semi-supervised image segmentation framework, including an image segmentation framework construction unit, a supervised segmentation loss calculation unit, a consistency loss calculation unit, and an image segmentation framework optimization unit; wherein,
  • the image segmentation framework building unit is used to construct a semi-supervised image segmentation framework including a student model, a teacher model and a discriminator;
  • a supervised segmentation loss calculation unit is used to obtain the labeled MRI image and its corresponding gold standard, and import the labeled MRI image as the first training set image into the student model for training to obtain a segmentation probability map, which is further combined with the gold standard to calculate the supervised segmentation loss;
  • the consistency loss calculation unit is used to obtain the original unlabeled MRI image and the noisy unlabeled MRI image formed by combining it with noise of a preset Gaussian distribution, obtaining a second training set image; the second training set image is imported into the student model and the teacher model for training respectively, obtaining the corresponding student segmentation probability result map and teacher segmentation probability result map; after the two maps are each overlaid on the original unlabeled MRI image, the corresponding student segmentation area and teacher segmentation area are generated and passed to the discriminator for similarity comparison, so as to calculate the consistency loss; wherein the teacher model updates its model parameters during training using an exponential moving average of the student model's weights;
  • An image segmentation framework optimization unit configured to obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • the image segmentation framework re-optimization unit is used to calculate the self-training loss of the discriminator according to the student segmentation probability result map and the correspondingly set gold standard, and to obtain the adversarial loss of the discriminator; the discriminator's self-training loss and its adversarial loss are further combined with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and the semi-supervised image segmentation framework is optimized according to the updated total segmentation loss.
  • the present invention uses the consistency mechanism based on multi-scale features to improve the mean teacher model, and incorporates voxel-level regularization information into the semi-supervised model, thereby further improving the mean teacher model and making it more suitable for image segmentation;
  • the present invention is deeply integrated with adversarial networks (such as discriminators for adversarial learning), which makes it possible to achieve semi-supervised segmentation without additional image-level labels; the role of the adversarial network is not only to extract multi-scale features containing spatial context information, but also to measure the confidence of the segmentation probability map for implementing the self-training scheme;
  • the present invention establishes a general semi-supervised segmentation framework that can be used for various MRI images (medical images).
  • FIG. 1 is a flowchart of a method for constructing a semi-supervised image segmentation framework proposed by an embodiment of the present invention
  • FIG. 2 is an application scene diagram before preprocessing of MRI images of four modalities in a method for constructing a semi-supervised image segmentation framework proposed by an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a system for constructing a semi-supervised image segmentation framework proposed by an embodiment of the present invention.
  • a method for constructing a semi-supervised image segmentation framework proposed in an embodiment of the present invention includes the following steps:
  • Step S1 constructing a semi-supervised image segmentation framework including a student model, a teacher model and a discriminator;
  • the constructed semi-supervised image segmentation framework is mainly composed of two modules: the mean teacher model and the adversarial network.
  • the framework deeply integrates the adversarial network into the improved mean teacher model; it mainly includes a mean teacher model formed by a student model S and a teacher model R, and an adversarial network formed by a discriminator. All of these models (including the discriminator) are CNN-based; in particular, the student and teacher models are based on the same segmentation network (such as U-Net).
  • Step S2 obtaining a labeled MRI image and its corresponding gold standard, and importing the labeled MRI image as a first training set image into the student model for training to obtain a segmentation probability map, which is further combined with the gold standard to calculate a supervised segmentation loss;
  • h, w, d are the height, width, and depth of each image; C is the number of label categories; c is a certain class among the C label categories.
  • Step S3 obtain the original unlabeled MRI image and the noisy unlabeled MRI image formed by combining it with noise of a preset Gaussian distribution to obtain a second training set image, and import the second training set image into the student model and the teacher model for training respectively to obtain the corresponding student segmentation probability result map and teacher segmentation probability result map; after the student segmentation probability result map and the teacher segmentation probability result map are each overlaid on the original unlabeled MRI image, the corresponding student segmentation area and teacher segmentation area are generated and passed together to the discriminator for similarity comparison to calculate the consistency loss; wherein the teacher model updates its model parameters during training using an exponential moving average of the student model's weights;
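Forming the noisy copy of the unlabeled MRI for the teacher's input can be sketched as below; the noise scale sigma is an assumed placeholder, since the patent only specifies a preset Gaussian distribution.

```python
import numpy as np

def add_gaussian_noise(volume, sigma=0.1, seed=0):
    """Return the input MRI volume plus zero-mean Gaussian noise of
    standard deviation sigma (same shape as the input)."""
    rng = np.random.default_rng(seed)
    return volume + rng.normal(0.0, sigma, size=volume.shape)
```

The student sees the clean volume X_u and the teacher sees the noisy copy, so the consistency loss rewards predictions that are stable under this perturbation.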
  • the specific process is as follows: the traditional mean teacher model has two losses, a segmentation loss and a consistency loss, where the consistency loss is usually calculated directly from the segmentation maps of the student model S and the teacher model R. To overcome the inaccuracy caused by computing the consistency loss directly in this way, a consistency mechanism based on multi-scale features is used to improve the traditional mean teacher model and make it more suitable for image segmentation.
  • the process is as follows:
  • EMA: exponential moving average
  • f(·) is the hierarchical feature map extracted from the corresponding segmented area
  • ℒ_mae is the mean absolute error loss; K is the number of network layers in the discriminator A
  • f(x_i) is the feature vector output by the i-th layer.
  • the discriminator A for adversarial learning is used as another important component in the framework, and a consistency loss calculated based on multi-scale features is designed.
  • the student model S and the teacher model R take the original unlabeled MRI image X_u and its corresponding noisy unlabeled MRI image as inputs and output the student segmentation probability result map S(X_u) and the teacher segmentation probability result map R(X_u), respectively.
  • S(X_u) and R(X_u) are then overlaid on the original unlabeled MRI image X_u to obtain two sets of segmented regions in the MRI, generated by the voxel-wise multiplication of the input MRI and the corresponding segmentation probability map, namely the student segmentation region and the teacher segmentation region. In consistency training, these two segmentation regions are encouraged to be similar, instead of only considering the consistency of the segmentation probability maps as in the traditional mean teacher model.
  • since CNNs can effectively learn image features at multiple scales, in order to better measure the consistency of the segmented regions, the hierarchical features of the segmented regions are extracted by the CNN-based discriminator A and concatenated together, and the multi-scale features corresponding to the student segmentation region and the teacher segmentation region are compared; their distance is taken as the difference between the student and teacher segmentation regions.
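The voxel-wise multiplication that produces a segmentation region from an input MRI and its probability map is straightforward; a minimal sketch:

```python
import numpy as np

def segmentation_region(mri, prob_map):
    """Voxel-wise product of the input MRI volume and a (single-class)
    segmentation probability map of the same shape, yielding the
    soft-masked segmentation region passed to the discriminator."""
    return mri * prob_map
```

Because the probability map acts as a soft mask, background voxels are suppressed and the discriminator compares only the anatomy each model believes belongs to the target structure.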
  • Step S4 Obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • the specific process is to calculate the total segmentation loss according to formula (3):
  • λ_con is a weighting coefficient used to balance the relative importance of the designed loss functions.
  • in addition to generating the above-mentioned multi-scale features for calculating the consistency loss, the discriminator A also outputs a confidence map for self-training.
  • This confidence map can be used to guide and constrain the target region so that the learned distribution is closer to the true distribution.
  • reliable confidence regions can be obtained to select high-confidence segmentation results and convert them into pseudo-labels for self-training. Therefore, a subset of valid segmentation results from the unlabeled MRI images X_u can be directly treated as labels and added to the training set to further enrich the dataset.
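One plausible way to realize this pseudo-labeling step, given only as an assumed sketch (the threshold tau and the ignore marker are placeholders, not values from the patent):

```python
import numpy as np

def pseudo_labels(prob_map, confidence, tau=0.8, ignore_index=-1):
    """Turn high-confidence voxels of a segmentation probability map of
    shape (h, w, d, C) into pseudo-labels, guided by the discriminator's
    confidence map of shape (h, w, d); low-confidence voxels get
    ignore_index so they contribute nothing to the self-training loss."""
    labels = np.argmax(prob_map, axis=-1)
    return np.where(confidence > tau, labels, ignore_index)
```

Only voxels the discriminator rates above the threshold are kept, which is what lets valid segmentation results from unlabeled images enlarge the training set without importing unreliable labels.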
  • the discriminator A is also used to define the adversarial loss, which can further enhance the ability of the student model to fool the discriminator, as shown in Equation (5):
  • the adversarial loss can be applied to all training samples, since it depends only on the adversarial network, regardless of whether labels are available.
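Equation (5) itself is not reproduced in this text; a typical GAN-style adversarial term, offered purely as an assumed sketch, penalizes the student whenever the discriminator's confidence map rates its segmentation as unrealistic:

```python
import numpy as np

def adversarial_loss(confidence_map, eps=1e-8):
    """Assumed GAN-style adversarial term: low when the discriminator
    assigns high confidence (close to 1) to the student's segmentation,
    applicable to labeled and unlabeled samples alike."""
    return float(-np.mean(np.log(confidence_map + eps)))
```

The loss is near zero when the discriminator is fully convinced, and grows as confidence drops, pushing the student toward segmentations the discriminator cannot distinguish from realistic ones.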
  • the method further includes:
  • the self-training loss of the discriminator is calculated, and the adversarial loss of the discriminator is obtained; the discriminator's self-training loss and its adversarial loss are further combined with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and the semi-supervised image segmentation framework is optimized according to the updated total segmentation loss.
  • λ_con, λ_self, and λ_adv are the corresponding weighting coefficients used to balance the relative importance of the designed loss functions.
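The updated total objective described above can be sketched as a weighted sum; the numeric weight values here are placeholders, not values taken from the patent.

```python
def total_loss(l_seg, l_con, l_self, l_adv,
               lam_con=1.0, lam_self=0.5, lam_adv=0.1):
    """Combine the supervised segmentation, consistency, self-training,
    and adversarial losses using the weighting coefficients
    lam_con, lam_self, and lam_adv."""
    return l_seg + lam_con * l_con + lam_self * l_self + lam_adv * l_adv
```

Setting lam_self and lam_adv to zero recovers the plain mean-teacher objective of supervised loss plus weighted consistency loss.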
  • FIG. 2 is an application scene diagram of brain MRI segmentation jointly trained by the mean teacher model and the adversarial network in a method for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention.
  • a system for constructing a semi-supervised image segmentation framework, including an image segmentation framework construction unit 110, a supervised segmentation loss calculation unit 120, a consistency loss calculation unit 130, and an image segmentation framework optimization unit 140; wherein,
  • an image segmentation frame construction unit 110 configured to construct a semi-supervised image segmentation frame comprising a student model, a teacher model and a discriminator;
  • the supervised segmentation loss calculation unit 120 is used to obtain the labeled MRI image and its corresponding gold standard, and import the labeled MRI image as the first training set image into the student model for training to obtain the segmentation probability map, which is further combined with the gold standard to calculate a supervised segmentation loss;
  • the consistency loss calculation unit 130 is configured to obtain the original unlabeled MRI image and the noisy unlabeled MRI image formed by combining it with noise of a preset Gaussian distribution, obtaining a second training set image; the second training set image is imported into the student model and the teacher model for training respectively, obtaining the corresponding student segmentation probability result map and teacher segmentation probability result map; after the two maps are each overlaid on the original unlabeled MRI image, the corresponding student segmentation area and teacher segmentation area are generated and passed to the discriminator for similarity comparison to calculate the consistency loss; wherein, during training, the teacher model updates its model parameters using an exponential moving average of the student model's weights;
  • the image segmentation framework optimization unit 140 is configured to obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • the image segmentation framework re-optimization unit 150 is used to calculate the self-training loss of the discriminator according to the student segmentation probability result map and the correspondingly set gold standard, and to obtain the adversarial loss of the discriminator; the discriminator's self-training loss and its adversarial loss are further combined with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and the semi-supervised image segmentation framework is optimized according to the updated total segmentation loss.
  • the present invention uses the consistency mechanism based on multi-scale features to improve the mean teacher model, and incorporates voxel-level regularization information into the semi-supervised model, thereby further improving the mean teacher model and making it more suitable for image segmentation;
  • the present invention is deeply integrated with adversarial networks (such as discriminators for adversarial learning), which makes it possible to achieve semi-supervised segmentation without additional image-level labels; the role of the adversarial network is not only to extract multi-scale features containing spatial context information, but also to measure the confidence of the segmentation probability map for implementing the self-training scheme;
  • the present invention establishes a general semi-supervised segmentation framework that can be used for various MRI images (medical images).
  • each system unit included is only divided according to functional logic, but is not limited to the above-mentioned division, as long as the corresponding functions can be realized; in addition, the specific names are only for the convenience of distinguishing the units from each other, and are not used to limit the protection scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A method for constructing a semi-supervised image segmentation framework, the method comprising: constructing a semi-supervised image segmentation framework which comprises a student model, a teacher model and a discriminator (S1); acquiring a labeled MRI image and its corresponding gold standard, so as to calculate a supervised segmentation loss (S2); acquiring an original unlabeled MRI image and a noisy unlabeled MRI image obtained by combining the original unlabeled MRI image with preset Gaussian-distribution noise, so as to obtain a corresponding student segmentation probability result map and a corresponding teacher segmentation probability result map, then respectively overlaying them on the original unlabeled MRI image, and generating a student segmentation area and a teacher segmentation area that are transmitted together to the discriminator for similarity comparison, so as to calculate a consistency loss (S3); and obtaining a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimizing the semi-supervised image segmentation framework according to the total segmentation loss (S4). By implementing the method, a universal semi-supervised segmentation framework that can be used for 3D medical images is established by improving a mean teacher model, and no additional image-level label is needed.

Description

A method and system for constructing a semi-supervised image segmentation framework

Technical Field

The invention relates to the technical field of image processing, and in particular to a method and system for constructing a semi-supervised image segmentation framework.

Background Art
Medical image segmentation plays a vital role in clinical applications and scientific research. Accurate medical image segmentation can provide important quantitative measures for lesion grading, classification, and disease diagnosis, and can further help clinicians evaluate treatment response to related diseases and provide a reliable basis for surgical planning and rehabilitation strategies.

In recent years, many computer-aided deep learning methods have emerged, such as convolutional neural networks that can automatically extract and learn image features, and their application in image segmentation has achieved great improvements in accuracy. However, these methods rely on large amounts of data with high-quality labels. Especially in medical imaging, the process of labeling large-scale data is expensive and time-consuming because it requires expert domain knowledge, making large amounts of manual labels difficult to obtain. Furthermore, such segmentation may be affected by variation among labelers (e.g., clinicians) and is therefore not reproducible.

To avoid the need for labeled data, unsupervised learning of medical images has been proposed. However, due to very low segmentation accuracy, such fully unsupervised methods do not work well for complex anatomical structures or lesions with large variations in shape and size; it is therefore still necessary to construct appropriately sized and accurately labeled datasets to train deep learning models, which is often difficult to achieve in practical medical imaging applications.

As another solution, weakly supervised learning does not require voxel-level labeled data, but instead uses image-level labeled data as a weak supervision signal in network training. Nonetheless, image-level labels or bounding boxes for medical images also require domain knowledge and are expensive to acquire, so the application of weakly supervised learning models in medical imaging is still limited, as simple labels such as image-level labels and bounding boxes are still required.

Therefore, it is necessary to design effective semi-supervised learning methods that do not require additional auxiliary labels. Such a semi-supervised learning method utilizes both labeled and unlabeled data, striking a balance between tedious full supervision and unsupervised learning, so that a model can be trained to accurately segment medical images using only a small number of labeled samples; this may be a more meaningful choice for designing image segmentation frameworks in medicine.

However, existing semi-supervised segmentation methods do not only utilize unlabeled data; they also require image-level labels (such as bounding-box labels) to assist the training and learning of the semi-supervised network, and are therefore not semi-supervised in the true sense. Their effectiveness on 3D medical images has also not been fully verified. Meanwhile, the mean teacher model adopted in existing semi-supervised segmentation methods has almost only been used for image classification and has not been widely used for image segmentation.
发明内容SUMMARY OF THE INVENTION
本发明实施例所要解决的技术问题在于,提供一种构建半监督图像分割框架的方法及系统,通过改进均值教师模型来建立能够用于3D医学图像的通用半监督分割框架,并且无需额外的图像级的标记。The technical problem to be solved by the embodiments of the present invention is to provide a method and system for constructing a semi-supervised image segmentation framework, by improving the mean teacher model to establish a general semi-supervised segmentation framework that can be used for 3D medical images, and no additional images are required. level mark.
为了解决上述技术问题,本发明实施例提供了一种构建半监督图像分割框架的方法,包括以下步骤:In order to solve the above technical problems, an embodiment of the present invention provides a method for constructing a semi-supervised image segmentation framework, including the following steps:
步骤S1、构建包括学生模型、教师模型和判别器的半监督图像分割框架;Step S1, constructing a semi-supervised image segmentation framework including a student model, a teacher model and a discriminator;
步骤S2、获取有标记的MRI图像和其对应的金标准,并将所述有标记的MRI图像作为第一训练集图像导入所述学生模型中进行训练,得到分割概率图,且进一步结合所述金标准,以计算出监督型分割损失;Step S2, obtaining a marked MRI image and its corresponding gold standard, and importing the marked MRI image as a first training set image into the student model for training, obtaining a segmentation probability map, and further combining the Gold standard to calculate supervised segmentation loss;
Step S3: obtain original unlabeled MRI images and their noisy counterparts produced by adding noise drawn from a preset Gaussian distribution, forming the second training set; feed the second training set into the student model and the teacher model for training, obtaining a student segmentation probability map and a teacher segmentation probability map; overlay each probability map on the original unlabeled MRI image to generate the corresponding student segmentation region and teacher segmentation region, and pass both to the discriminator for similarity comparison to compute a consistency loss. During training, the teacher model updates its parameters from the student model's weights using an exponential moving average (EMA) strategy;
Step S4: obtain the total segmentation loss from the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
Specifically, step S3 comprises:
obtaining the original unlabeled MRI image and the noisy unlabeled MRI image produced by combining the original unlabeled MRI image with noise drawn from a preset Gaussian distribution, to form the second training set;
feeding the original unlabeled MRI image of the second training set into the student model for training to obtain the student segmentation probability map, and feeding the noisy unlabeled MRI image of the second training set into the teacher model for training, the teacher model updating its parameters during training from the student model's weights using an exponential moving average strategy, to obtain the teacher segmentation probability map;
multiplying the student segmentation probability map and the teacher segmentation probability map with the original unlabeled MRI image pixel by pixel, to obtain the corresponding student segmentation region and teacher segmentation region;
passing the student segmentation region and the teacher segmentation region together to the discriminator for similarity comparison, extracting student multi-scale features and teacher multi-scale features respectively, and computing the consistency loss from the student and teacher multi-scale features.
The model parameters updated by the teacher model are its weights, via the formula θ′_t = αθ′_{t−1} + (1−α)θ_t, where θ′ denotes the teacher model's weights, θ the student model's weights, α a hyperparameter controlling the decay of the exponential moving average strategy, and t the training step.
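As a minimal sketch of the EMA update θ′_t = αθ′_{t−1} + (1−α)θ_t (the flat per-layer weight lists and names are illustrative, not from the patent):

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.99):
    """Update teacher weights as an exponential moving average of student weights."""
    return [alpha * tw + (1.0 - alpha) * sw
            for tw, sw in zip(teacher_weights, student_weights)]

# Toy example: one weight tensor per "layer".
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher[0])  # each entry moves 10% of the way toward the student: [0.1 0.1 0.1]
```

Because the teacher never receives gradient updates directly, repeating this step makes it a temporal ensemble of past student models, which is what stabilizes its segmentation targets.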
The consistency loss is computed as

$$\mathcal{L}_{con} = \delta_{mae}\big(f(X_u \otimes S(X_u)),\; f(X_u \otimes R(X_u))\big)$$

where \mathcal{L}_{con} is the consistency loss; ⊗ denotes the voxel-wise multiplication of two images; X_u ⊗ S(X_u) is the student segmentation region obtained by multiplying the original unlabeled MRI image with the student segmentation probability map; X_u ⊗ R(X_u) is the teacher segmentation region obtained by multiplying the original unlabeled MRI image with the teacher segmentation probability map; X_u is the original unlabeled MRI image; S(X_u) is the student segmentation probability map; R(X_u) is the teacher segmentation probability map; f(·) is the hierarchical feature map extracted from the corresponding segmentation region; h, w, d are the height, width, and depth of each image; and δ_mae is the mean absolute error over the discriminator's layer-wise features,

$$\delta_{mae}(a,b) = \frac{1}{K}\sum_{i=1}^{K}\big|f_i(a) - f_i(b)\big|$$

where K is the number of network layers in the discriminator and f_i(x) is the feature vector output by the i-th layer.
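A minimal numeric sketch of this multi-scale consistency loss; the two-layer "discriminator" features here are illustrative stand-ins for the patent's CNN, not its actual architecture:

```python
import numpy as np

def multiscale_features(region):
    """Illustrative stand-in for the discriminator's K layer outputs:
    layer 1 = the flattened region, layer 2 = a crude 2x-pooled version."""
    f1 = region.ravel()
    f2 = region.reshape(-1, 2).mean(axis=1)
    return [f1, f2]

def consistency_loss(x_u, s_prob, r_prob):
    """Mean over K layers of the MAE between student/teacher region features."""
    omega_s = x_u * s_prob            # student segmentation region X_u (x) S(X_u)
    omega_r = x_u * r_prob            # teacher segmentation region X_u (x) R(X_u)
    fs, fr = multiscale_features(omega_s), multiscale_features(omega_r)
    return np.mean([np.mean(np.abs(a - b)) for a, b in zip(fs, fr)])

x_u = np.ones((2, 2, 2))
s = np.full((2, 2, 2), 0.8)
r = np.full((2, 2, 2), 0.6)
loss = consistency_loss(x_u, s, r)
print(loss)  # constant 0.2 gap survives at every scale, so the loss is ~0.2
```

Comparing region features at several scales, rather than raw probability maps, is what lets spatial context enter the consistency term.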
The supervised segmentation loss is computed as

$$\mathcal{L}_{seg} = -\sum_{h,w,d}\sum_{c=1}^{C} Y_l^{(h,w,d,c)} \log S(X_l)^{(h,w,d,c)}$$

where \mathcal{L}_{seg} is the supervised segmentation loss; Y_l is the gold standard of the labeled images; h, w, d are the height, width, and depth of each image; C is the number of label categories and c one of those categories; X_l is the labeled MRI image; and S(X_l) is the segmentation probability map.
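A minimal sketch of this voxel-wise cross-entropy on a toy 1×1×2 volume with C = 2 classes (shapes and names are illustrative):

```python
import numpy as np

def supervised_seg_loss(y_gold, s_prob, eps=1e-12):
    """L_seg = -sum over voxels and classes of Y_l * log S(X_l)."""
    return -np.sum(y_gold * np.log(s_prob + eps))

# One-hot gold standard for an (h=1, w=1, d=2) volume, C=2 classes.
y = np.array([[[[1.0, 0.0], [0.0, 1.0]]]])   # shape (1, 1, 2, C)
p = np.array([[[[0.9, 0.1], [0.2, 0.8]]]])   # predicted probabilities
loss = supervised_seg_loss(y, p)
print(loss)  # -(log 0.9 + log 0.8), about 0.3285
```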
The method further comprises:

computing the self-training loss of the discriminator from the student segmentation probability map and its correspondingly assigned gold standard, obtaining the adversarial loss of the discriminator, combining the discriminator's self-training loss and adversarial loss with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimizing the semi-supervised image segmentation framework according to the updated total segmentation loss.
The self-training loss of the discriminator is computed as

$$\mathcal{L}_{self} = -\sum_{h,w,d}\sum_{c=1}^{C} \mathbb{1}\big[A\big(S(X_u)\,\|\,X_u \otimes S(X_u)\big) > \mu_{self}\big]\, \hat{Y}_u^{(h,w,d,c)} \log S(X_u)^{(h,w,d,c)}$$

where \mathcal{L}_{self} is the self-training loss of the discriminator; S(X_u) ∥ X_u ⊗ S(X_u) is the concatenation of the student segmentation probability map and the corresponding segmentation region, ∥ denoting the concatenation of two images; A(·) is the corresponding confidence map generated from S(X_u) ∥ X_u ⊗ S(X_u); μ_self is the confidence threshold; and \hat{Y}_u is the one-hot encoding of the ground truth generated from argmax_c S(X_u), which serves as the gold standard assigned to the student segmentation probability map.
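A minimal sketch of this confidence-gated pseudo-label loss; the confidence map is a given array rather than a discriminator output, and all names are illustrative:

```python
import numpy as np

def self_training_loss(s_prob, confidence, mu_self=0.5, eps=1e-12):
    """Pseudo-label cross-entropy: the one-hot argmax of S(X_u) supervises only
    voxels whose discriminator confidence exceeds the threshold mu_self."""
    pseudo = (s_prob == s_prob.max(axis=-1, keepdims=True)).astype(float)  # one-hot argmax
    mask = (confidence > mu_self).astype(float)[..., None]                 # voxel gate
    return -np.sum(mask * pseudo * np.log(s_prob + eps))

# Two voxels, C=2; only the first voxel is confident enough to self-train on.
s = np.array([[0.9, 0.1], [0.6, 0.4]])
conf = np.array([0.8, 0.3])
loss = self_training_loss(s, conf, mu_self=0.5)
print(loss)  # only -log(0.9) from the confident voxel, about 0.1054
```

The gate is what keeps low-confidence (likely wrong) pseudo-labels from polluting training.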
The adversarial loss of the discriminator is computed as

$$\mathcal{L}_{adv} = -\sum_{h,w,d} \log A\big(S(X_n)\,\|\,X_n \otimes S(X_n)\big)^{(h,w,d)}$$

where \mathcal{L}_{adv} is the adversarial loss of the discriminator, and X_n is the image set formed by the labeled MRI images X_l and the original unlabeled MRI images X_u, X_n = {X_l, X_u}.
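A minimal numeric sketch of this loss, treating the discriminator's confidence map as a given array (the names and the probability-style output are assumptions, not the patent's implementation):

```python
import numpy as np

def adversarial_loss(confidence_map, eps=1e-12):
    """-sum of log A(.) over voxels: small when the discriminator is fooled
    into assigning high confidence to the student's segmentation."""
    return -np.sum(np.log(confidence_map + eps))

conf = np.array([0.9, 0.8])   # discriminator confidence per voxel
loss = adversarial_loss(conf)
print(loss)  # -(log 0.9 + log 0.8), about 0.3285
```

Note this term applies to labeled and unlabeled samples alike, since it depends only on the discriminator's response.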
An embodiment of the present invention further provides a system for constructing a semi-supervised image segmentation framework, comprising an image segmentation framework construction unit, a supervised segmentation loss calculation unit, a consistency loss calculation unit, and an image segmentation framework optimization unit, wherein:

the image segmentation framework construction unit is configured to construct a semi-supervised image segmentation framework comprising a student model, a teacher model, and a discriminator;

the supervised segmentation loss calculation unit is configured to obtain labeled MRI images and their corresponding gold standards, feed the labeled MRI images into the student model as the first training set for training to obtain a segmentation probability map, and combine it with the gold standard to compute the supervised segmentation loss;

the consistency loss calculation unit is configured to obtain original unlabeled MRI images and their noisy counterparts produced by adding noise drawn from a preset Gaussian distribution, forming the second training set; feed the second training set into the student model and the teacher model for training, obtaining a student segmentation probability map and a teacher segmentation probability map; overlay each probability map on the original unlabeled MRI image to generate the corresponding student segmentation region and teacher segmentation region; and pass both to the discriminator for similarity comparison to compute the consistency loss, the teacher model updating its parameters during training from the student model's weights using an exponential moving average strategy;

the image segmentation framework optimization unit is configured to obtain the total segmentation loss from the supervised segmentation loss and the consistency loss, and to optimize the semi-supervised image segmentation framework according to the total segmentation loss.

The system further comprises:

an image segmentation framework re-optimization unit, configured to compute the self-training loss of the discriminator from the student segmentation probability map and its correspondingly assigned gold standard, obtain the adversarial loss of the discriminator, combine the discriminator's self-training loss and adversarial loss with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimize the semi-supervised image segmentation framework according to the updated total segmentation loss.
Implementing the embodiments of the present invention has the following beneficial effects:

1. The present invention improves the mean teacher model with a consistency mechanism based on multi-scale features, incorporating voxel-level regularization information into the semi-supervised model and making the mean teacher model better suited to image segmentation.

2. The present invention deeply integrates an adversarial network (a discriminator for adversarial learning), so that semi-supervised segmentation is achieved without additional image-level annotations; the adversarial network not only extracts multi-scale image features containing spatial context information, but also measures the confidence of the segmentation probability maps used in the self-training scheme.

3. The present invention establishes a general semi-supervised segmentation framework applicable to a variety of MRI (medical) images.
DESCRIPTION OF DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; other drawings obtained from them by a person of ordinary skill in the art without creative effort still fall within the scope of the present invention.
FIG. 1 is a flowchart of a method for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention;

FIG. 2 is an application scenario diagram of MRI images of four modalities before preprocessing, in a method for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a system for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, a method for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention comprises the following steps:

Step S1: construct a semi-supervised image segmentation framework comprising a student model, a teacher model, and a discriminator.
Specifically, the constructed semi-supervised image segmentation framework consists of two main modules: a mean teacher model and an adversarial network. In short, the framework deeply integrates the adversarial network into an improved mean teacher model; it comprises a mean teacher model formed by a student model S and a teacher model R, and an adversarial network formed by a discriminator. All of these models, including the discriminator, are based on CNNs; in particular, the student and teacher models share the same segmentation network architecture (e.g., U-Net).
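At shape level, the wiring of the three components can be sketched as follows; the "networks" here are toy functions standing in for the U-Net and the CNN discriminator, and the scalar weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def segment(volume, weight):
    """Toy stand-in for a U-Net: a sigmoid of a weighted copy of the input."""
    return 1.0 / (1.0 + np.exp(-(weight * volume)))

def discriminator_features(region):
    """Toy stand-in for the CNN discriminator's hierarchical features."""
    return [region.ravel(), np.array([region.mean()])]

x_u = rng.random((4, 4, 4))                 # unlabeled volume
noise = rng.normal(0.0, 0.1, x_u.shape)     # preset Gaussian perturbation

student_w, teacher_w = 1.0, 1.0             # same architecture, EMA-linked weights
s_prob = segment(x_u, student_w)            # student sees the clean volume
r_prob = segment(x_u + noise, teacher_w)    # teacher sees the noisy volume

omega_s, omega_r = x_u * s_prob, x_u * r_prob   # segmentation regions
fs = discriminator_features(omega_s)
fr = discriminator_features(omega_r)
print(len(fs), fs[0].shape)   # two feature "layers", the first over 64 voxels
```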
Step S2: obtain labeled MRI images and their corresponding gold standards, feed the labeled MRI images into the student model as the first training set for training to obtain a segmentation probability map, and combine it with the gold standard to compute the supervised segmentation loss.
Specifically, a labeled MRI image X_l is fed, together with its corresponding gold standard Y_l, into the student model S for training. After the segmentation probability map S(X_l) is obtained, the supervised segmentation loss \mathcal{L}_{seg} is computed by formula (1):

$$\mathcal{L}_{seg} = -\sum_{h,w,d}\sum_{c=1}^{C} Y_l^{(h,w,d,c)} \log S(X_l)^{(h,w,d,c)} \tag{1}$$

where h, w, d are the height, width, and depth of each image, C is the number of label categories, and c is one of those categories.
Step S3: obtain original unlabeled MRI images and their noisy counterparts produced by adding noise drawn from a preset Gaussian distribution, forming the second training set; feed the second training set into the student model and the teacher model for training, obtaining a student segmentation probability map and a teacher segmentation probability map; overlay each probability map on the original unlabeled MRI image to generate the corresponding student segmentation region and teacher segmentation region, and pass both to the discriminator for similarity comparison to compute a consistency loss. During training, the teacher model updates its parameters from the student model's weights using an exponential moving average strategy.
Specifically, the traditional mean teacher model involves two losses: a segmentation loss and a consistency loss, the latter usually computed directly from the segmentation maps of the student model S and the teacher model R. To overcome the inaccuracy caused by this direct comparison in the traditional mean teacher model, a consistency mechanism based on multi-scale features is used to improve it and make it better suited to image segmentation. The process is as follows:
Obtain the original unlabeled MRI image X_u and the noisy unlabeled MRI image produced by combining X_u with noise drawn from a preset Gaussian distribution, forming the second training set.

Feed the original unlabeled MRI image X_u of the second training set into the student model S for training, obtaining the corresponding student segmentation probability map S(X_u), and feed the noisy unlabeled MRI image into the teacher model R for training; during training, the teacher model R updates its parameters (i.e., its weights θ′) from the student model S's weights θ using an exponential moving average (EMA) strategy, yielding the teacher segmentation probability map R(X_u). The teacher weights are updated by θ′_t = αθ′_{t−1} + (1−α)θ_t, where α is a hyperparameter controlling the EMA decay and t is the training step.
Multiply the student segmentation probability map S(X_u) and the teacher segmentation probability map R(X_u) with the original unlabeled MRI image X_u pixel by pixel, obtaining the corresponding student segmentation region X_u ⊗ S(X_u) and teacher segmentation region X_u ⊗ R(X_u).

Pass the student segmentation region and the teacher segmentation region together to the discriminator A for similarity comparison, extract the student multi-scale features and the teacher multi-scale features, and compute the consistency loss \mathcal{L}_{con} from them according to formula (2):

$$\mathcal{L}_{con} = \delta_{mae}\big(f(X_u \otimes S(X_u)),\; f(X_u \otimes R(X_u))\big) \tag{2}$$

where ⊗ denotes the voxel-wise multiplication of two images; f(·) is the hierarchical feature map extracted from the corresponding segmentation region; and δ_mae is the mean absolute error over the discriminator's layer-wise features,

$$\delta_{mae}(a,b) = \frac{1}{K}\sum_{i=1}^{K}\big|f_i(a) - f_i(b)\big|$$

where K is the number of network layers in the discriminator A and f_i(x) is the feature vector output by the i-th layer.
It should be noted that the entire training set can be denoted S = {X_n, Y_l}, comprising all images X_n and the gold standards Y_l of the labeled images, where X_n = {X_l, X_u} = {x_1, …, x_L, x_{L+1}, …, x_{L+U}} ∈ R^{H×W×D×N} and Y_l = {y_1, …, y_L} ∈ R^{H×W×D×C×L}. Each image has size H×W×D, the number of label categories in each segmentation task is C, the number of images with ground-truth label maps is L, and the total number of images in the training set is N.
When the original unlabeled MRI image X_u is fed into the student model S, noise drawn from a Gaussian distribution is added to the same image to obtain the similar sample required for consistency training, producing a similar input for the teacher model R. Under the consistency assumption, the two networks are expected to produce similar segmentation results; at each training step t, the teacher model's weights θ′ are updated from the student model's weights θ using the exponential moving average.
Meanwhile, unlike previous mean teacher methods based on simple consistency, the framework takes the discriminator A used for adversarial learning as another essential component and designs a consistency loss computed from multi-scale features. Specifically, the student model S and the teacher model R output the student segmentation probability map S(X_u) and the teacher segmentation probability map R(X_u) for the original unlabeled MRI image X_u and its noisy counterpart, respectively. Each probability map is then overlaid on the original unlabeled MRI image X_u to obtain two sets of segmentation regions in the MRI, generated by multiplying the input MRI with the segmentation probability map pixel by pixel: the student segmentation region X_u ⊗ S(X_u) and the teacher segmentation region X_u ⊗ R(X_u). Consistency training encourages these two segmentation regions to be similar, rather than only considering the consistency of the segmentation probability maps as in the traditional mean teacher model.
Since CNNs can effectively learn image features at multiple scales, to better measure the consistency of the segmentation regions, the hierarchical features of each segmentation region are extracted from the CNN-based discriminator A and concatenated; the corresponding multi-scale features of the student segmentation region and the teacher segmentation region are then compared, and their discrepancy is taken as the difference between the two regions.
Step S4: obtain the total segmentation loss from the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
Specifically, the total segmentation loss \mathcal{L}_{total} is computed according to formula (3):

$$\mathcal{L}_{total} = \mathcal{L}_{seg} + \lambda_{con}\,\mathcal{L}_{con} \tag{3}$$

where λ_con is a weighting coefficient that balances the relative importance of the designed loss functions. The total segmentation loss \mathcal{L}_{total} is then used to optimize the semi-supervised image segmentation framework.
In this embodiment of the present invention, in addition to producing the multi-scale features used to compute the consistency loss, the discriminator A also outputs a confidence map for self-training. This confidence map can guide and constrain the target region, bringing the learned distribution closer to the true one. By thresholding the confidence map, reliable high-confidence regions can be obtained, from which high-confidence segmentation results are selected and converted into pseudo-labels for self-training. Thus, part of the valid segmentation results from the unlabeled MRI images X_u can be treated directly as labels and added to the training set, further enriching the dataset.
The self-training loss \mathcal{L}_{self} of the discriminator A is given by formula (4):

$$\mathcal{L}_{self} = -\sum_{h,w,d}\sum_{c=1}^{C} \mathbb{1}\big[A\big(S(X_u)\,\|\,X_u \otimes S(X_u)\big) > \mu_{self}\big]\, \hat{Y}_u^{(h,w,d,c)} \log S(X_u)^{(h,w,d,c)} \tag{4}$$

where S(X_u) ∥ X_u ⊗ S(X_u) is the concatenation of the student segmentation probability map and the corresponding segmentation region, ∥ denoting the concatenation of two images; A(·) is the corresponding confidence map generated from this concatenation; μ_self is the confidence threshold; and \hat{Y}_u is the one-hot encoding of the ground truth generated from argmax_c S(X_u), used as the gold standard for the student segmentation probability map only where the corresponding voxel value of the confidence map output by the discriminator A exceeds the user-defined threshold μ_self.
For adversarial learning, the discriminator A is also used to define the adversarial loss \mathcal{L}_{adv}, which further strengthens the student model's ability to fool the discriminator, as shown in formula (5):

$$\mathcal{L}_{adv} = -\sum_{h,w,d} \log A\big(S(X_n)\,\|\,X_n \otimes S(X_n)\big)^{(h,w,d)} \tag{5}$$

The adversarial loss \mathcal{L}_{adv} applies to all training samples, since it depends only on the adversarial network and is independent of whether a label is available.
During the adversarial training of the framework, the student model S and the teacher model R are forced to generate consistent segmentation probability maps to fool the discriminator A, while the discriminator A is trained to better distinguish student segmentation probability maps from teacher segmentation probability maps. The spatial cross-entropy loss of the discriminator A is therefore defined as in formula (6):

$$\mathcal{L}_{D} = -\sum_{h,w,d} \Big[(1 - E_n)\log\big(1 - A\big(S(X_n)\,\|\,X_n \otimes S(X_n)\big)^{(h,w,d)}\big) + E_n \log A\big(R(X_n)\,\|\,X_n \otimes R(X_n)\big)^{(h,w,d)}\Big] \tag{6}$$

where E_n = 0 indicates that the segmentation probability map fed to the discriminator A was generated by the student model S, and E_n = 1 indicates that the sample comes from the teacher model R; R(X_n) ∥ X_n ⊗ R(X_n), the concatenation of the teacher segmentation probability map and the teacher segmentation region, is the other input to the discriminator A.
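A minimal numeric sketch of this binary spatial cross-entropy; the discriminator outputs are given arrays rather than network responses, and all names are illustrative:

```python
import numpy as np

def discriminator_loss(a_student, a_teacher, eps=1e-12):
    """Spatial CE: push A toward 0 on student inputs (E_n = 0)
    and toward 1 on teacher inputs (E_n = 1)."""
    return (-np.sum(np.log(1.0 - a_student + eps))
            - np.sum(np.log(a_teacher + eps)))

a_s = np.array([0.2, 0.1])   # discriminator scores on student samples
a_t = np.array([0.9, 0.8])   # discriminator scores on teacher samples
loss = discriminator_loss(a_s, a_t)
print(loss)  # -(log 0.8 + log 0.9) - (log 0.9 + log 0.8), about 0.657
```

Minimizing this loss for A while the segmentation networks minimize the adversarial loss is the usual two-player objective of adversarial training.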
It follows that the discriminator A's self-training loss \mathcal{L}_{self} and adversarial loss \mathcal{L}_{adv} can be combined with the supervised segmentation loss \mathcal{L}_{seg} and the consistency loss \mathcal{L}_{con} to update the total segmentation loss \mathcal{L}_{total}. The method therefore further comprises:

computing the self-training loss of the discriminator from the student segmentation probability map and its correspondingly assigned gold standard, obtaining the adversarial loss of the discriminator, combining the discriminator's self-training loss and adversarial loss with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimizing the semi-supervised image segmentation framework according to the updated total segmentation loss.
That is, the total segmentation loss
Figure PCTCN2020113496-appb-000057
is updated according to formula (7):
Figure PCTCN2020113496-appb-000058
where λ_con, λ_self and λ_adv are the corresponding weighting coefficients used to balance the relative importance of the designed loss terms.
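Formula (7) itself appears only as an image; based on the loss terms and weighting coefficients named above, the combined objective can be illustrated as a simple weighted sum. The default coefficient values below are placeholders, not the patent's:

```python
def total_segmentation_loss(l_seg, l_con, l_self, l_adv,
                            lam_con=0.1, lam_self=0.1, lam_adv=0.01):
    """Supervised segmentation loss plus weighted consistency,
    self-training and adversarial terms; the lambdas balance the
    relative importance of each designed loss."""
    return l_seg + lam_con * l_con + lam_self * l_self + lam_adv * l_adv
```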
FIG. 2 shows an application scenario of brain MRI segmentation with joint training of the mean teacher model and the adversarial network, in a method for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention.
As shown in FIG. 3, an embodiment of the present invention provides a system for constructing a semi-supervised image segmentation framework, comprising an image segmentation framework construction unit 110, a supervised segmentation loss calculation unit 120, a consistency loss calculation unit 130, and an image segmentation framework optimization unit 140; wherein,
the image segmentation framework construction unit 110 is configured to construct a semi-supervised image segmentation framework comprising a student model, a teacher model, and a discriminator;
the supervised segmentation loss calculation unit 120 is configured to obtain labeled MRI images and their corresponding gold standards, import the labeled MRI images into the student model as first training set images for training to obtain segmentation probability maps, and further combine the gold standards to calculate a supervised segmentation loss;
the consistency loss calculation unit 130 is configured to obtain original unlabeled MRI images and the noisy unlabeled MRI images formed by combining them with preset Gaussian-distributed noise, to obtain second training set images; to import the second training set images into the student model and the teacher model for separate training, obtaining corresponding student segmentation probability result maps and teacher segmentation probability result maps; and, after the student and teacher segmentation probability result maps are each overlaid on the original unlabeled MRI image, to generate the corresponding student segmentation region and teacher segmentation region and pass them together to the discriminator for similarity comparison, so as to calculate a consistency loss; wherein, during training, the teacher model updates its model parameters from the weights of the student model using an exponential moving average strategy;
the image segmentation framework optimization unit 140 is configured to obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and to optimize the semi-supervised image segmentation framework according to the total segmentation loss.
The system further includes:
an image segmentation framework re-optimization unit 150, configured to calculate the self-training loss of the discriminator according to the student segmentation probability result map and its correspondingly set gold standard, obtain the adversarial loss of the discriminator, further combine the self-training loss and adversarial loss of the discriminator with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimize the semi-supervised image segmentation framework according to the updated total segmentation loss.
Implementing the embodiments of the present invention has the following beneficial effects:
1. The present invention improves the mean teacher model with a consistency mechanism based on multi-scale features, incorporating voxel-level regularization information into the semi-supervised model, which makes the mean teacher model better suited to image segmentation;
2. The present invention is deeply integrated with an adversarial network (e.g., a discriminator for adversarial learning) and achieves semi-supervised segmentation without additional image-level labels; moreover, the adversarial network not only extracts multi-scale image features containing spatial context information, but also measures the confidence of the segmentation probability maps used in the self-training scheme;
3. The present invention establishes a general semi-supervised segmentation framework applicable to various MRI images (medical images).
It is worth noting that, in the above system embodiment, the included system units are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other, and are not used to limit the protection scope of the present invention.
Those of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be implemented by a program instructing relevant hardware, and the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk.
What is disclosed above is only a preferred embodiment of the present invention, which of course cannot limit the scope of the rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (10)

  1. A method for constructing a semi-supervised image segmentation framework, characterized by comprising the following steps:
    Step S1: constructing a semi-supervised image segmentation framework comprising a student model, a teacher model, and a discriminator;
    Step S2: obtaining labeled MRI images and their corresponding gold standards, importing the labeled MRI images into the student model as first training set images for training to obtain segmentation probability maps, and further combining the gold standards to calculate a supervised segmentation loss;
    Step S3: obtaining original unlabeled MRI images and the noisy unlabeled MRI images formed by combining them with preset Gaussian-distributed noise, to obtain second training set images; importing the second training set images into the student model and the teacher model for separate training to obtain corresponding student segmentation probability result maps and teacher segmentation probability result maps; after the student and teacher segmentation probability result maps are each overlaid on the original unlabeled MRI image, generating the corresponding student segmentation region and teacher segmentation region and passing them together to the discriminator for similarity comparison, so as to calculate a consistency loss; wherein, during training, the teacher model updates its model parameters from the weights of the student model using an exponential moving average strategy;
    Step S4: obtaining a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimizing the semi-supervised image segmentation framework according to the total segmentation loss.
  2. The method for constructing a semi-supervised image segmentation framework according to claim 1, characterized in that step S3 specifically comprises:
    obtaining an original unlabeled MRI image and the noisy unlabeled MRI image formed by combining the original unlabeled MRI image with preset Gaussian-distributed noise, to obtain second training set images;
    importing the original unlabeled MRI image of the second training set images into the student model for training to obtain a corresponding student segmentation probability result map, and importing the noisy unlabeled MRI image of the second training set images into the teacher model for training, wherein during training the teacher model updates its model parameters from the weights of the student model using an exponential moving average strategy, to obtain a teacher segmentation probability result map;
    multiplying the student segmentation probability result map and the teacher segmentation probability result map with the original unlabeled MRI image pixel by pixel, respectively, to obtain a corresponding student segmentation region and teacher segmentation region;
    passing the student segmentation region and the teacher segmentation region together to the discriminator for similarity comparison, extracting student multi-scale features and teacher multi-scale features respectively, and calculating a consistency loss from the student multi-scale features and the teacher multi-scale features.
  3. The method for constructing a semi-supervised image segmentation framework according to claim 2, characterized in that the model parameters updated by the teacher model are its weights, updated through the formula θ'_t = α·θ'_{t-1} + (1 − α)·θ_t; where θ' is the weight of the teacher model, θ is the weight of the student model, α is a hyperparameter controlling the decay of the exponential moving average strategy, and t is the training step index.
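The exponential moving average update in claim 3 can be written directly from the stated formula. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t:
    the teacher weights track a slow exponential moving average
    of the student weights as training proceeds."""
    return alpha * teacher_w + (1.0 - alpha) * student_w
```

With a large α the teacher changes slowly, smoothing the student's trajectory across training steps.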
  4. The method for constructing a semi-supervised image segmentation framework according to claim 2, characterized in that the consistency loss is calculated as
    Figure PCTCN2020113496-appb-100001
    where
    Figure PCTCN2020113496-appb-100002 is the consistency loss; Figure PCTCN2020113496-appb-100003 denotes the voxel-by-voxel multiplication of two images; Figure PCTCN2020113496-appb-100004 is the student segmentation region obtained by multiplying the original unlabeled MRI image with the student segmentation probability result map; Figure PCTCN2020113496-appb-100005 is the teacher segmentation region obtained by multiplying the original unlabeled MRI image with the teacher segmentation probability result map; X_u is the original unlabeled MRI image; S(X_u) is the student segmentation probability result map; R(X_u) is the teacher segmentation probability result map; f(·) is the hierarchical feature map extracted from the corresponding segmentation region; h, w, d are the height, width, and depth of each image; δ_mae is
    Figure PCTCN2020113496-appb-100006
    K is the number of network layers in the discriminator; and f(x_i) is the feature vector output by the i-th layer.
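The full claim-4 expression is available only as images, but its mechanism is described in the text: form segmentation regions by voxel-wise products, then compare per-layer features with a mean absolute error averaged over the K layers. The sketch below illustrates that mechanism; the feature extractor is a stand-in for the discriminator's hidden layers, and all names are illustrative:

```python
import numpy as np

def consistency_loss(image, student_prob, teacher_prob, feature_fn):
    """Multi-scale feature consistency between student and teacher regions.

    feature_fn maps a segmentation region to a list of K per-layer
    feature arrays (stand-in for the discriminator's layers)."""
    region_s = image * student_prob          # voxel-wise product
    region_r = image * teacher_prob
    feats_s = feature_fn(region_s)
    feats_r = feature_fn(region_r)
    # mean absolute error, averaged over the K layers
    return float(np.mean([np.mean(np.abs(fs - fr))
                          for fs, fr in zip(feats_s, feats_r)]))
```

Identical student and teacher predictions give a loss of zero; any disagreement between the regions' features raises it.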
  5. The method for constructing a semi-supervised image segmentation framework according to claim 1, characterized in that the supervised segmentation loss is calculated as
    Figure PCTCN2020113496-appb-100007
    where
    Figure PCTCN2020113496-appb-100008 is the supervised segmentation loss; Y_l is the gold standard of the labeled image; h, w, d are the height, width, and depth of each image; C is the number of label classes; c is one of the C label classes; X_l is the labeled MRI image; and S(X_l) is the segmentation probability map.
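Claim 5's formula is rendered as an image; a voxel-wise multi-class cross-entropy matching the listed symbols (one-hot gold standard Y_l, probability map S(X_l), sum over the C classes, average over the h·w·d voxels) can be sketched as follows, with illustrative names:

```python
import numpy as np

def supervised_seg_loss(probs, gold_onehot):
    """probs, gold_onehot: arrays of shape (C, H, W, D).
    Cross-entropy summed over the class axis, averaged over voxels."""
    eps = 1e-7  # guard against log(0)
    return float(-np.mean(np.sum(gold_onehot * np.log(probs + eps), axis=0)))
```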
  6. The method for constructing a semi-supervised image segmentation framework according to claim 4 or 5, characterized in that the method further comprises:
    calculating the self-training loss of the discriminator according to the student segmentation probability result map and its correspondingly set gold standard, obtaining the adversarial loss of the discriminator, further combining the self-training loss and adversarial loss of the discriminator with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimizing the semi-supervised image segmentation framework according to the updated total segmentation loss.
  7. The method for constructing a semi-supervised image segmentation framework according to claim 6, characterized in that the self-training loss of the discriminator is calculated as
    Figure PCTCN2020113496-appb-100009
    Figure PCTCN2020113496-appb-100010
    where
    Figure PCTCN2020113496-appb-100011 is the self-training loss of the discriminator; Figure PCTCN2020113496-appb-100012 is the concatenation of the student segmentation probability result map and the corresponding segmentation region, where || denotes the concatenation of two images; A(·) is the corresponding confidence map generated from Figure PCTCN2020113496-appb-100013 and Figure PCTCN2020113496-appb-100014; μ_self is the confidence threshold; Figure PCTCN2020113496-appb-100015 is the one-hot encoding of the ground truth generated from argmax_c S(X_u); and Figure PCTCN2020113496-appb-100016 is the gold standard correspondingly set for the student segmentation probability result map.
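The claim-7 formulas exist only as images, but the described mechanism is a confidence-masked self-training loss: pseudo-labels are the one-hot encoding of argmax_c S(X_u), and only voxels whose discriminator confidence A(·) exceeds the threshold μ_self contribute. The following sketch illustrates that idea; the shapes, the default threshold, and all names are assumptions for illustration:

```python
import numpy as np

def self_training_loss(student_probs, conf_map, mu_self=0.2):
    """Confidence-masked self-training loss.

    student_probs: (C, H, W) segmentation probabilities on unlabeled data
    conf_map:      (H, W) discriminator confidence A(.) per voxel
    """
    eps = 1e-7
    c = student_probs.shape[0]
    # one-hot pseudo-label from argmax_c of the student's own prediction
    pseudo = np.eye(c)[np.argmax(student_probs, axis=0)]       # (H, W, C)
    pseudo = np.moveaxis(pseudo, -1, 0)                        # (C, H, W)
    mask = (conf_map > mu_self).astype(float)                  # indicator
    ce = -np.sum(pseudo * np.log(student_probs + eps), axis=0) # (H, W)
    return float(np.sum(mask * ce) / max(np.sum(mask), 1.0))
```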
  8. The method for constructing a semi-supervised image segmentation framework according to claim 6, characterized in that the adversarial loss of the discriminator is calculated as
    Figure PCTCN2020113496-appb-100017
    where
    Figure PCTCN2020113496-appb-100018 is the adversarial loss of the discriminator; X_n is the image set formed by the labeled MRI image X_l and the original unlabeled MRI image X_u, X_n = {X_l, X_u}.
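The claim-8 formula is again an image; consistent with the framework's description (the student tries to make its outputs look teacher-like to the discriminator over both labeled and unlabeled images X_n = {X_l, X_u}), a generator-side adversarial term can be sketched as follows, with illustrative names:

```python
import numpy as np

def adversarial_loss(conf_map):
    """Generator-side adversarial term: small when the discriminator's
    confidence map A(.) rates the student's output as teacher-like
    (values near 1), large when it is confidently rejected."""
    eps = 1e-7
    return float(-np.mean(np.log(np.clip(conf_map, eps, 1.0))))
```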
  9. A system for constructing a semi-supervised image segmentation framework, characterized by comprising an image segmentation framework construction unit, a supervised segmentation loss calculation unit, a consistency loss calculation unit, and an image segmentation framework optimization unit; wherein,
    the image segmentation framework construction unit is configured to construct a semi-supervised image segmentation framework comprising a student model, a teacher model, and a discriminator;
    the supervised segmentation loss calculation unit is configured to obtain labeled MRI images and their corresponding gold standards, import the labeled MRI images into the student model as first training set images for training to obtain segmentation probability maps, and further combine the gold standards to calculate a supervised segmentation loss;
    the consistency loss calculation unit is configured to obtain original unlabeled MRI images and the noisy unlabeled MRI images formed by combining them with preset Gaussian-distributed noise, to obtain second training set images; to import the second training set images into the student model and the teacher model for separate training, obtaining corresponding student segmentation probability result maps and teacher segmentation probability result maps; and, after the student and teacher segmentation probability result maps are each overlaid on the original unlabeled MRI image, to generate the corresponding student segmentation region and teacher segmentation region and pass them together to the discriminator for similarity comparison, so as to calculate a consistency loss; wherein, during training, the teacher model updates its model parameters from the weights of the student model using an exponential moving average strategy;
    the image segmentation framework optimization unit is configured to obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and to optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  10. The system for constructing a semi-supervised image segmentation framework according to claim 9, characterized by further comprising:
    an image segmentation framework re-optimization unit, configured to calculate the self-training loss of the discriminator according to the student segmentation probability result map and its correspondingly set gold standard, obtain the adversarial loss of the discriminator, further combine the self-training loss and adversarial loss of the discriminator with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimize the semi-supervised image segmentation framework according to the updated total segmentation loss.
PCT/CN2020/113496 2020-08-31 2020-09-04 Method and system for constructing semi-supervised image segmentation framework WO2022041307A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010892241.7A CN112150478B (en) 2020-08-31 2020-08-31 Method and system for constructing semi-supervised image segmentation framework
CN202010892241.7 2020-08-31

Publications (1)

Publication Number Publication Date
WO2022041307A1 true WO2022041307A1 (en) 2022-03-03

Family

ID=73890865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113496 WO2022041307A1 (en) 2020-08-31 2020-09-04 Method and system for constructing semi-supervised image segmentation framework

Country Status (2)

Country Link
CN (1) CN112150478B (en)
WO (1) WO2022041307A1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332135A (en) * 2022-03-10 2022-04-12 之江实验室 Semi-supervised medical image segmentation method and device based on dual-model interactive learning
CN114549842A (en) * 2022-04-22 2022-05-27 山东建筑大学 Self-adaptive semi-supervised image segmentation method and system based on uncertain knowledge domain
CN114693753A (en) * 2022-03-24 2022-07-01 北京理工大学 Three-dimensional ultrasonic elastic registration method and device based on texture keeping constraint
CN114742799A (en) * 2022-04-18 2022-07-12 华中科技大学 Industrial scene unknown type defect segmentation method based on self-supervision heterogeneous network
CN114882227A (en) * 2022-07-07 2022-08-09 南方医科大学第三附属医院(广东省骨科研究院) Human tissue image segmentation method and related equipment
CN114882325A (en) * 2022-07-12 2022-08-09 之江实验室 Semi-supervisor detection and training method and device based on two-stage object detector
CN114897914A (en) * 2022-03-16 2022-08-12 华东师范大学 Semi-supervised CT image segmentation method based on confrontation training
CN114926471A (en) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN115496732A (en) * 2022-09-26 2022-12-20 电子科技大学 Semi-supervised heart semantic segmentation algorithm
CN116188876A (en) * 2023-03-29 2023-05-30 上海锡鼎智能科技有限公司 Semi-supervised learning method and semi-supervised learning device based on information mixing
CN116258861A (en) * 2023-03-20 2023-06-13 南通锡鼎智能科技有限公司 Semi-supervised semantic segmentation method and segmentation device based on multi-label learning
CN116468746A (en) * 2023-03-27 2023-07-21 华东师范大学 Bidirectional copy-paste semi-supervised medical image segmentation method
CN116645507A (en) * 2023-05-18 2023-08-25 丽水瑞联医疗科技有限公司 Placenta image processing method and system based on semantic segmentation
CN116664602A (en) * 2023-07-26 2023-08-29 中南大学 OCTA blood vessel segmentation method and imaging method based on few sample learning
CN116778239A (en) * 2023-06-16 2023-09-19 酷哇科技有限公司 Instance segmentation model-oriented semi-supervised training method and equipment
CN117173401A (en) * 2022-12-06 2023-12-05 南华大学 Semi-supervised medical image segmentation method and system based on cross guidance and feature level consistency dual regularization
CN117333874A (en) * 2023-10-27 2024-01-02 江苏新希望科技有限公司 Image segmentation method, system, storage medium and device
CN117593648A (en) * 2024-01-17 2024-02-23 中国人民解放军海军航空大学 Remote sensing target building extraction method based on weak supervision learning
CN117765532A (en) * 2024-02-22 2024-03-26 中国科学院宁波材料技术与工程研究所 cornea Langerhans cell segmentation method and device based on confocal microscopic image

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734723B (en) * 2021-01-08 2023-06-30 温州医科大学 Multi-source data-oriented breast tumor image classification prediction method and device
CN112749801A (en) * 2021-01-22 2021-05-04 上海商汤智能科技有限公司 Neural network training and image processing method and device
CN113129309B (en) * 2021-03-04 2023-04-07 同济大学 Medical image semi-supervised segmentation system based on object context consistency constraint
CN113256646B (en) * 2021-04-13 2024-03-22 浙江工业大学 Cerebrovascular image segmentation method based on semi-supervised learning
CN113239924B (en) * 2021-05-21 2022-04-26 上海交通大学 Weak supervision target detection method and system based on transfer learning
CN113256639A (en) * 2021-05-27 2021-08-13 燕山大学 Coronary angiography blood vessel image segmentation method based on semi-supervised average teacher model
CN113344896B (en) * 2021-06-24 2023-01-17 鹏城实验室 Breast CT image focus segmentation model training method and system
CN113763406B (en) * 2021-07-28 2024-04-26 华中师范大学 Infant brain MRI (magnetic resonance imaging) segmentation method based on semi-supervised learning
CN113743474B (en) * 2021-08-10 2023-09-26 扬州大学 Digital picture classification method and system based on collaborative semi-supervised convolutional neural network
CN113793304A (en) * 2021-08-23 2021-12-14 天津大学 Intelligent segmentation method for lung cancer target area and organs at risk
CN117523327A (en) * 2022-07-29 2024-02-06 马上消费金融股份有限公司 Image processing method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087303A (en) * 2018-08-15 2018-12-25 中山大学 The frame of semantic segmentation modelling effect is promoted based on transfer learning
CN111275713A (en) * 2020-02-03 2020-06-12 武汉大学 Cross-domain semantic segmentation method based on countermeasure self-integration network
CN111401406A (en) * 2020-02-21 2020-07-10 华为技术有限公司 Neural network training method, video frame processing method and related equipment
CN111489365A (en) * 2020-04-10 2020-08-04 上海商汤临港智能科技有限公司 Neural network training method, image processing method and device
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091333A (en) * 2014-07-01 2014-10-08 黄河科技学院 Multi-class unsupervised color texture image segmentation method based on credible regional integration
CN108764462A (en) * 2018-05-29 2018-11-06 成都视观天下科技有限公司 A kind of convolutional neural networks optimization method of knowledge based distillation
CN109949317B (en) * 2019-03-06 2020-12-11 东南大学 Semi-supervised image example segmentation method based on gradual confrontation learning
CN109978850B (en) * 2019-03-21 2020-12-22 华南理工大学 Multi-modal medical image semi-supervised deep learning segmentation system
CN110059740A (en) * 2019-04-12 2019-07-26 杭州电子科技大学 A kind of deep learning semantic segmentation model compression method for embedded mobile end
CN110059698B (en) * 2019-04-30 2022-12-23 福州大学 Semantic segmentation method and system based on edge dense reconstruction for street view understanding
CN110428426A (en) * 2019-07-02 2019-11-08 温州医科大学 A kind of MRI image automatic division method based on improvement random forests algorithm
CN110503654B (en) * 2019-08-01 2022-04-26 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on generation countermeasure network and electronic equipment
CN110490881A (en) * 2019-08-19 2019-11-22 腾讯科技(深圳)有限公司 Medical image dividing method, device, computer equipment and readable storage medium storing program for executing
CN111047594B (en) * 2019-11-06 2023-04-07 安徽医科大学 Tumor MRI weak supervised learning analysis modeling method and model thereof
CN111080645B (en) * 2019-11-12 2023-08-15 中国矿业大学 Remote sensing image semi-supervised semantic segmentation method based on generation type countermeasure network
CN111062951B (en) * 2019-12-11 2022-03-25 华中科技大学 Knowledge distillation method based on semantic segmentation intra-class feature difference
CN111369618A (en) * 2020-02-20 2020-07-03 清华大学 Human body posture estimation method and device based on compressed sampling RF signals
CN111402278B (en) * 2020-02-21 2023-10-27 华为云计算技术有限公司 Segmentation model training method, image labeling method and related devices
CN111369535B (en) * 2020-03-05 2023-04-07 笑纳科技(苏州)有限公司 Cell detection method
CN111507993B (en) * 2020-03-18 2023-05-19 南方电网科学研究院有限责任公司 Image segmentation method, device and storage medium based on generation countermeasure network
CN111507227B (en) * 2020-04-10 2023-04-18 南京汉韬科技有限公司 Multi-student individual segmentation and state autonomous identification method based on deep learning


Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332135A (en) * 2022-03-10 2022-04-12 之江实验室 Semi-supervised medical image segmentation method and device based on dual-model interactive learning
CN114897914A (en) * 2022-03-16 2022-08-12 华东师范大学 Semi-supervised CT image segmentation method based on confrontation training
CN114693753B (en) * 2022-03-24 2024-05-03 北京理工大学 Three-dimensional ultrasonic elastic registration method and device based on texture retention constraint
CN114693753A (en) * 2022-03-24 2022-07-01 北京理工大学 Three-dimensional ultrasonic elastic registration method and device based on texture keeping constraint
CN114742799A (en) * 2022-04-18 2022-07-12 华中科技大学 Industrial scene unknown type defect segmentation method based on self-supervision heterogeneous network
CN114742799B (en) * 2022-04-18 2024-04-26 华中科技大学 Industrial scene unknown type defect segmentation method based on self-supervision heterogeneous network
CN114549842A (en) * 2022-04-22 2022-05-27 山东建筑大学 Self-adaptive semi-supervised image segmentation method and system based on uncertain knowledge domain
CN114926471A (en) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN114926471B (en) * 2022-05-24 2023-03-28 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN114882227A (en) * 2022-07-07 2022-08-09 南方医科大学第三附属医院(广东省骨科研究院) Human tissue image segmentation method and related equipment
CN114882227B (en) * 2022-07-07 2022-11-04 南方医科大学第三附属医院(广东省骨科研究院) Human tissue image segmentation method and related equipment
CN114882325A (en) * 2022-07-12 2022-08-09 之江实验室 Semi-supervised object detection and training method and device based on two-stage object detector
CN114882325B (en) * 2022-07-12 2022-12-02 之江实验室 Semi-supervised object detection and training method and device based on two-stage object detector
CN115496732B (en) * 2022-09-26 2024-03-15 电子科技大学 Semi-supervised heart semantic segmentation algorithm
CN115496732A (en) * 2022-09-26 2022-12-20 电子科技大学 Semi-supervised heart semantic segmentation algorithm
CN117173401B (en) * 2022-12-06 2024-05-03 南华大学 Semi-supervised medical image segmentation method and system based on cross guidance and feature level consistency dual regularization
CN117173401A (en) * 2022-12-06 2023-12-05 南华大学 Semi-supervised medical image segmentation method and system based on cross guidance and feature level consistency dual regularization
CN116258861A (en) * 2023-03-20 2023-06-13 南通锡鼎智能科技有限公司 Semi-supervised semantic segmentation method and segmentation device based on multi-label learning
CN116258861B (en) * 2023-03-20 2023-09-22 南通锡鼎智能科技有限公司 Semi-supervised semantic segmentation method and segmentation device based on multi-label learning
CN116468746B (en) * 2023-03-27 2023-12-26 华东师范大学 Bidirectional copy-paste semi-supervised medical image segmentation method
CN116468746A (en) * 2023-03-27 2023-07-21 华东师范大学 Bidirectional copy-paste semi-supervised medical image segmentation method
CN116188876B (en) * 2023-03-29 2024-04-19 上海锡鼎智能科技有限公司 Semi-supervised learning method and semi-supervised learning device based on information mixing
CN116188876A (en) * 2023-03-29 2023-05-30 上海锡鼎智能科技有限公司 Semi-supervised learning method and semi-supervised learning device based on information mixing
CN116645507A (en) * 2023-05-18 2023-08-25 丽水瑞联医疗科技有限公司 Placenta image processing method and system based on semantic segmentation
CN116778239A (en) * 2023-06-16 2023-09-19 酷哇科技有限公司 Instance segmentation model-oriented semi-supervised training method and equipment
CN116778239B (en) * 2023-06-16 2024-06-11 酷哇科技有限公司 Instance segmentation model-oriented semi-supervised training method and equipment
CN116664602B (en) * 2023-07-26 2023-11-03 中南大学 OCTA blood vessel segmentation method and imaging method based on few-shot learning
CN116664602A (en) * 2023-07-26 2023-08-29 中南大学 OCTA blood vessel segmentation method and imaging method based on few-shot learning
CN117333874A (en) * 2023-10-27 2024-01-02 江苏新希望科技有限公司 Image segmentation method, system, storage medium and device
CN117593648B (en) * 2024-01-17 2024-04-05 中国人民解放军海军航空大学 Remote sensing target building extraction method based on weak supervision learning
CN117593648A (en) * 2024-01-17 2024-02-23 中国人民解放军海军航空大学 Remote sensing target building extraction method based on weak supervision learning
CN117765532A (en) * 2024-02-22 2024-03-26 中国科学院宁波材料技术与工程研究所 Cornea Langerhans cell segmentation method and device based on confocal microscopic image
CN117765532B (en) * 2024-02-22 2024-05-31 中国科学院宁波材料技术与工程研究所 Cornea Langerhans cell segmentation method and device based on confocal microscopic image

Also Published As

Publication number Publication date
CN112150478B (en) 2021-06-22
CN112150478A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2022041307A1 (en) Method and system for constructing semi-supervised image segmentation framework
WO2020215984A1 (en) Medical image detection method based on deep learning, and related device
CN109493308B (en) Medical image synthesis and classification method based on conditional multi-discriminator generative adversarial network
Tang et al. A two-stage approach for automatic liver segmentation with Faster R-CNN and DeepLab
Pu et al. Fetal cardiac cycle detection in multi-resource echocardiograms using hybrid classification framework
Kisilev et al. Medical image description using multi-task-loss CNN
Solovyev et al. 3D convolutional neural networks for stalled brain capillary detection
CN111325750B (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
US8170303B2 (en) Automatic cardiac view classification of echocardiography
CN108898606A (en) Automatic division method, system, equipment and the storage medium of medical image
CN111932529B (en) Image classification and segmentation method, device and system
Huang et al. Omni-supervised learning: scaling up to large unlabelled medical datasets
CN114897914B (en) Semi-supervised CT image segmentation method based on adversarial training
Li et al. Recurrent aggregation learning for multi-view echocardiographic sequences segmentation
Salim et al. Ridge regression neural network for pediatric bone age assessment
Cui et al. Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation
Venturini et al. Uncertainty estimates as data selection criteria to boost omni-supervised learning
CN110992352A (en) Automatic infant head circumference CT image measuring method based on convolutional neural network
Sirjani et al. Automatic cardiac evaluations using a deep video object segmentation network
Wang et al. Medical matting: a new perspective on medical segmentation with uncertainty
Li et al. Automatic annotation algorithm of medical radiological images using convolutional neural network
CN113643297B (en) Computer-aided age analysis method based on neural network
Wang et al. Deep learning based fetal middle cerebral artery segmentation in large-scale ultrasound images
Wang et al. Automatic measurement of fetal head circumference using a novel GCN-assisted deep convolutional network
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950976

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20950976

Country of ref document: EP

Kind code of ref document: A1