WO2022041307A1 - Method and system for constructing a semi-supervised image segmentation framework - Google Patents

Method and system for constructing a semi-supervised image segmentation framework

Info

Publication number
WO2022041307A1
WO2022041307A1 PCT/CN2020/113496 CN2020113496W WO2022041307A1 WO 2022041307 A1 WO2022041307 A1 WO 2022041307A1 CN 2020113496 W CN2020113496 W CN 2020113496W WO 2022041307 A1 WO2022041307 A1 WO 2022041307A1
Authority
WO
WIPO (PCT)
Prior art keywords
segmentation
image
loss
student
supervised
Prior art date
Application number
PCT/CN2020/113496
Other languages
English (en)
Chinese (zh)
Inventor
潘志方
陈高翔
茹劲涛
Original Assignee
温州医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 温州医科大学 filed Critical 温州医科大学
Publication of WO2022041307A1 publication Critical patent/WO2022041307A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the invention relates to the technical field of image processing, in particular to a method and system for constructing a semi-supervised image segmentation framework.
  • Medical image segmentation plays a vital role in clinical applications and scientific research. Accurate medical image segmentation can provide important quantitative measures for lesion grading, classification, and disease diagnosis, and further help clinicians evaluate treatment response to related diseases and provide a reliable basis for surgical planning and rehabilitation strategies.
  • weakly supervised learning does not require voxel-level labeled data, but instead uses image-level labeled data as a weak supervisory signal during network training. Nonetheless, image-level labels or bounding boxes for medical images also require domain knowledge and are expensive to acquire, so the application of weakly supervised learning models in medical imaging is still limited, where simple annotations such as image-level labels and bounding boxes are still required.
  • This semi-supervised learning method utilizes both labeled and unlabeled data, striking a balance between fully supervised and unsupervised learning, so that a model can be trained to accurately segment medical images with only a small number of labeled samples; this may be a more meaningful option for designing medical image segmentation frameworks.
  • the existing semi-supervised segmentation methods not only utilize unlabeled data, but also require image-level labels (such as bounding-box labels) to assist the training and learning of the semi-supervised network; they are therefore not semi-supervised in the true sense, particularly in the context of 3D medical images.
  • the technical problem to be solved by the embodiments of the present invention is to provide a method and system for constructing a semi-supervised image segmentation framework, which improves the mean teacher model to establish a general semi-supervised segmentation framework that can be used for 3D medical images, with no additional image-level labels required.
  • an embodiment of the present invention provides a method for constructing a semi-supervised image segmentation framework, including the following steps:
  • Step S1: construct a semi-supervised image segmentation framework comprising a student model, a teacher model and a discriminator;
  • Step S2: obtain a labeled MRI image and its corresponding gold standard, import the labeled MRI image into the student model as the first training set image for training to obtain a segmentation probability map, and further combine it with the gold standard to calculate the supervised segmentation loss;
  • Step S3: obtain the original unlabeled MRI image and the noise unlabeled MRI image obtained by combining it with noise of a preset Gaussian distribution, so as to obtain a second training set image, and import the second training set image into the student model and the teacher model for training respectively to obtain the corresponding student segmentation probability result map and teacher segmentation probability result map; the student segmentation probability result map and the teacher segmentation probability result map are then respectively overlaid on the original unlabeled MRI image to generate the corresponding student segmentation area and teacher segmentation area, which are passed to the discriminator for similarity comparison to calculate the consistency loss; wherein the teacher model updates its model parameters based on the weights of the student model using an exponential moving average strategy;
  • Step S4: obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • step S3 specifically includes:
  • Training is performed in the teacher model, and during training the teacher model updates its model parameters based on the weights of the student model using the exponential moving average strategy, so as to obtain a teacher segmentation probability result map;
  • the student segmentation area and the teacher segmentation area are passed to the discriminator for similarity comparison, the student multi-scale features and the teacher multi-scale features are extracted respectively, and the consistency loss is calculated according to the student multi-scale features and the teacher multi-scale features.
  • X_u is the original unlabeled MRI image;
  • S(X_u) is the student segmentation probability result map;
  • R(X_u) is the teacher segmentation probability result map;
  • f(·) is the hierarchical feature map extracted from the corresponding segmentation area;
  • h, w, d are the height, width and length of each image;
  • ℓ_mae(·) denotes the mean absolute error (MAE) function; K is the number of network layers in the discriminator;
  • f(x_i) is the feature vector output by the i-th layer.
  • Y_l is the gold standard for labeled images;
  • h, w, d are the height, width and length of each image;
  • C is the number of label categories;
  • c is one of the C label categories;
  • X_l is the labeled MRI image;
  • S(X_l) is the segmentation probability map.
  • the method further includes:
  • the self-training loss of the discriminator is calculated and the adversarial loss of the discriminator is obtained; the self-training loss of the discriminator and its adversarial loss are then combined with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and the semi-supervised image segmentation framework is optimized according to the updated total segmentation loss.
  • the embodiment of the present invention also provides a system for constructing a semi-supervised image segmentation framework, including an image segmentation framework construction unit, a supervised segmentation loss calculation unit, a consistency loss calculation unit, and an image segmentation framework optimization unit; wherein,
  • the image segmentation framework building unit is used to construct a semi-supervised image segmentation framework including a student model, a teacher model and a discriminator;
  • a supervised segmentation loss calculation unit is used to obtain the labeled MRI image and its corresponding gold standard, import the labeled MRI image as the first training set image into the student model for training to obtain a segmentation probability map, and further combine it with the gold standard to calculate the supervised segmentation loss;
  • the consistency loss calculation unit is used to obtain the original unlabeled MRI image and the noise unlabeled MRI image obtained by combining it with noise of a preset Gaussian distribution, so as to obtain a second training set image; the second training set image is imported into the student model and the teacher model for training respectively to obtain the corresponding student segmentation probability result map and teacher segmentation probability result map; the student segmentation probability result map and the teacher segmentation probability result map are then respectively overlaid on the original unlabeled MRI image to generate the corresponding student segmentation area and teacher segmentation area, which are passed to the discriminator for similarity comparison so as to calculate the consistency loss; wherein, during training, the teacher model updates its model parameters based on the weights of the student model using an exponential moving average strategy;
  • An image segmentation framework optimization unit configured to obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • the image segmentation framework re-optimization unit is used to calculate the self-training loss of the discriminator according to the student segmentation probability result map and the corresponding gold standard set, obtain the adversarial loss of the discriminator, and further combine the self-training loss of the discriminator and its adversarial loss with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimize the semi-supervised image segmentation framework according to the updated total segmentation loss.
  • the present invention uses the consistency mechanism based on multi-scale features to improve the mean teacher model, and incorporates voxel-level regularization information into the semi-supervised model, thereby further improving the mean teacher model and making it more suitable for image segmentation;
  • the present invention is deeply integrated with adversarial networks (such as discriminators for adversarial learning), which can achieve semi-supervised segmentation without additional image-level labels; the role of the adversarial network is not only to extract multi-scale features containing spatial context information, but also to measure the confidence of the segmentation probability map used to implement the self-training scheme;
  • the present invention establishes a general semi-supervised segmentation framework that can be used for various MRI images (medical images).
  • FIG. 1 is a flowchart of a method for constructing a semi-supervised image segmentation framework proposed by an embodiment of the present invention
  • FIG. 2 is an application scene diagram before preprocessing of MRI images of four modalities in a method for constructing a semi-supervised image segmentation framework proposed by an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a system for constructing a semi-supervised image segmentation framework proposed by an embodiment of the present invention.
  • a method for constructing a semi-supervised image segmentation framework proposed in an embodiment of the present invention includes the following steps:
  • Step S1: construct a semi-supervised image segmentation framework comprising a student model, a teacher model and a discriminator;
  • the constructed semi-supervised image segmentation framework is mainly composed of two modules: the mean teacher model and the adversarial network.
  • the framework deeply integrates the adversarial network into the improved mean teacher model; it mainly includes a mean teacher model formed by a student model S and a teacher model R, and an adversarial network formed by a discriminator. All of these models (including the discriminator) are CNN-based; in particular, the student and teacher models are based on the same segmentation network (such as U-Net).
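  • As a purely illustrative aid (not part of the patent disclosure), the following minimal PyTorch sketch shows one way such a student/teacher/discriminator trio could be assembled; the class names, layer sizes and the generic 3D U-Net factory are assumptions of this sketch, not elements specified by the invention.

```python
import copy
import torch
import torch.nn as nn

class Discriminator3D(nn.Module):
    """Small 3D CNN discriminator; its intermediate layers double as multi-scale feature extractors."""
    def __init__(self, in_channels=1, base=16):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv3d(in_channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv3d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv3d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
        ])
        self.head = nn.Conv3d(base * 4, 1, 3, padding=1)  # produces the confidence map

    def forward(self, x, return_features=False):
        features = []
        for layer in self.layers:
            x = layer(x)
            features.append(x)
        confidence = torch.sigmoid(self.head(x))
        return (confidence, features) if return_features else confidence

def build_framework(make_segmentation_net):
    """Student and teacher share the same segmentation architecture (e.g. a 3D U-Net)."""
    student = make_segmentation_net()
    teacher = copy.deepcopy(student)      # teacher starts as an exact copy of the student
    for p in teacher.parameters():
        p.requires_grad_(False)           # teacher is updated only by EMA, never by backprop
    discriminator = Discriminator3D()
    return student, teacher, discriminator
```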
  • Step S2: obtain a labeled MRI image and its corresponding gold standard, import the labeled MRI image into the student model as the first training set image for training to obtain a segmentation probability map, and further combine it with the gold standard to calculate the supervised segmentation loss;
  • h, w, d are the height, width and length of each image; C is the number of label categories; c is one of the C label categories.
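  • A hedged sketch of the supervised part of step S2: one common instantiation of such a loss is the voxel-wise cross-entropy between the student's output for the labeled image X_l and the gold standard Y_l. The patent's exact formula is given in its drawings; this is only a plausible reading of the variables listed above.

```python
import torch.nn.functional as F

def supervised_segmentation_loss(student_logits, gold_standard):
    """
    student_logits: (N, C, h, w, d) raw class scores from the student model for labeled MRI X_l
    gold_standard:  (N, h, w, d) integer labels Y_l, each voxel one of the C categories
    Returns a mean voxel-wise cross-entropy, one common choice for a supervised segmentation loss.
    """
    return F.cross_entropy(student_logits, gold_standard)
```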
  • Step S3: obtain the original unlabeled MRI image and the noise unlabeled MRI image obtained by combining it with noise of a preset Gaussian distribution, so as to obtain a second training set image, and import the second training set image into the student model and the teacher model for training respectively to obtain the corresponding student segmentation probability result map and teacher segmentation probability result map; the student segmentation probability result map and the teacher segmentation probability result map are then respectively overlaid on the original unlabeled MRI image to generate the corresponding student segmentation area and teacher segmentation area, which are passed to the discriminator for similarity comparison to calculate the consistency loss; wherein the teacher model updates its model parameters based on the weights of the student model using an exponential moving average strategy;
  • the specific process is as follows: the traditional mean teacher model has two losses, one being the segmentation loss and the other the consistency loss, which is usually calculated directly from the segmentation maps of the student model S and the teacher model R. Therefore, in order to overcome the inaccuracy caused by computing the consistency loss directly from the segmentation maps in the traditional mean teacher model, a consistency mechanism based on multi-scale features is used to improve the traditional mean teacher model and make it more suitable for image segmentation.
  • the process is as follows:
  • EMA denotes the exponential moving average;
  • f(·) is the hierarchical feature map extracted from the corresponding segmented area;
  • ℓ_mae(·) denotes the mean absolute error (MAE) function; K is the number of network layers in the discriminator A;
  • f(x_i) is the feature vector output by the i-th layer.
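  • For illustration, the exponential moving average (EMA) update of the teacher's parameters from the student's weights could be written as below; the decay value 0.99 is an assumed example, not a value prescribed by the patent.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """theta_teacher <- decay * theta_teacher + (1 - decay) * theta_student, applied after each training step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
    # buffers such as BatchNorm running statistics are typically copied directly
    for t_buf, s_buf in zip(teacher.buffers(), student.buffers()):
        t_buf.copy_(s_buf)
```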
  • the discriminator A for adversarial learning is used as another important component in the framework, and a consistency loss calculated based on multi-scale features is designed.
  • for the original unlabeled MRI image X_u and its corresponding noise unlabeled MRI image, the student model S and the teacher model R output the student segmentation probability result map S(X_u) and the teacher segmentation probability result map R(X_u), respectively. These are then overlaid on the original unlabeled MRI image X_u to obtain two sets of segmented regions in the MRI, which are generated by pixel-by-pixel multiplication of the input MRI and the segmentation probability map, namely the student segmentation area and the teacher segmentation area. In consistency training, the two obtained segmentation areas are encouraged to be similar, instead of only considering the consistency of the segmentation probability maps as in the traditional mean teacher model.
  • since CNNs can effectively learn image features at multiple scales, in order to better measure the consistency of the segmented regions, the hierarchical features of the segmented regions are extracted from the CNN-based discriminator A and concatenated together, and the multi-scale features corresponding to the student segmentation area and the teacher segmentation area are compared; their difference is regarded as the difference between the student segmentation area and the teacher segmentation area.
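  • The multi-scale consistency computation described above could look roughly as follows. This is a sketch built on the illustrative Discriminator3D shown earlier; the averaging over the K layers and the use of the mean absolute error are readings of the variable list above, not the patent's exact formula.

```python
import torch
import torch.nn.functional as F

def consistency_loss(discriminator, x_unlabeled, student_probs, teacher_probs):
    """
    x_unlabeled:   original unlabeled MRI X_u, shape (N, 1, h, w, d)
    student_probs: S(X_u), the student's segmentation probability map for X_u
    teacher_probs: R(X_u), the teacher's probability map for the noise-perturbed copy
    """
    student_area = x_unlabeled * student_probs   # pixel-by-pixel multiplication -> student segmentation area
    teacher_area = x_unlabeled * teacher_probs   # -> teacher segmentation area
    _, student_feats = discriminator(student_area, return_features=True)
    with torch.no_grad():                        # the teacher branch only provides a target
        _, teacher_feats = discriminator(teacher_area, return_features=True)
    # mean absolute error between hierarchical features, averaged over the K discriminator layers
    return sum(F.l1_loss(fs, ft) for fs, ft in zip(student_feats, teacher_feats)) / len(student_feats)
```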
  • Step S4: obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • the specific process is to calculate the total segmentation loss according to formula (3);
  • λ_con is a weighting coefficient used to balance the relative importance of the designed loss functions.
  • in addition to generating the above-mentioned multi-scale features for calculating the consistency loss, the discriminator A also outputs a confidence map for self-training.
  • This confidence map can be used to guide and constrain the target region so that the learned distribution is closer to the true distribution.
  • reliable confidence regions can be obtained to select high-confidence segmentation results and convert them into pseudo-labels for self-training. Therefore, a subset of valid segmentation results from the unlabeled MRI images X_u can be directly regarded as labels and added to the training set to further enrich the dataset.
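  • A hedged sketch of how the confidence map could drive self-training: voxels whose discriminator confidence exceeds a threshold are kept, the student's own high-confidence predictions are used as pseudo-labels, and a masked cross-entropy is computed. The threshold, the binary foreground assumption and the upsampling step are illustrative choices of this sketch, not values fixed by the patent.

```python
import torch
import torch.nn.functional as F

def self_training_loss(student_logits, student_probs, confidence_map, threshold=0.8):
    """
    student_logits: (N, C, h, w, d) raw student outputs for the unlabeled MRI X_u
    student_probs:  (N, 1, h, w, d) foreground probabilities derived from the student
    confidence_map: (N, 1, d', h', w') confidence output of the discriminator
    """
    # bring the (coarser) confidence map onto the voxel grid of the prediction
    confidence = F.interpolate(confidence_map, size=student_logits.shape[2:],
                               mode='trilinear', align_corners=False)
    pseudo_labels = (student_probs.squeeze(1) > 0.5).long()   # hard pseudo-labels from the prediction
    mask = (confidence.squeeze(1) > threshold).float()        # keep only reliable voxels
    per_voxel = F.cross_entropy(student_logits, pseudo_labels, reduction='none')
    return (per_voxel * mask).sum() / mask.sum().clamp(min=1.0)
```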
  • the discriminator A is also used to define the adversarial loss, which can further enhance the ability of the student model to fool the discriminator, as shown in Equation (5):
  • the adversarial loss can be applied to all training samples, as it depends only on the adversarial network, regardless of whether labels are available.
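  • As one possible reading of such an adversarial term (the exact form of Equation (5) is given in the patent drawings and is not reproduced here), a generator-style binary cross-entropy that pushes the discriminator's confidence on the student's segmentation area towards 1 could be used:

```python
import torch
import torch.nn.functional as F

def adversarial_loss(discriminator, student_segmentation_area):
    """Encourages the student's segmentation area to be judged as plausible by discriminator A."""
    confidence = discriminator(student_segmentation_area)   # confidence map with values in (0, 1)
    target = torch.ones_like(confidence)                    # "fool the discriminator" objective
    return F.binary_cross_entropy(confidence, target)
```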
  • the method further includes:
  • the self-training loss of the discriminator is calculated and the adversarial loss of the discriminator is obtained; the discriminator's self-training loss and adversarial loss are further combined with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and the semi-supervised image segmentation framework is optimized based on the updated total segmentation loss.
  • λ_con, λ_self and λ_adv are the corresponding weighting coefficients used to balance the relative importance of the designed loss functions.
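  • Combined, the updated total objective could be sketched as below; the numeric weights are placeholders for the λ coefficients above, chosen only for illustration.

```python
def updated_total_loss(seg_loss, con_loss, self_loss, adv_loss,
                       lambda_con=0.1, lambda_self=0.1, lambda_adv=0.01):
    """L_total = L_seg + lambda_con * L_con + lambda_self * L_self + lambda_adv * L_adv (illustrative weights)."""
    return seg_loss + lambda_con * con_loss + lambda_self * self_loss + lambda_adv * adv_loss
```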
  • FIG. 2 is an application scene diagram of brain MRI segmentation jointly trained by the mean teacher model and the adversarial network in a method for constructing a semi-supervised image segmentation framework according to an embodiment of the present invention.
  • a system for constructing a semi-supervised image segmentation framework includes an image segmentation framework construction unit 110, a supervised segmentation loss calculation unit 120, a consistency loss calculation unit 130, and an image segmentation framework optimization unit 140; wherein,
  • an image segmentation framework construction unit 110, configured to construct a semi-supervised image segmentation framework comprising a student model, a teacher model and a discriminator;
  • the supervised segmentation loss calculation unit 120 is used to obtain the labeled MRI image and its corresponding gold standard, import the labeled MRI image as the first training set image into the student model for training to obtain the segmentation probability map, and further combine it with the gold standard to calculate the supervised segmentation loss;
  • the consistency loss calculation unit 130 is configured to obtain the original unlabeled MRI image and the noise unlabeled MRI image obtained by combining it with noise of a preset Gaussian distribution, so as to obtain a second training set image; the second training set image is imported into the student model and the teacher model for training respectively to obtain the corresponding student segmentation probability result map and teacher segmentation probability result map; the student segmentation probability result map and the teacher segmentation probability result map are then respectively overlaid on the original unlabeled MRI image to generate the corresponding student segmentation area and teacher segmentation area, which are passed to the discriminator for similarity comparison to calculate the consistency loss; wherein, during training, the teacher model updates its model parameters based on the weights of the student model using an exponential moving average strategy;
  • the image segmentation framework optimization unit 140 is configured to obtain a total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimize the semi-supervised image segmentation framework according to the total segmentation loss.
  • the image segmentation framework re-optimization unit 150 is used to calculate the self-training loss of the discriminator according to the student segmentation probability result map and the corresponding gold standard set, obtain the adversarial loss of the discriminator, and further combine the discriminator's self-training loss and its adversarial loss with the supervised segmentation loss and the consistency loss to update the total segmentation loss, and optimize the semi-supervised image segmentation framework according to the updated total segmentation loss.
  • the present invention uses the consistency mechanism based on multi-scale features to improve the mean teacher model, and incorporates voxel-level regularization information into the semi-supervised model, thereby further improving the mean teacher model and making it more suitable for image segmentation;
  • the present invention is deeply integrated with adversarial networks (such as discriminators for adversarial learning), which can achieve semi-supervised segmentation without additional image-level labels; the role of the adversarial network is not only to extract multi-scale features containing spatial context information, but also to measure the confidence of the segmentation probability map used to implement the self-training scheme;
  • the present invention establishes a general semi-supervised segmentation framework that can be used for various MRI images (medical images).
  • each system unit included is only divided according to functional logic, but is not limited to the above division, as long as the corresponding function can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other, and are not intended to limit the protection scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Disclosed is a method for constructing a semi-supervised image segmentation framework, the method comprising the steps of: constructing a semi-supervised image segmentation framework that comprises a student model, a teacher model and a discriminator (S1); acquiring a labeled MRI image and its corresponding gold standard, so as to calculate a supervised segmentation loss; acquiring an original unlabeled MRI image and a noise unlabeled MRI image obtained by combining the original unlabeled MRI image with noise of a preset Gaussian distribution, so as to obtain a corresponding student segmentation probability result map and a corresponding teacher segmentation probability result map, then respectively overlaying these on the original unlabeled MRI image to generate a student segmentation area and a teacher segmentation area, and passing them together to the discriminator for similarity comparison so as to calculate a consistency loss; and obtaining the total segmentation loss according to the supervised segmentation loss and the consistency loss, and optimizing the semi-supervised image segmentation framework according to the total segmentation loss (S4). By implementing the method, a general semi-supervised segmentation framework that can be used for 3D medical images is established by improving a mean teacher model, and no additional image-level labels are required.
PCT/CN2020/113496 2020-08-31 2020-09-04 Procédé et système de construction de cadre de segmentation d'image semi-supervisée WO2022041307A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010892241.7A CN112150478B (zh) 2020-08-31 2020-08-31 一种构建半监督图像分割框架的方法及系统
CN202010892241.7 2020-08-31

Publications (1)

Publication Number Publication Date
WO2022041307A1 true WO2022041307A1 (fr) 2022-03-03

Family

ID=73890865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113496 WO2022041307A1 (fr) 2020-08-31 2020-09-04 Procédé et système de construction de cadre de segmentation d'image semi-supervisée

Country Status (2)

Country Link
CN (1) CN112150478B (fr)
WO (1) WO2022041307A1 (fr)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332135A (zh) * 2022-03-10 2022-04-12 之江实验室 一种基于双模型交互学习的半监督医学图像分割方法及装置
CN114549842A (zh) * 2022-04-22 2022-05-27 山东建筑大学 基于不确定性知识域自适应的半监督图像分割方法及系统
CN114693753A (zh) * 2022-03-24 2022-07-01 北京理工大学 基于纹理保持约束的三维超声弹性配准方法及装置
CN114742799A (zh) * 2022-04-18 2022-07-12 华中科技大学 基于自监督异构网络的工业场景未知类型缺陷分割方法
CN114882227A (zh) * 2022-07-07 2022-08-09 南方医科大学第三附属医院(广东省骨科研究院) 一种人体组织图像分割方法及相关设备
CN114882325A (zh) * 2022-07-12 2022-08-09 之江实验室 基于二阶段物体检测器的半监督物检测及训练方法、装置
CN114897914A (zh) * 2022-03-16 2022-08-12 华东师范大学 基于对抗训练的半监督ct图像分割方法
CN114926471A (zh) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 一种图像分割方法、装置、电子设备及存储介质
CN115496732A (zh) * 2022-09-26 2022-12-20 电子科技大学 一种半监督心脏语义分割算法
CN116188876A (zh) * 2023-03-29 2023-05-30 上海锡鼎智能科技有限公司 基于信息混合的半监督学习方法及半监督学习装置
CN116258861A (zh) * 2023-03-20 2023-06-13 南通锡鼎智能科技有限公司 基于多标签学习的半监督语义分割方法以及分割装置
CN116468746A (zh) * 2023-03-27 2023-07-21 华东师范大学 一种双向复制粘贴的半监督医学图像分割方法
CN116645507A (zh) * 2023-05-18 2023-08-25 丽水瑞联医疗科技有限公司 一种基于语义分割的胎盘图像处理方法及系统
CN116664602A (zh) * 2023-07-26 2023-08-29 中南大学 基于少样本学习的octa血管分割方法及成像方法
CN116778239A (zh) * 2023-06-16 2023-09-19 酷哇科技有限公司 面向实例分割模型的半监督训练方法及设备
CN117173401A (zh) * 2022-12-06 2023-12-05 南华大学 基于交叉指导和特征级一致性双正则化的半监督医学图像分割方法及系统
CN117333874A (zh) * 2023-10-27 2024-01-02 江苏新希望科技有限公司 一种图像分割方法、系统、存储介质和装置
CN117593648A (zh) * 2024-01-17 2024-02-23 中国人民解放军海军航空大学 基于弱监督学习的遥感目标建筑物提取方法
CN117765532A (zh) * 2024-02-22 2024-03-26 中国科学院宁波材料技术与工程研究所 基于共聚焦显微图像的角膜朗格汉斯细胞分割方法和装置

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734723B (zh) * 2021-01-08 2023-06-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置
CN112749801A (zh) * 2021-01-22 2021-05-04 上海商汤智能科技有限公司 神经网络训练和图像处理方法及装置
CN113129309B (zh) * 2021-03-04 2023-04-07 同济大学 基于对象上下文一致性约束的医学图像半监督分割系统
CN113256646B (zh) * 2021-04-13 2024-03-22 浙江工业大学 一种基于半监督学习的脑血管图像分割方法
CN113239924B (zh) * 2021-05-21 2022-04-26 上海交通大学 一种基于迁移学习的弱监督目标检测方法及系统
CN113256639A (zh) * 2021-05-27 2021-08-13 燕山大学 基于半监督平均教师模型的冠脉造影血管图像分割方法
CN113344896B (zh) * 2021-06-24 2023-01-17 鹏城实验室 一种胸部ct图像病灶分割模型的训练方法及系统
CN113763406B (zh) * 2021-07-28 2024-04-26 华中师范大学 基于半监督学习的婴儿脑mri分割方法
CN113743474B (zh) * 2021-08-10 2023-09-26 扬州大学 基于协同半监督卷积神经网络的数字图片分类方法与系统
CN113793304A (zh) * 2021-08-23 2021-12-14 天津大学 一种面向肺癌靶区及危及器官智能分割方法
CN117523327A (zh) * 2022-07-29 2024-02-06 马上消费金融股份有限公司 图像处理方法、装置、设备和介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087303A (zh) * 2018-08-15 2018-12-25 中山大学 基于迁移学习提升语义分割模型效果的框架
CN111275713A (zh) * 2020-02-03 2020-06-12 武汉大学 一种基于对抗自集成网络的跨域语义分割方法
CN111401406A (zh) * 2020-02-21 2020-07-10 华为技术有限公司 一种神经网络训练方法、视频帧处理方法以及相关设备
CN111489365A (zh) * 2020-04-10 2020-08-04 上海商汤临港智能科技有限公司 神经网络的训练方法、图像处理方法及装置
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091333A (zh) * 2014-07-01 2014-10-08 黄河科技学院 基于区域可信融合的多类无监督彩色纹理图像分割方法
CN108764462A (zh) * 2018-05-29 2018-11-06 成都视观天下科技有限公司 一种基于知识蒸馏的卷积神经网络优化方法
CN109949317B (zh) * 2019-03-06 2020-12-11 东南大学 基于逐步对抗学习的半监督图像实例分割方法
CN109978850B (zh) * 2019-03-21 2020-12-22 华南理工大学 一种多模态医学影像半监督深度学习分割系统
CN110059740A (zh) * 2019-04-12 2019-07-26 杭州电子科技大学 一种针对嵌入式移动端的深度学习语义分割模型压缩方法
CN110059698B (zh) * 2019-04-30 2022-12-23 福州大学 用于街景理解的基于边缘稠密重建的语义分割方法及系统
CN110428426A (zh) * 2019-07-02 2019-11-08 温州医科大学 一种基于改进随机森林算法的mri图像自动分割方法
CN110503654B (zh) * 2019-08-01 2022-04-26 中国科学院深圳先进技术研究院 一种基于生成对抗网络的医学图像分割方法、系统及电子设备
CN110490881A (zh) * 2019-08-19 2019-11-22 腾讯科技(深圳)有限公司 医学影像分割方法、装置、计算机设备及可读存储介质
CN111047594B (zh) * 2019-11-06 2023-04-07 安徽医科大学 肿瘤mri弱监督学习分析建模方法及其模型
CN111080645B (zh) * 2019-11-12 2023-08-15 中国矿业大学 基于生成式对抗网络的遥感图像半监督语义分割方法
CN111062951B (zh) * 2019-12-11 2022-03-25 华中科技大学 一种基于语义分割类内特征差异性的知识蒸馏方法
CN111369618A (zh) * 2020-02-20 2020-07-03 清华大学 基于压缩采样的rf信号人体姿态估计方法及装置
CN111402278B (zh) * 2020-02-21 2023-10-27 华为云计算技术有限公司 分割模型训练方法、图像标注方法及相关装置
CN111369535B (zh) * 2020-03-05 2023-04-07 笑纳科技(苏州)有限公司 一种细胞检测方法
CN111507993B (zh) * 2020-03-18 2023-05-19 南方电网科学研究院有限责任公司 一种基于生成对抗网络的图像分割方法、装置及存储介质
CN111507227B (zh) * 2020-04-10 2023-04-18 南京汉韬科技有限公司 基于深度学习的多学生个体分割及状态自主识别方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087303A (zh) * 2018-08-15 2018-12-25 中山大学 基于迁移学习提升语义分割模型效果的框架
CN111275713A (zh) * 2020-02-03 2020-06-12 武汉大学 一种基于对抗自集成网络的跨域语义分割方法
CN111401406A (zh) * 2020-02-21 2020-07-10 华为技术有限公司 一种神经网络训练方法、视频帧处理方法以及相关设备
CN111489365A (zh) * 2020-04-10 2020-08-04 上海商汤临港智能科技有限公司 神经网络的训练方法、图像处理方法及装置
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332135A (zh) * 2022-03-10 2022-04-12 之江实验室 一种基于双模型交互学习的半监督医学图像分割方法及装置
CN114897914A (zh) * 2022-03-16 2022-08-12 华东师范大学 基于对抗训练的半监督ct图像分割方法
CN114693753B (zh) * 2022-03-24 2024-05-03 北京理工大学 基于纹理保持约束的三维超声弹性配准方法及装置
CN114693753A (zh) * 2022-03-24 2022-07-01 北京理工大学 基于纹理保持约束的三维超声弹性配准方法及装置
CN114742799A (zh) * 2022-04-18 2022-07-12 华中科技大学 基于自监督异构网络的工业场景未知类型缺陷分割方法
CN114742799B (zh) * 2022-04-18 2024-04-26 华中科技大学 基于自监督异构网络的工业场景未知类型缺陷分割方法
CN114549842A (zh) * 2022-04-22 2022-05-27 山东建筑大学 基于不确定性知识域自适应的半监督图像分割方法及系统
CN114926471A (zh) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 一种图像分割方法、装置、电子设备及存储介质
CN114926471B (zh) * 2022-05-24 2023-03-28 北京医准智能科技有限公司 一种图像分割方法、装置、电子设备及存储介质
CN114882227A (zh) * 2022-07-07 2022-08-09 南方医科大学第三附属医院(广东省骨科研究院) 一种人体组织图像分割方法及相关设备
CN114882227B (zh) * 2022-07-07 2022-11-04 南方医科大学第三附属医院(广东省骨科研究院) 一种人体组织图像分割方法及相关设备
CN114882325A (zh) * 2022-07-12 2022-08-09 之江实验室 基于二阶段物体检测器的半监督物检测及训练方法、装置
CN114882325B (zh) * 2022-07-12 2022-12-02 之江实验室 基于二阶段物体检测器的半监督物检测及训练方法、装置
CN115496732B (zh) * 2022-09-26 2024-03-15 电子科技大学 一种半监督心脏语义分割算法
CN115496732A (zh) * 2022-09-26 2022-12-20 电子科技大学 一种半监督心脏语义分割算法
CN117173401B (zh) * 2022-12-06 2024-05-03 南华大学 基于交叉指导和特征级一致性双正则化的半监督医学图像分割方法及系统
CN117173401A (zh) * 2022-12-06 2023-12-05 南华大学 基于交叉指导和特征级一致性双正则化的半监督医学图像分割方法及系统
CN116258861A (zh) * 2023-03-20 2023-06-13 南通锡鼎智能科技有限公司 基于多标签学习的半监督语义分割方法以及分割装置
CN116258861B (zh) * 2023-03-20 2023-09-22 南通锡鼎智能科技有限公司 基于多标签学习的半监督语义分割方法以及分割装置
CN116468746B (zh) * 2023-03-27 2023-12-26 华东师范大学 一种双向复制粘贴的半监督医学图像分割方法
CN116468746A (zh) * 2023-03-27 2023-07-21 华东师范大学 一种双向复制粘贴的半监督医学图像分割方法
CN116188876B (zh) * 2023-03-29 2024-04-19 上海锡鼎智能科技有限公司 基于信息混合的半监督学习方法及半监督学习装置
CN116188876A (zh) * 2023-03-29 2023-05-30 上海锡鼎智能科技有限公司 基于信息混合的半监督学习方法及半监督学习装置
CN116645507A (zh) * 2023-05-18 2023-08-25 丽水瑞联医疗科技有限公司 一种基于语义分割的胎盘图像处理方法及系统
CN116778239A (zh) * 2023-06-16 2023-09-19 酷哇科技有限公司 面向实例分割模型的半监督训练方法及设备
CN116778239B (zh) * 2023-06-16 2024-06-11 酷哇科技有限公司 面向实例分割模型的半监督训练方法及设备
CN116664602B (zh) * 2023-07-26 2023-11-03 中南大学 基于少样本学习的octa血管分割方法及成像方法
CN116664602A (zh) * 2023-07-26 2023-08-29 中南大学 基于少样本学习的octa血管分割方法及成像方法
CN117333874A (zh) * 2023-10-27 2024-01-02 江苏新希望科技有限公司 一种图像分割方法、系统、存储介质和装置
CN117593648B (zh) * 2024-01-17 2024-04-05 中国人民解放军海军航空大学 基于弱监督学习的遥感目标建筑物提取方法
CN117593648A (zh) * 2024-01-17 2024-02-23 中国人民解放军海军航空大学 基于弱监督学习的遥感目标建筑物提取方法
CN117765532A (zh) * 2024-02-22 2024-03-26 中国科学院宁波材料技术与工程研究所 基于共聚焦显微图像的角膜朗格汉斯细胞分割方法和装置
CN117765532B (zh) * 2024-02-22 2024-05-31 中国科学院宁波材料技术与工程研究所 基于共聚焦显微图像的角膜朗格汉斯细胞分割方法和装置

Also Published As

Publication number Publication date
CN112150478B (zh) 2021-06-22
CN112150478A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2022041307A1 (fr) Procédé et système de construction de cadre de segmentation d'image semi-supervisée
WO2020215984A1 (fr) Procédé de détection d'images médicales basée sur un apprentissage profond, et dispositif associé
CN109493308B (zh) 基于条件多判别生成对抗网络的医疗图像合成与分类方法
Tang et al. A two-stage approach for automatic liver segmentation with Faster R-CNN and DeepLab
Pu et al. Fetal cardiac cycle detection in multi-resource echocardiograms using hybrid classification framework
Kisilev et al. Medical image description using multi-task-loss CNN
Solovyev et al. 3D convolutional neural networks for stalled brain capillary detection
CN111325750B (zh) 一种基于多尺度融合u型链神经网络的医学图像分割方法
US8170303B2 (en) Automatic cardiac view classification of echocardiography
CN108898606A (zh) 医学图像的自动分割方法、系统、设备及存储介质
CN111932529B (zh) 一种图像分类分割方法、装置及系统
Huang et al. Omni-supervised learning: scaling up to large unlabelled medical datasets
CN114897914B (zh) 基于对抗训练的半监督ct图像分割方法
Li et al. Recurrent aggregation learning for multi-view echocardiographic sequences segmentation
Salim et al. Ridge regression neural network for pediatric bone age assessment
Cui et al. Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation
Venturini et al. Uncertainty estimates as data selection criteria to boost omni-supervised learning
CN110992352A (zh) 基于卷积神经网络的婴儿头围ct图像自动测量方法
Sirjani et al. Automatic cardiac evaluations using a deep video object segmentation network
Wang et al. Medical matting: a new perspective on medical segmentation with uncertainty
Li et al. Automatic annotation algorithm of medical radiological images using convolutional neural network
CN113643297B (zh) 一种基于神经网络的计算机辅助牙龄分析方法
Wang et al. Deep learning based fetal middle cerebral artery segmentation in large-scale ultrasound images
Wang et al. Automatic measurement of fetal head circumference using a novel GCN-assisted deep convolutional network
CN117437423A (zh) 基于sam协同学习和跨层特征聚合增强的弱监督医学图像分割方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950976

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20950976

Country of ref document: EP

Kind code of ref document: A1