CN117830324A - 3D medical image segmentation method based on multi-dimensional and global local combination - Google Patents

Info

Publication number
CN117830324A
Authority
CN
China
Prior art keywords
dimensional
segmentation
medical image
net
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311852915.0A
Other languages
Chinese (zh)
Other versions
CN117830324B (en)
Inventor
何志权
欧阳雅婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202311852915.0A priority Critical patent/CN117830324B/en
Publication of CN117830324A publication Critical patent/CN117830324A/en
Application granted granted Critical
Publication of CN117830324B publication Critical patent/CN117830324B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a 3D medical image segmentation method based on multi-dimensional and global-local combination, which belongs to the technical field of medical image processing and comprises the following steps: step 1, processing an unlabeled image and a labeled image through two parallel tasks to obtain the corresponding loss functions; step 2, summing the loss functions obtained in step 1 to obtain a total loss function; step 3, updating the student model of the 3D medical image segmentation network by stochastic gradient descent on the total loss function; and step 4, repeating steps 1 to 3 until the total loss function no longer decreases, thereby optimizing the 3D medical image segmentation. The invention provides a 3D medical image segmentation method based on multi-dimensional and global-local combination that yields accurate, fine segmentation and reliable prediction.

Description

3D medical image segmentation method based on multi-dimensional and global local combination
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a 3D medical image segmentation method based on multi-dimensional and global local combination.
Background
With the rapid development of deep learning, many image segmentation methods have shown their effectiveness in facilitating quantitative analysis of medical images. Fully supervised image segmentation requires large amounts of well-annotated data to ensure satisfactory performance, while manually annotating the data is very expensive and time-consuming. Semi-supervised segmentation achieves accurate segmentation using a small amount of labeled data together with a large amount of unlabeled data, and has important application value in medical image processing, disease diagnosis, and treatment planning.
Existing semi-supervised medical image segmentation methods greatly reduce cost and manual effort compared with fully supervised methods, but still have shortcomings. 1) In the three-dimensional field, most known image segmentation networks use a complete three-dimensional image as the network input; however, feature extraction in three-dimensional space mainly captures global feature information, while local feature differences are relatively weak in three-dimensional space, so the accuracy of three-dimensional segmentation is limited. 2) Segmentation based on the entire image may not be fine enough for thin branches and edges. 3) Semi-supervised segmentation commonly uses teacher-student models, but the teacher network's predictions on unlabeled data may be unreliable and noisy. Therefore, there is a need for a 3D medical image segmentation method based on multi-dimensional and global-local combination that solves these problems.
Disclosure of Invention
The invention aims to provide a 3D medical image segmentation method based on multi-dimensional and global-local combination, which solves the problems in the prior art of low three-dimensional segmentation accuracy, imprecise segmentation of thin branches and edges, and unreliable, noisy predictions.
In order to achieve the above object, the present invention provides a 3D medical image segmentation method based on multi-dimensional and global-local combination, comprising the steps of:
step 1, processing an unlabeled image and a labeled image through two parallel tasks to obtain the corresponding loss functions;
step 2, summing the loss functions obtained in step 1 to obtain a total loss function;
step 3, updating the student model of the 3D medical image segmentation network by stochastic gradient descent on the total loss function;
and step 4, repeating steps 1 to 3 until the total loss function no longer decreases, thereby optimizing the 3D medical image segmentation.
Preferably, the two parallel tasks in step 1 comprise a multi-dimensional joint segmentation task and an image block contrastive learning task.
Preferably, the specific process of the multi-dimensional joint segmentation task is as follows:
A1, two unlabeled images are input into the 3D U-Net teacher model, and reliable pseudo labels are obtained through the multi-level confidence module; the pseudo labels and the real labels are bidirectionally copy-pasted to obtain the mixed labels Y_out and Y_in;
A2, two labeled images and two unlabeled images are bidirectionally copy-pasted to obtain X_out and X_in, which are input into the 3D U-Net student model to produce the outputs Q_out and Q_in;
A3, X_out and X_in are input into the two-dimensional consistency learning module to obtain the corresponding two-dimensional segmentation outputs;
A4, the two-dimensional outputs are fused with Q_out and Q_in respectively through the fusion module to obtain the fused predictions;
A5, the fused predictions are compared with Y_out and Y_in respectively to compute the two segmentation loss functions L_out and L_in.
Preferably, the specific process of the image block contrastive learning task is as follows:
B1, X_out and X_in, obtained by bidirectional copy-paste of two labeled images and two unlabeled images, are divided into blocks;
B2, the obtained small blocks are input into the 3D U-Net teacher model and the 3D U-Net student model respectively to obtain their outputs;
B3, the block contrastive learning loss functions are calculated: the outputs of the same block from the 3D U-Net teacher model and the 3D U-Net student model form a positive pair, while the outputs of different blocks form negative pairs.
Preferably, the calculation expression of the total loss function in step 2 is as follows:
L_all = L_out + L_in + L^cl_out + L^cl_in
wherein L_all represents the total loss function; L_in and L_out represent the segmentation losses against the mixed labels; L^cl_in and L^cl_out represent the block contrastive learning losses; M is the mask used in bidirectional copy-paste; Q'_out and Q'_in represent the fusion of the three-dimensional and two-dimensional segmentation results; Y_out and Y_in represent the mixed labels obtained after bidirectional copy-paste of the pseudo labels and the real labels; L_seg is a linear combination of the Dice loss and the cross-entropy loss, with Dice = 2|X∩Y| / (|X| + |Y|), where |X| and |Y| respectively denote the real label and the predicted segmentation mask; N represents the number of samples, m the number of image blocks, and d the embedding distance, computed from the per-pixel outputs of X_out and X_in under the student model and the teacher model respectively.
Preferably, the specific process by which the two unlabeled images in A1 are input into the 3D U-Net teacher model and reliable pseudo labels are obtained through the multi-level confidence module is as follows: the 3D U-Net teacher model comprises an encoder and a decoder, the decoder comprising multiple layers of networks; bilinear interpolation is used to resize the outputs of three of the layers to the same spatial size, and channel pooling is used to equalize the number of channels of each layer's output; predictions are obtained using Monte Carlo Dropout, and since entropy has a fixed numerical range, the predictive entropy is used to measure the multi-scale uncertainty u; u is then used as an adaptive weight, so that pseudo-label predictions with high accuracy receive a large weight, yielding reliable pseudo labels.
Preferably, the two-dimensional consistency learning module in A3 performs a dimension-reduction operation by taking two-dimensional slices of the three-dimensional input image frame by frame and uses each two-dimensional sub-image as input to a two-dimensional segmentation network, which captures finer local differences at the slice level.
Therefore, the invention adopts this 3D medical image segmentation method based on multi-dimensional and global-local combination. To encourage unlabeled data to learn comprehensive semantics from labeled data, the scheme designs a bidirectional copy-paste model based on a student model and a teacher model, and further promotes distribution alignment through a consistency learning strategy over labeled and unlabeled data, improving segmentation performance. To address the difficulty of extracting local feature differences in three-dimensional space, the scheme designs a multi-dimensional joint network that extracts richer feature information from both two and three dimensions. To address the potential coarseness of segmentation based on the whole image, the scheme designs a global-local joint strategy of image block contrastive learning, using contrastive learning over image blocks to extract finer local feature details. To address the potential unreliability of the pseudo labels output by the teacher model, the scheme designs a multi-level confidence module that selects high-certainty pseudo labels, bidirectionally copy-pastes them with the real labels to form mixed labels, and uses the mixed labels to supervise the student model's output on the input images.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a general flow chart of the 3D medical image segmentation method based on multi-dimensional and global-local combination of the present invention;
FIG. 2 is a block diagram of a two-dimensional consistency learning module of the present invention;
FIG. 3 is a block diagram of a fusion module of the present invention;
FIG. 4 is a block diagram of image block contrast learning of the present invention;
FIG. 5 is a block diagram of a multi-level confidence module of the present invention.
Detailed Description
The following detailed description of the embodiments of the invention, provided in the accompanying drawings, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIGS. 1-5, a 3D medical image segmentation method based on multi-dimensional and global-local combination includes the following steps:
step 1, processing an unlabeled image and a labeled image through two parallel tasks to obtain the corresponding loss functions, wherein the two parallel tasks comprise a multi-dimensional joint segmentation task and an image block contrastive learning task;
the specific process of the multidimensional joint segmentation task is as follows:
A1, two unlabeled images are input into the 3D U-Net teacher model, and reliable pseudo labels are obtained through the multi-level confidence module; the pseudo labels and the real labels are bidirectionally copy-pasted to obtain the mixed labels Y_out and Y_in. The specific process of obtaining the reliable pseudo labels is as follows: the 3D U-Net teacher model comprises an encoder and a decoder, the decoder comprising multiple layers of networks; bilinear interpolation is used to resize the outputs of three of the layers to the same spatial size, and channel pooling is used to equalize the number of channels of each layer's output; predictions are obtained using Monte Carlo Dropout, and since entropy has a fixed numerical range, the predictive entropy is used to measure the multi-scale uncertainty u; u is then used as an adaptive weight, so that pseudo-label predictions with high accuracy receive a large weight, yielding reliable pseudo labels.
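The adaptive entropy weighting above can be sketched in a few lines. The normalisation of the entropy to its fixed range [0, log C] and the weight mapping w = 1 − u/u_max are assumptions: the text only states that entropy has a fixed numerical range and that accurate predictions receive a large weight.

```python
import math

def predictive_entropy(prob_samples):
    """Mean softmax probability over T Monte Carlo Dropout passes,
    then the entropy of that mean -- a per-voxel uncertainty score."""
    t = len(prob_samples)
    n_classes = len(prob_samples[0])
    mean_p = [sum(s[c] for s in prob_samples) / t for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean_p if p > 0)

def confidence_weight(prob_samples, n_classes=2):
    """Entropy lies in [0, log C]; normalise it so that a low-uncertainty
    voxel gets a weight near 1 (large influence on the pseudo label)
    and a high-uncertainty voxel a weight near 0."""
    u = predictive_entropy(prob_samples)
    u_max = math.log(n_classes)
    return 1.0 - u / u_max

# Confident voxel: every stochastic pass agrees on the foreground class.
confident = [[0.95, 0.05], [0.97, 0.03], [0.96, 0.04]]
# Uncertain voxel: the passes disagree.
uncertain = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
assert confidence_weight(confident) > confidence_weight(uncertain)
```

In the method itself this weight would modulate each voxel's contribution to the pseudo label across the decoder's multi-scale outputs.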
A2, two labeled images and two unlabeled images are bidirectionally copy-pasted to obtain X_out and X_in, which are input into the 3D U-Net student model to produce the outputs Q_out and Q_in;
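A minimal illustration of the bidirectional copy-paste used to build X_out and X_in, on toy one-dimensional "volumes"; the mask shape and the in/out naming convention are assumptions made for the sketch:

```python
def blend(fg, bg, mask):
    """Paste the masked region of `fg` onto `bg`, element-wise."""
    return [f if m else b for f, b, m in zip(fg, bg, mask)]

# Toy 1-D "volumes" and a centre-crop mask (shapes are hypothetical).
x_lab, x_unl = [1, 1, 1, 1], [9, 9, 9, 9]
mask = [0, 1, 1, 0]

# "Inward": labeled foreground pasted onto an unlabeled background;
# "outward": the reverse -- the two directions of the copy-paste.
x_in = blend(x_lab, x_unl, mask)
x_out = blend(x_unl, x_lab, mask)
assert x_in == [9, 1, 1, 9] and x_out == [1, 9, 9, 1]

# The mixed labels are built the same way from the real labels and
# the teacher's pseudo labels.
y_lab, y_pseudo = [1, 1, 0, 0], [0, 1, 1, 0]
y_in = blend(y_lab, y_pseudo, mask)
assert y_in == [0, 1, 0, 0]
```

The same mask that mixes the images mixes the labels, so each voxel of the mixed input is supervised by the label of the image it came from.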
A3, X_out and X_in are input into the two-dimensional consistency learning module to obtain the corresponding two-dimensional segmentation outputs. The two-dimensional consistency learning module performs a dimension-reduction operation by taking two-dimensional slices of the three-dimensional input image frame by frame and uses each two-dimensional sub-image as input to a two-dimensional segmentation network, which captures finer local differences at the slice level;
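The frame-by-frame dimension reduction can be sketched on a toy volume; the "+10" stand-in for the 2-D segmentation network is purely illustrative:

```python
def slices_along_depth(volume):
    """Reduce a 3-D volume (depth x H x W nested lists) to its 2-D
    slices, taken frame by frame; each slice then feeds the 2-D
    segmentation network, which sees finer local detail per layer."""
    return [frame for frame in volume]

def stack(slices):
    """Re-stack the per-slice 2-D predictions into a 3-D prediction."""
    return list(slices)

vol = [[[1, 2], [3, 4]],   # slice 0
       [[5, 6], [7, 8]]]   # slice 1
preds = [[[v + 10 for v in row] for row in s]   # stand-in 2-D network
         for s in slices_along_depth(vol)]
assert stack(preds)[1][0][0] == 15
```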
A4, the two-dimensional outputs are fused with Q_out and Q_in respectively through the fusion module to obtain the fused predictions. The fusion module fuses the three-dimensional model output with the two-dimensional model output and applies a weight regulator: the initial weights of the fused branches are α=0.4, β=0.3 and γ=0.3, and the three weight parameters are then adjusted automatically by back-propagation, updating them from the gradient of the loss function with respect to each weight parameter using optimization algorithms such as gradient descent.
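A sketch of the weighted fusion with the stated initial weights α=0.4, β=0.3, γ=0.3. Finite differences stand in for back-propagation, and the three toy branch outputs and the mean-squared loss are assumptions of the sketch, not the patent's exact formulation:

```python
def fuse(weights, branches):
    """Weighted sum of three branch predictions, element-wise."""
    a, b, g = weights
    return [a * x + b * y + g * z for x, y, z in zip(*branches)]

def mse(pred, tgt):
    return sum((p - t) ** 2 for p, t in zip(pred, tgt)) / len(pred)

# Initial weights from the description: alpha=0.4, beta=0.3, gamma=0.3.
w = [0.4, 0.3, 0.3]
branches = ([0.9, 0.1], [0.8, 0.2], [0.7, 0.3])  # toy per-voxel scores
target = [1.0, 0.0]

# One gradient-descent step on the weights (finite differences stand in
# for back-propagation in this sketch).
lr, eps = 0.1, 1e-6
base = mse(fuse(w, branches), target)
grads = []
for i in range(3):
    w2 = list(w)
    w2[i] += eps
    grads.append((mse(fuse(w2, branches), target) - base) / eps)
w = [wi - lr * gi for wi, gi in zip(w, grads)]
assert mse(fuse(w, branches), target) < base  # loss decreased
```

In training, the weight update would simply ride along with the network's back-propagation pass rather than being computed separately.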
A5, the fused predictions are compared with Y_out and Y_in respectively to compute the two segmentation loss functions L_out and L_in.
The specific process of the image block contrastive learning task is as follows:
B1, X_out and X_in, obtained by bidirectional copy-paste of two labeled images and two unlabeled images, are divided into blocks;
B2, the obtained small blocks are input into the 3D U-Net teacher model and the 3D U-Net student model respectively to obtain their outputs;
B3, the block contrastive learning loss functions are calculated: the outputs of the same block from the 3D U-Net teacher model and the 3D U-Net student model form a positive pair, while the outputs of different blocks form negative pairs.
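The positive/negative pairing can be illustrated with an InfoNCE-style loss. The patent does not give the exact contrastive formula, so this loss form, the temperature, and the toy embeddings are all assumptions:

```python
import math

def nce_loss(anchor, candidates, pos_idx, tau=0.5):
    """InfoNCE-style patch contrastive loss: the teacher and student
    outputs of the SAME patch form the positive pair; outputs of
    different patches act as negatives."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    sims = [math.exp(dot(anchor, c) / tau) for c in candidates]
    return -math.log(sims[pos_idx] / sum(sims))

student_patch = [1.0, 0.0]   # student embedding of patch k
teacher_same = [0.9, 0.1]    # teacher embedding of the same patch k
teacher_other = [0.1, 0.9]   # teacher embedding of a different patch
loss_good = nce_loss(student_patch, [teacher_same, teacher_other], 0)
loss_bad = nce_loss(student_patch, [teacher_other, teacher_same], 0)
assert loss_good < loss_bad  # matching patches should align
```

Minimising such a loss pulls the two models' views of the same patch together while pushing different patches apart, which is what extracts the finer local detail.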
Step 2, summing the loss functions obtained in the step 1 to obtain a total loss function; the specific calculation expression is as follows:
wherein L is all Represents the total loss function, L in And L out The segmentation loss between the representation and the label,and->Representing block contrast learning penalty, M is a mask, < ->And->Representing a fusion result of the three-dimensional segmentation result and the two-dimensional segmentation result, < >>Y out And Y in Representing a mixed label obtained after the two-way copy pasting of a pseudo label and a real label, L seg Is a linear combination of the Dice loss and the cross entropy loss, and the Dice loss calculation formula is +.>Wherein |X| and |Y| respectively represent the real label and the prediction mask of the segmentation, N represents the number of samples, m represents the number of image blocks, d is the embedding distance, and the calculation formula is X represents out Results of each pixel of (a) output by the student model, a->X represents out Results of the respective pixels output by the teacher model, and (2)>X represents in Results of each pixel of (a) output by the student model, a->X represents in Results of each pixel of (3) output by the teacher model.
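The linear combination L_seg of the Dice and cross-entropy losses might look as follows on soft one-dimensional masks; the 0.5/0.5 mixing coefficient is an assumption, since the text only says "linear combination":

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice = 1 - 2|X ∩ Y| / (|X| + |Y|) on soft masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def ce_loss(pred, target, eps=1e-7):
    """Per-voxel binary cross-entropy, averaged."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def seg_loss(pred, target, lam=0.5):
    """L_seg as an assumed 50/50 linear combination of Dice and CE."""
    return lam * dice_loss(pred, target) + (1 - lam) * ce_loss(pred, target)

good = [0.9, 0.9, 0.1, 0.1]  # prediction close to the label
bad = [0.1, 0.1, 0.9, 0.9]   # prediction inverted
gt = [1, 1, 0, 0]
assert seg_loss(good, gt) < seg_loss(bad, gt)
```

Dice handles the class imbalance typical of small anatomical structures, while the cross-entropy term keeps per-voxel gradients well behaved.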
Step 3, updating a student model of the 3D medical image segmentation network by using random gradient descent of the total loss function;
and step 4, repeating steps 1 to 3 until the total loss function no longer decreases, thereby optimizing the 3D medical image segmentation.
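Steps 3-4 amount to gradient descent with a stop-when-the-loss-plateaus criterion, which can be sketched on a one-parameter toy loss (the quadratic, the learning rate, and the plateau check are illustrative stand-ins for the real network and L_all):

```python
def sgd_until_converged(grad, w0, lr=0.1):
    """Sketch of steps 3-4: update the student parameter by gradient
    descent on the total loss, stopping once the loss no longer
    decreases.  The toy loss (w - 3)^2 stands in for L_all."""
    w, prev = w0, float("inf")
    for _ in range(100_000):           # safety cap for the sketch
        loss = (w - 3.0) ** 2
        if loss >= prev:               # step 4: loss stopped decreasing
            break
        prev = loss
        w -= lr * grad(w)              # step 3: SGD update
    return w

grad = lambda w: 2.0 * (w - 3.0)       # analytic gradient of the toy loss
w_final = sgd_until_converged(grad, w0=0.0)
assert abs(w_final - 3.0) < 0.1
```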
Therefore, with this 3D medical image segmentation method based on multi-dimensional and global-local combination, a 3D teacher model and a 2D teacher model are trained on labeled data; the bidirectional copy-paste of unlabeled and labeled data serves as input to the 3D student model, and the image slices after bidirectional copy-paste serve as input to the 2D student model; unlabeled images are fed to the 3D teacher model to obtain pseudo labels, and the pseudo labels are bidirectionally copy-pasted with the real labels to obtain mixed labels, which supervise the mixed output of the 3D and 2D student models. Meanwhile, the auxiliary task of image block contrastive learning is added to improve performance, and a student model capable of accurately segmenting 3D medical images is trained by stochastic gradient descent on the total loss.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that the technical solution of the invention may be modified or equivalently replaced without departing from its spirit and scope.

Claims (7)

1. A 3D medical image segmentation method based on multi-dimensional and global-local combination, comprising the steps of:
step 1, processing an unlabeled image and a labeled image through two parallel tasks to obtain the corresponding loss functions;
step 2, summing the loss functions obtained in step 1 to obtain a total loss function;
step 3, updating the student model of the 3D medical image segmentation network by stochastic gradient descent on the total loss function;
and step 4, repeating steps 1 to 3 until the total loss function no longer decreases, thereby optimizing the 3D medical image segmentation.
2. The 3D medical image segmentation method based on multi-dimensional and global-local combination according to claim 1, wherein the two parallel tasks in step 1 comprise a multi-dimensional joint segmentation task and an image block contrastive learning task.
3. The 3D medical image segmentation method based on multi-dimensional and global-local combination according to claim 2, wherein the specific process of the multi-dimensional joint segmentation task is as follows:
A1, two unlabeled images are input into the 3D U-Net teacher model, and reliable pseudo labels are obtained through the multi-level confidence module; the pseudo labels and the real labels are bidirectionally copy-pasted to obtain the mixed labels Y_out and Y_in;
A2, two labeled images and two unlabeled images are bidirectionally copy-pasted to obtain X_out and X_in, which are input into the 3D U-Net student model to produce the outputs Q_out and Q_in;
A3, X_out and X_in are input into the two-dimensional consistency learning module to obtain the corresponding two-dimensional segmentation outputs;
A4, the two-dimensional outputs are fused with Q_out and Q_in respectively through the fusion module to obtain the fused predictions;
A5, the fused predictions are compared with Y_out and Y_in respectively to compute the two segmentation loss functions L_out and L_in.
4. The 3D medical image segmentation method based on multi-dimensional and global-local combination according to claim 3, wherein the specific process of the image block contrastive learning task is as follows:
B1, X_out and X_in, obtained by bidirectional copy-paste of two labeled images and two unlabeled images, are divided into blocks;
B2, the obtained small blocks are input into the 3D U-Net teacher model and the 3D U-Net student model respectively to obtain their outputs;
B3, the block contrastive learning loss functions are calculated: the outputs of the same block from the 3D U-Net teacher model and the 3D U-Net student model form a positive pair, while the outputs of different blocks form negative pairs.
5. The 3D medical image segmentation method based on multi-dimensional and global-local combination according to claim 4, wherein the calculation expression of the total loss function in step 2 is as follows:
L_all = L_out + L_in + L^cl_out + L^cl_in
wherein L_all represents the total loss function; L_in and L_out represent the segmentation losses against the mixed labels; L^cl_in and L^cl_out represent the block contrastive learning losses; M is the mask used in bidirectional copy-paste; Q'_out and Q'_in represent the fusion of the three-dimensional and two-dimensional segmentation results; Y_out and Y_in represent the mixed labels obtained after bidirectional copy-paste of the pseudo labels and the real labels; L_seg is a linear combination of the Dice loss and the cross-entropy loss, with Dice = 2|X∩Y| / (|X| + |Y|), where |X| and |Y| respectively denote the real label and the predicted segmentation mask; N represents the number of samples, m the number of image blocks, and d the embedding distance, computed from the per-pixel outputs of X_out and X_in under the student model and the teacher model respectively.
6. The 3D medical image segmentation method based on multi-dimensional and global-local combination according to claim 5, wherein the specific process by which the two unlabeled images in A1 are input into the 3D U-Net teacher model and reliable pseudo labels are obtained through the multi-level confidence module is as follows: the 3D U-Net teacher model comprises an encoder and a decoder, the decoder comprising multiple layers of networks; bilinear interpolation is used to resize the outputs of three of the layers to the same spatial size, and channel pooling is used to equalize the number of channels of each layer's output; predictions are obtained using Monte Carlo Dropout, and since entropy has a fixed numerical range, the predictive entropy is used to measure the multi-scale uncertainty u; u is then used as an adaptive weight, so that pseudo-label predictions with high accuracy receive a large weight, yielding reliable pseudo labels.
7. The 3D medical image segmentation method based on multi-dimensional and global-local combination according to claim 6, wherein the two-dimensional consistency learning module in A3 performs a dimension-reduction operation by taking two-dimensional slices of the three-dimensional input image frame by frame and uses each two-dimensional sub-image as input to a two-dimensional segmentation network, which captures finer local differences at the slice level.
CN202311852915.0A 2023-12-28 2023-12-28 3D medical image segmentation method based on multi-dimensional and global local combination Active CN117830324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311852915.0A CN117830324B (en) 2023-12-28 2023-12-28 3D medical image segmentation method based on multi-dimensional and global local combination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311852915.0A CN117830324B (en) 2023-12-28 2023-12-28 3D medical image segmentation method based on multi-dimensional and global local combination

Publications (2)

Publication Number Publication Date
CN117830324A true CN117830324A (en) 2024-04-05
CN117830324B CN117830324B (en) 2024-07-12

Family

ID=90520362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311852915.0A Active CN117830324B (en) 2023-12-28 2023-12-28 3D medical image segmentation method based on multi-dimensional and global local combination

Country Status (1)

Country Link
CN (1) CN117830324B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826458A (en) * 2019-10-31 2020-02-21 河海大学 Multispectral remote sensing image change detection method and system based on deep learning
CN112801107A (en) * 2021-02-01 2021-05-14 联想(北京)有限公司 Image segmentation method and electronic equipment
US20220309674A1 (en) * 2021-03-26 2022-09-29 Nanjing University Of Posts And Telecommunications Medical image segmentation method based on u-net
CN115331009A (en) * 2022-08-17 2022-11-11 西安理工大学 Medical image segmentation method based on multitask MeanTeacher
CN115861164A (en) * 2022-09-16 2023-03-28 重庆邮电大学 Medical image segmentation method based on multi-field semi-supervision
CN116468746A (en) * 2023-03-27 2023-07-21 华东师范大学 Bidirectional copy-paste semi-supervised medical image segmentation method
CN116664588A (en) * 2023-05-29 2023-08-29 华中科技大学 Mask modeling-based 3D medical image segmentation model building method and application thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Jing et al., "Research on Medical Image Segmentation Based on Improved FLICM", Microcomputer & Its Applications, vol. 35, no. 23, 31 December 2016 (2016-12-31) *

Also Published As

Publication number Publication date
CN117830324B (en) 2024-07-12

Similar Documents

Publication Publication Date Title
CN108428229B (en) Lung texture recognition method based on appearance and geometric features extracted by deep neural network
WO2022001623A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
WO2018023734A1 (en) Significance testing method for 3d image
CN109903301B (en) Image contour detection method based on multistage characteristic channel optimization coding
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN110276402B (en) Salt body identification method based on deep learning semantic boundary enhancement
CN111445474B (en) Kidney CT image segmentation method based on bidirectional re-attention depth network
CN102542302A (en) Automatic complicated target identification method based on hierarchical object semantic graph
CN115761222B (en) Image segmentation method, remote sensing image segmentation method and device
CN113111716B (en) Remote sensing image semiautomatic labeling method and device based on deep learning
CN104517120B (en) Orthogonal matched remote sensing images scene classification method is layered based on multichannel
CN110866921A (en) Weakly supervised vertebral body segmentation method and system based on self-training and slice propagation
CN117557414B (en) Cultivated land supervision method, device, equipment and storage medium based on automatic interpretation of remote sensing image
CN116486408B (en) Cross-domain semantic segmentation method and device for remote sensing image
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
CN115966302A (en) Semi-supervised gonitis auxiliary analysis method based on deep contrast learning
CN117523194A (en) Image segmentation method based on sparse labeling
CN113591633A (en) Object-oriented land utilization information interpretation method based on dynamic self-attention Transformer
Li et al. Automatic annotation algorithm of medical radiological images using convolutional neural network
CN110533074B (en) Automatic image category labeling method and system based on double-depth neural network
CN115760869A (en) Attention-guided non-linear disturbance consistency semi-supervised medical image segmentation method
CN117152427A (en) Remote sensing image semantic segmentation method and system based on diffusion model and knowledge distillation
Nakhaee et al. DeepRadiation: An intelligent augmented reality platform for predicting urban energy performance just through 360 panoramic streetscape images utilizing various deep learning models
CN102609721B (en) Remote sensing image clustering method
CN113052759B (en) Scene complex text image editing method based on MASK and automatic encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant