CN111161249A - Unsupervised medical image segmentation method based on domain adaptation - Google Patents

Unsupervised medical image segmentation method based on domain adaptation

Info

Publication number
CN111161249A
Authority
CN
China
Prior art keywords: encoder, variational, data, target, representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911401973.5A
Other languages
Chinese (zh)
Other versions
CN111161249B (en)
Inventor
庄吓海
吴富平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201911401973.5A priority Critical patent/CN111161249B/en
Publication of CN111161249A publication Critical patent/CN111161249A/en
Application granted granted Critical
Publication of CN111161249B publication Critical patent/CN111161249B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a domain adaptation-based unsupervised medical image segmentation method, which comprises the following steps: S1, acquiring labeled imaging data of different modalities, containing the same structure as the target data, as source data; S2, introducing hidden variables, and constructing a source variational self-encoder and a target variational self-encoder which have the same structure and are used for image segmentation; S3, obtaining the loss functions of the two variational self-encoders; S4, estimating the probability distributions of the hidden variables of the two variational self-encoders, and calculating the difference between the hidden-variable probability distributions; S5, combining the loss functions and the difference between the hidden-variable probability distributions into an overall loss function, and optimizing the two variational self-encoders with the overall loss function; and S6, segmenting the target data with the optimized target variational self-encoder. Compared with the prior art, the method trains simply and quickly and has strong generalization capability.

Description

Unsupervised medical image segmentation method based on domain adaptation
Technical Field
The invention relates to the technical field of image processing, in particular to a domain-adaptation-based unsupervised medical image segmentation method.
Background
Accurate cardiac segmentation plays a very important auxiliary role in many clinical applications, such as three-dimensional modeling of the heart and cardiac function analysis, and multi-modality medical imaging is now widely used in clinical practice. However, manually segmenting cardiac images of all modalities is time-consuming and labor-intensive, and segmentation results differ between physicians. To reduce the workload and establish a uniform segmentation standard, automated computer segmentation is very important.
Learning-based methods can produce the desired automated segmentation model. To reduce the amount of labeled data required for training, a very useful approach is to exploit existing cardiac image data with a gold standard, transfer and learn the anatomical structure information contained in it, and apply the learned knowledge to target images of other modalities to be segmented, thereby achieving automatic segmentation. However, experiments show that a segmentation model trained on data of one modality usually performs poorly when applied directly to target data of other modalities, which is caused by the difference in distribution between modalities. Domain adaptation methods address this problem well: in the segmentation setting, a domain adaptation method generally maps all data to the same hidden-variable feature space, uses an optimization method to match the distributions of the different modalities in that feature space to obtain modality-independent features, and finally performs supervised learning with these features and the labeled source data, yielding a segmentation model with good generalization capability on the target data.
At present, domain-adaptation-based unsupervised segmentation methods adopt an adversarial neural network to force the hidden variables of different domains to become modality-independent. This strategy introduces a discriminator network and alternately updates the generator and the discriminator until the discriminator can no longer identify which modality a hidden variable comes from. However, it is generally difficult for this approach to find a Nash equilibrium point during optimization, and the training process is complicated. To speed up training, explicit metrics of distribution difference have been studied; however, general explicit metrics, e.g. those based on simple statistics, have only been studied for classification problems and are difficult to use for segmentation, because such strategies ignore the spatial correlation of images.
Disclosure of Invention
The present invention is directed to overcome the above-mentioned drawbacks of the prior art, and to provide a domain-adaptive unsupervised medical image segmentation method.
The purpose of the invention can be realized by the following technical scheme:
a domain-adaptive unsupervised medical image segmentation method, comprising the steps of:
S1: acquiring, as source data, labeled imaging data of different modalities that contain the same structure as the target data;
S2: introducing hidden variables for the source data and the target data respectively, and constructing two variational self-encoders with the same structure, namely a source variational self-encoder for source-data image segmentation and a target variational self-encoder for target-data image segmentation;
S3: obtaining the loss functions of the two variational self-encoders respectively;
S4: estimating the probability distributions of the hidden variables of the two variational self-encoders respectively, and calculating the difference between the hidden-variable probability distributions;
S5: combining the loss functions of the two variational self-encoders and the difference between the hidden-variable probability distributions into an overall loss function, and optimizing the two variational self-encoders with the overall loss function;
S6: segmenting the target data with the optimized target variational self-encoder to obtain the segmentation result.
The variational self-encoder comprises an encoder, a decoder and a prediction module, wherein the encoder takes the original image as input and outputs the hidden variable, the prediction module takes the hidden variable as input and outputs the segmentation result, and the decoder takes the hidden variable and the segmentation result as input simultaneously and outputs a reconstruction of the original image.
The loss function of the source variational self-encoder is:
$$\mathrm{Loss1} = -LB_{VAE}(\theta_S,\phi_S),$$
$$LB_{VAE}(\theta_S,\phi_S) = -D_{KL}\left(q_{\phi_S}(z_S\mid x_S)\,\|\,p(z_S)\right) + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\left[\log p_{\theta_S}(x_S\mid y_S,z_S)\right] + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\left[\log p_{\theta_S}(y_S\mid z_S)\right],$$
wherein Loss1 is the loss of the source variational self-encoder; $LB_{VAE}(\theta_S,\phi_S)$ is the objective function of the source variational self-encoder, namely the variational lower bound of the likelihood function of the source data; $\theta_S$ and $\phi_S$ are the parameters of the source variational self-encoder to be learned; $x_S$ is the source data; $y_S$ is the label of the source data; $z_S$ is the hidden variable of the source data; $D_{KL}$ denotes the Kullback-Leibler divergence; $q_{\phi_S}(z_S\mid x_S)$ denotes the approximate conditional probability distribution of $z_S$ given the source data $x_S$, with $z_S$ obeying the Gaussian distribution $N(U_S,\Sigma_S)$, where $U_S=(u_{S1},u_{S2},\ldots,u_{Sn})$ contains the means of the sub-variables of the hidden variable $z_S$, $\Sigma_S=\mathrm{diag}(\lambda_{S1},\lambda_{S2},\ldots,\lambda_{Sn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of the hidden variable $z_S$; $p(z_S)$ denotes the true (prior) probability distribution of $z_S$; $p_{\theta_S}(x_S\mid y_S,z_S)$ denotes the conditional probability distribution of $x_S$ given $y_S$ and $z_S$; $p_{\theta_S}(y_S\mid z_S)$ denotes the conditional probability distribution of $y_S$ given $z_S$; and $\mathbb{E}$ denotes the expectation with respect to the corresponding variable.
The loss function of the target variational self-encoder is:
$$\mathrm{Loss2} = -LB_{VAE}(\theta_T,\phi_T),$$
$$LB_{VAE}(\theta_T,\phi_T) = -D_{KL}\left(q_{\phi_T}(z_T\mid x_T)\,\|\,p(z_T)\right) + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\left[\log p_{\theta_T}(x_T\mid \hat{y}_T,z_T)\right] + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\left[\log p_{\theta_T}(\hat{y}_T\mid z_T)\right],$$
wherein Loss2 is the loss of the target variational self-encoder; $LB_{VAE}(\theta_T,\phi_T)$ is the objective function of the target variational self-encoder, namely the variational lower bound of the likelihood function of the target data; $\theta_T$ and $\phi_T$ are the parameters of the target variational self-encoder to be learned; $x_T$ is the target data; $\hat{y}_T$ is the predicted label of the target data; $z_T$ is the hidden variable of the target data; $D_{KL}$ denotes the Kullback-Leibler divergence; $q_{\phi_T}(z_T\mid x_T)$ denotes the approximate conditional probability distribution of $z_T$ given the target data $x_T$, with $z_T$ obeying the Gaussian distribution $N(U_T,\Sigma_T)$, where $U_T=(u_{T1},u_{T2},\ldots,u_{Tn})$ contains the means of the sub-variables of the hidden variable $z_T$, $\Sigma_T=\mathrm{diag}(\lambda_{T1},\lambda_{T2},\ldots,\lambda_{Tn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of the hidden variable $z_T$; $p(z_T)$ denotes the true (prior) probability distribution of $z_T$; $p_{\theta_T}(x_T\mid \hat{y}_T,z_T)$ denotes the conditional probability distribution of $x_T$ given $\hat{y}_T$ and $z_T$; $p_{\theta_T}(\hat{y}_T\mid z_T)$ denotes the conditional probability distribution of $\hat{y}_T$ given $z_T$; and $\mathbb{E}$ denotes the expectation with respect to the corresponding variable.
Step S4 estimates the probability distributions of the hidden variables of the two variational self-encoders using Monte Carlo sampling.
The difference between the hidden-variable probability distributions of the two variational self-encoders is:
$$D(z_S,z_T) = \frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[G\!\left(x_S^{(i)},x_S^{(j)}\right)+G\!\left(x_T^{(i)},x_T^{(j)}\right)-2\,G\!\left(x_S^{(i)},x_T^{(j)}\right)\right],$$
where each pairwise term is the overlap integral of the two diagonal Gaussian conditional distributions of the hidden variables associated with the sampled data, for example
$$G\!\left(x_S^{(i)},x_T^{(j)}\right)=\prod_{l=1}^{n}\frac{1}{\sqrt{2\pi\left(\lambda_{Sl}^{(i)}+\lambda_{Tl}^{(j)}\right)}}\exp\!\left(-\frac{\left(u_{Sl}^{(i)}-u_{Tl}^{(j)}\right)^{2}}{2\left(\lambda_{Sl}^{(i)}+\lambda_{Tl}^{(j)}\right)}\right),$$
with the source-source and target-target terms defined analogously. Here $D(z_S,z_T)$ represents the difference between the hidden-variable probability distributions of the two variational self-encoders; $z_S$ is the hidden variable of the source data; $z_T$ is the hidden variable of the target data; $M$ is the sample size of the Monte Carlo samples; $x_S^{(i)}$ and $x_S^{(j)}$ denote the $i$-th and $j$-th sampled source data; $x_T^{(i)}$ and $x_T^{(j)}$ denote the $i$-th and $j$-th sampled target data; $u_{Sl}^{(i)}$ denotes the $l$-th element of the mean vector $U_S$ of the conditional distribution of the hidden variable $z_S$ corresponding to $x_S^{(i)}$, and $\lambda_{Sl}^{(i)}$ denotes the variance of its $l$-th element; $u_{Sl}^{(j)}$ and $\lambda_{Sl}^{(j)}$ are defined analogously for $x_S^{(j)}$; $u_{Tl}^{(i)}$ denotes the $l$-th element of the mean vector $U_T$ of the conditional distribution of the hidden variable $z_T$ corresponding to $x_T^{(i)}$, and $\lambda_{Tl}^{(i)}$ denotes the variance of its $l$-th element; $u_{Tl}^{(j)}$ and $\lambda_{Tl}^{(j)}$ are defined analogously for $x_T^{(j)}$.
The overall loss function of step S5 is specifically:
$$\mathrm{FullLoss}=\alpha_1\,\mathrm{Loss1}+\alpha_2\,\mathrm{Loss2}+\alpha_3\,D(z_S,z_T),$$
where FullLoss is the total loss, Loss1 is the loss of the source variational self-encoder, Loss2 is the loss of the target variational self-encoder, $D(z_S,z_T)$ is the difference between the hidden-variable probability distributions, $z_S$ is the hidden variable of the source data, $z_T$ is the hidden variable of the target data, and $\alpha_1$, $\alpha_2$ and $\alpha_3$ are balance parameters.
Step S5 optimizes the parameters of the two variational self-encoders with the goal of minimizing the overall loss function.
Compared with the prior art, the invention has the following advantages:
(1) by estimating the distributions of the hidden variables and computing the difference between these distributions, the invention provides an effective explicit method for measuring distribution differences and applies it to the domain-adaptive segmentation problem;
(2) the invention is fully automatic, computationally fast and convenient to implement.
Drawings
FIG. 1 is a block flow diagram of the domain adaptation-based unsupervised medical image segmentation method of the present invention;
FIG. 2 is a schematic structural diagram of a variational self-encoder according to the present invention.
In the figure, 1 is an encoder, 2 is a decoder, and 3 is a prediction module.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following description of the embodiments is merely an illustrative example; the present invention is not intended to be limited to these embodiments or to their applications and uses.
Examples
As shown in fig. 1, a domain adaptation-based unsupervised medical image segmentation method includes the following steps:
S1: acquiring, as source data, labeled imaging data of different modalities that contain the same structure as the target data;
S2: introducing hidden variables for the source data and the target data respectively, and constructing two variational self-encoders with the same structure, namely a source variational self-encoder for source-data image segmentation and a target variational self-encoder for target-data image segmentation;
S3: obtaining the loss functions of the two variational self-encoders respectively;
S4: estimating the probability distributions of the hidden variables of the two variational self-encoders respectively, and calculating the difference between the hidden-variable probability distributions;
S5: combining the loss functions of the two variational self-encoders and the difference between the hidden-variable probability distributions into an overall loss function, and optimizing the two variational self-encoders with the overall loss function;
S6: segmenting the target data with the optimized target variational self-encoder to obtain the segmentation result.
As shown in fig. 2, the variational self-encoder includes an encoder 1, a decoder 2 and a prediction module 3, wherein the encoder 1 takes the original image as input and outputs the hidden variable, the prediction module 3 takes the hidden variable as input and outputs the segmentation result, and the decoder 2 takes the hidden variable and the segmentation result as input simultaneously and outputs a reconstruction of the original image. For the source data and the target data, the corresponding original images are the original image of the source data and the original image of the target data, respectively.
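The patent does not specify a concrete network architecture or framework; the following is a minimal PyTorch sketch of the three-module variational self-encoder described above. All layer configurations, channel counts and names (`VariationalSegmenter`, `base_ch`, `latent_ch`, `n_classes`) are illustrative assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class VariationalSegmenter(nn.Module):
    """Sketch of the variational self-encoder: encoder -> hidden variable z,
    prediction module z -> segmentation, decoder (z, segmentation) -> reconstruction."""

    def __init__(self, in_ch=1, n_classes=4, latent_ch=16, base_ch=32):
        super().__init__()
        # Encoder: original image -> mean and log-variance of the hidden variable z
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_mu = nn.Conv2d(base_ch, latent_ch, 1)
        self.to_logvar = nn.Conv2d(base_ch, latent_ch, 1)
        # Prediction module: hidden variable -> segmentation logits
        self.predictor = nn.Sequential(
            nn.Conv2d(latent_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, n_classes, 1),
        )
        # Decoder: hidden variable + segmentation -> reconstruction of the original image
        self.decoder = nn.Sequential(
            nn.Conv2d(latent_ch + n_classes, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, in_ch, 3, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, diag(exp(logvar)))
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        seg_logits = self.predictor(z)
        recon = self.decoder(torch.cat([z, seg_logits.softmax(dim=1)], dim=1))
        return mu, logvar, z, seg_logits, recon
```

The same class would be instantiated twice, once as the source variational self-encoder and once as the target variational self-encoder, since the two share the same structure.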
The loss function of the source variational self-encoder is:
$$\mathrm{Loss1} = -LB_{VAE}(\theta_S,\phi_S),$$
$$LB_{VAE}(\theta_S,\phi_S) = -D_{KL}\left(q_{\phi_S}(z_S\mid x_S)\,\|\,p(z_S)\right) + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\left[\log p_{\theta_S}(x_S\mid y_S,z_S)\right] + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\left[\log p_{\theta_S}(y_S\mid z_S)\right],$$
wherein Loss1 is the loss of the source variational self-encoder; $LB_{VAE}(\theta_S,\phi_S)$ is the objective function of the source variational self-encoder, namely the variational lower bound of the likelihood function of the source data; $\theta_S$ and $\phi_S$ are the parameters of the source variational self-encoder to be learned; $x_S$ is the source data; $y_S$ is the label of the source data; $z_S$ is the hidden variable of the source data; $D_{KL}$ denotes the Kullback-Leibler divergence; $q_{\phi_S}(z_S\mid x_S)$ denotes the approximate conditional probability distribution of $z_S$ given the source data $x_S$, with $z_S$ obeying the Gaussian distribution $N(U_S,\Sigma_S)$, where $U_S=(u_{S1},u_{S2},\ldots,u_{Sn})$ contains the means of the sub-variables of the hidden variable $z_S$, $\Sigma_S=\mathrm{diag}(\lambda_{S1},\lambda_{S2},\ldots,\lambda_{Sn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of the hidden variable $z_S$; $p(z_S)$ denotes the true (prior) probability distribution of $z_S$; $p_{\theta_S}(x_S\mid y_S,z_S)$ denotes the conditional probability distribution of $x_S$ given $y_S$ and $z_S$; $p_{\theta_S}(y_S\mid z_S)$ denotes the conditional probability distribution of $y_S$ given $z_S$; and $\mathbb{E}$ denotes the expectation with respect to the corresponding variable.
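As a hedged illustration only, the sketch below computes a loss of this form with the `VariationalSegmenter` sketch above, assuming a standard-normal prior $p(z_S)=N(0,I)$, a Gaussian reconstruction likelihood (implemented as mean squared error) and a categorical likelihood for the labels (cross-entropy). These concrete likelihood choices and the function name are assumptions, since the text only fixes the general form of the variational lower bound.

```python
import torch
import torch.nn.functional as F

def source_vae_loss(x_s, y_s, model):
    """Negative variational lower bound for the source variational self-encoder (Loss1).
    Assumed realization: standard-normal prior on z, Gaussian reconstruction likelihood (MSE),
    categorical likelihood for the segmentation labels (cross-entropy)."""
    mu, logvar, z, seg_logits, recon = model(x_s)

    # D_KL( q(z_S | x_S) || p(z_S) ) with p(z_S) = N(0, I); closed form for diagonal Gaussians
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1).mean()

    # -E[ log p(x_S | y_S, z_S) ]  (reconstruction term)
    recon_nll = F.mse_loss(recon, x_s)

    # -E[ log p(y_S | z_S) ]  (segmentation term; y_s holds integer class labels)
    seg_nll = F.cross_entropy(seg_logits, y_s)

    # Loss1 = -LB_VAE(theta_S, phi_S)
    return kl + recon_nll + seg_nll, (mu, logvar)
```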
The loss function of the target variational self-encoder is:
$$\mathrm{Loss2} = -LB_{VAE}(\theta_T,\phi_T),$$
$$LB_{VAE}(\theta_T,\phi_T) = -D_{KL}\left(q_{\phi_T}(z_T\mid x_T)\,\|\,p(z_T)\right) + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\left[\log p_{\theta_T}(x_T\mid \hat{y}_T,z_T)\right] + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\left[\log p_{\theta_T}(\hat{y}_T\mid z_T)\right],$$
wherein Loss2 is the loss of the target variational self-encoder; $LB_{VAE}(\theta_T,\phi_T)$ is the objective function of the target variational self-encoder, namely the variational lower bound of the likelihood function of the target data; $\theta_T$ and $\phi_T$ are the parameters of the target variational self-encoder to be learned; $x_T$ is the target data; $\hat{y}_T$ is the predicted label of the target data; $z_T$ is the hidden variable of the target data; $D_{KL}$ denotes the Kullback-Leibler divergence; $q_{\phi_T}(z_T\mid x_T)$ denotes the approximate conditional probability distribution of $z_T$ given the target data $x_T$, with $z_T$ obeying the Gaussian distribution $N(U_T,\Sigma_T)$, where $U_T=(u_{T1},u_{T2},\ldots,u_{Tn})$ contains the means of the sub-variables of the hidden variable $z_T$, $\Sigma_T=\mathrm{diag}(\lambda_{T1},\lambda_{T2},\ldots,\lambda_{Tn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of the hidden variable $z_T$; $p(z_T)$ denotes the true (prior) probability distribution of $z_T$; $p_{\theta_T}(x_T\mid \hat{y}_T,z_T)$ denotes the conditional probability distribution of $x_T$ given $\hat{y}_T$ and $z_T$; $p_{\theta_T}(\hat{y}_T\mid z_T)$ denotes the conditional probability distribution of $\hat{y}_T$ given $z_T$; and $\mathbb{E}$ denotes the expectation with respect to the corresponding variable.
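The target loss has the same form, with the network's own prediction $\hat{y}_T$ standing in for the missing gold standard. In the sketch below, which reuses the pieces above, the term for $p(\hat{y}_T\mid z_T)$ is realized as an entropy-style penalty on the predicted probabilities; this particular realization is an assumption, not something the text specifies.

```python
import torch
import torch.nn.functional as F

def target_vae_loss(x_t, model):
    """Negative variational lower bound for the target variational self-encoder (Loss2).
    The predicted segmentation (pseudo-label) replaces the unavailable gold standard."""
    mu, logvar, z, seg_logits, recon = model(x_t)
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1).mean()
    recon_nll = F.mse_loss(recon, x_t)
    # Assumed realization of -E[ log p(y_hat_T | z_T) ]: entropy of the network's own prediction
    probs = seg_logits.softmax(dim=1)
    pseudo_nll = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    return kl + recon_nll + pseudo_nll, (mu, logvar)
```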
Step S4 estimates the probability distributions of the hidden variables of the two variational self-encoders using Monte Carlo sampling.
The difference between the hidden-variable probability distributions of the two variational self-encoders is:
$$D(z_S,z_T) = \frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[G\!\left(x_S^{(i)},x_S^{(j)}\right)+G\!\left(x_T^{(i)},x_T^{(j)}\right)-2\,G\!\left(x_S^{(i)},x_T^{(j)}\right)\right],$$
where each pairwise term is the overlap integral of the two diagonal Gaussian conditional distributions of the hidden variables associated with the sampled data, for example
$$G\!\left(x_S^{(i)},x_T^{(j)}\right)=\prod_{l=1}^{n}\frac{1}{\sqrt{2\pi\left(\lambda_{Sl}^{(i)}+\lambda_{Tl}^{(j)}\right)}}\exp\!\left(-\frac{\left(u_{Sl}^{(i)}-u_{Tl}^{(j)}\right)^{2}}{2\left(\lambda_{Sl}^{(i)}+\lambda_{Tl}^{(j)}\right)}\right),$$
with the source-source and target-target terms defined analogously. Here $D(z_S,z_T)$ represents the difference between the hidden-variable probability distributions of the two variational self-encoders; $z_S$ is the hidden variable of the source data; $z_T$ is the hidden variable of the target data; $M$ is the sample size of the Monte Carlo samples; $x_S^{(i)}$ and $x_S^{(j)}$ denote the $i$-th and $j$-th sampled source data; $x_T^{(i)}$ and $x_T^{(j)}$ denote the $i$-th and $j$-th sampled target data; $u_{Sl}^{(i)}$ denotes the $l$-th element of the mean vector $U_S$ of the conditional distribution of the hidden variable $z_S$ corresponding to $x_S^{(i)}$, and $\lambda_{Sl}^{(i)}$ denotes the variance of its $l$-th element; $u_{Sl}^{(j)}$ and $\lambda_{Sl}^{(j)}$ are defined analogously for $x_S^{(j)}$; $u_{Tl}^{(i)}$ denotes the $l$-th element of the mean vector $U_T$ of the conditional distribution of the hidden variable $z_T$ corresponding to $x_T^{(i)}$, and $\lambda_{Tl}^{(i)}$ denotes the variance of its $l$-th element; $u_{Tl}^{(j)}$ and $\lambda_{Tl}^{(j)}$ are defined analogously for $x_T^{(j)}$.
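A minimal sketch of this computation follows, assuming the latent maps produced by the encoder are flattened so that each of the M Monte Carlo samples contributes one diagonal Gaussian with a mean vector and per-element variances. The pairwise Gaussian-overlap form used here follows the reconstruction above and should be read as an assumption about the exact closed form rather than the patented formula.

```python
import math
import torch

def gaussian_overlap(mu_a, var_a, mu_b, var_b):
    """Pairwise overlap integral of diagonal Gaussians:
    prod_l N(u_a,l ; u_b,l, lambda_a,l + lambda_b,l).  Inputs have shape (M, n)."""
    var_sum = var_a.unsqueeze(1) + var_b.unsqueeze(0)           # (M, M, n)
    diff_sq = (mu_a.unsqueeze(1) - mu_b.unsqueeze(0)).pow(2)    # (M, M, n)
    log_overlap = -0.5 * (torch.log(2 * math.pi * var_sum) + diff_sq / var_sum)
    return log_overlap.sum(dim=-1).exp()                        # (M, M)

def latent_distribution_difference(mu_s, var_s, mu_t, var_t):
    """D(z_S, z_T): squared L2 distance between the two mixture-of-Gaussians estimates
    built from M Monte Carlo samples per domain (assumed closed form, see text)."""
    m = mu_s.shape[0]
    return (gaussian_overlap(mu_s, var_s, mu_s, var_s).sum()
            + gaussian_overlap(mu_t, var_t, mu_t, var_t).sum()
            - 2.0 * gaussian_overlap(mu_s, var_s, mu_t, var_t).sum()) / (m * m)
```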
The overall loss function of step S5 is specifically:
$$\mathrm{FullLoss}=\alpha_1\,\mathrm{Loss1}+\alpha_2\,\mathrm{Loss2}+\alpha_3\,D(z_S,z_T),$$
where FullLoss is the total loss, Loss1 is the loss of the source variational self-encoder, Loss2 is the loss of the target variational self-encoder, $D(z_S,z_T)$ is the difference between the hidden-variable probability distributions, $z_S$ is the hidden variable of the source data, $z_T$ is the hidden variable of the target data, and $\alpha_1$, $\alpha_2$ and $\alpha_3$ are balance parameters.
Step S5 optimizes the parameters of the two variational self-encoders with the goal of minimizing the overall loss function.
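Putting the pieces together, here is a sketch of one optimization step that minimizes the overall loss, reusing the sketches above. The choice of optimizer, learning rate, the concrete values of the balance parameters, and the use of a single optimizer over both variational self-encoders are all assumptions made for illustration.

```python
import torch

# Hypothetical setup: two variational self-encoders with identical structure (see sketch above)
source_vae = VariationalSegmenter()
target_vae = VariationalSegmenter()
optimizer = torch.optim.Adam(
    list(source_vae.parameters()) + list(target_vae.parameters()), lr=1e-4)
alpha1, alpha2, alpha3 = 1.0, 1.0, 10.0   # balance parameters (assumed values)

def training_step(x_s, y_s, x_t):
    """One step minimizing FullLoss = a1*Loss1 + a2*Loss2 + a3*D(z_S, z_T)."""
    loss1, (mu_s, logvar_s) = source_vae_loss(x_s, y_s, source_vae)
    loss2, (mu_t, logvar_t) = target_vae_loss(x_t, target_vae)
    # Flatten the latent maps so that each of the M samples in the batch contributes
    # one diagonal Gaussian with mean vector U and variances lambda
    d = latent_distribution_difference(mu_s.flatten(1), logvar_s.flatten(1).exp(),
                                       mu_t.flatten(1), logvar_t.flatten(1).exp())
    full_loss = alpha1 * loss1 + alpha2 * loss2 + alpha3 * d
    optimizer.zero_grad()
    full_loss.backward()
    optimizer.step()
    return full_loss.item()
```

After optimization, only the target variational self-encoder is needed at inference time: feeding a target image through its encoder and prediction module yields the segmentation result of step S6.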
The above embodiments are merely examples and do not limit the scope of the present invention. These embodiments may be implemented in other various manners, and various omissions, substitutions, and changes may be made without departing from the technical spirit of the present invention.

Claims (8)

1. A domain-adaptive unsupervised medical image segmentation method, characterized in that the method comprises the following steps:
S1: acquiring, as source data, labeled imaging data of different modalities that contain the same structure as the target data;
S2: introducing hidden variables for the source data and the target data respectively, and constructing two variational self-encoders with the same structure, namely a source variational self-encoder for source-data image segmentation and a target variational self-encoder for target-data image segmentation;
S3: obtaining the loss functions of the two variational self-encoders respectively;
S4: estimating the probability distributions of the hidden variables of the two variational self-encoders respectively, and calculating the difference between the hidden-variable probability distributions;
S5: combining the loss functions of the two variational self-encoders and the difference between the hidden-variable probability distributions into an overall loss function, and optimizing the two variational self-encoders with the overall loss function;
S6: segmenting the target data with the optimized target variational self-encoder to obtain the segmentation result.
2. The method of claim 1, wherein the variational self-encoder comprises an encoder, a decoder and a prediction module, the encoder takes the original image as input and outputs the hidden variable, the prediction module takes the hidden variable as input and outputs the segmentation result, and the decoder takes the hidden variable and the segmentation result as input simultaneously and outputs a reconstruction of the original image.
3. The unsupervised medical image segmentation method based on domain adaptation as claimed in claim 1, wherein the loss function of the source variational auto-encoder is:
$$\mathrm{Loss1} = -LB_{VAE}(\theta_S,\phi_S),$$
$$LB_{VAE}(\theta_S,\phi_S) = -D_{KL}\left(q_{\phi_S}(z_S\mid x_S)\,\|\,p(z_S)\right) + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\left[\log p_{\theta_S}(x_S\mid y_S,z_S)\right] + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\left[\log p_{\theta_S}(y_S\mid z_S)\right],$$
wherein Loss1 is the loss of the source variational self-encoder; $LB_{VAE}(\theta_S,\phi_S)$ is the objective function of the source variational self-encoder, namely the variational lower bound of the likelihood function of the source data; $\theta_S$ and $\phi_S$ are the parameters of the source variational self-encoder to be learned; $x_S$ is the source data; $y_S$ is the label of the source data; $z_S$ is the hidden variable of the source data; $D_{KL}$ denotes the Kullback-Leibler divergence; $q_{\phi_S}(z_S\mid x_S)$ denotes the approximate conditional probability distribution of $z_S$ given the source data $x_S$, with $z_S$ obeying the Gaussian distribution $N(U_S,\Sigma_S)$, where $U_S=(u_{S1},u_{S2},\ldots,u_{Sn})$ contains the means of the sub-variables of the hidden variable $z_S$, $\Sigma_S=\mathrm{diag}(\lambda_{S1},\lambda_{S2},\ldots,\lambda_{Sn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of the hidden variable $z_S$; $p(z_S)$ denotes the true (prior) probability distribution of $z_S$; $p_{\theta_S}(x_S\mid y_S,z_S)$ denotes the conditional probability distribution of $x_S$ given $y_S$ and $z_S$; $p_{\theta_S}(y_S\mid z_S)$ denotes the conditional probability distribution of $y_S$ given $z_S$; and $\mathbb{E}$ denotes the expectation with respect to the corresponding variable.
4. The unsupervised medical image segmentation method based on domain adaptation as claimed in claim 1, wherein the loss function of the target variational auto-encoder is:
$$\mathrm{Loss2} = -LB_{VAE}(\theta_T,\phi_T),$$
$$LB_{VAE}(\theta_T,\phi_T) = -D_{KL}\left(q_{\phi_T}(z_T\mid x_T)\,\|\,p(z_T)\right) + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\left[\log p_{\theta_T}(x_T\mid \hat{y}_T,z_T)\right] + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\left[\log p_{\theta_T}(\hat{y}_T\mid z_T)\right],$$
wherein Loss2 is the loss of the target variational self-encoder; $LB_{VAE}(\theta_T,\phi_T)$ is the objective function of the target variational self-encoder, namely the variational lower bound of the likelihood function of the target data; $\theta_T$ and $\phi_T$ are the parameters of the target variational self-encoder to be learned; $x_T$ is the target data; $\hat{y}_T$ is the predicted label of the target data; $z_T$ is the hidden variable of the target data; $D_{KL}$ denotes the Kullback-Leibler divergence; $q_{\phi_T}(z_T\mid x_T)$ denotes the approximate conditional probability distribution of $z_T$ given the target data $x_T$, with $z_T$ obeying the Gaussian distribution $N(U_T,\Sigma_T)$, where $U_T=(u_{T1},u_{T2},\ldots,u_{Tn})$ contains the means of the sub-variables of the hidden variable $z_T$, $\Sigma_T=\mathrm{diag}(\lambda_{T1},\lambda_{T2},\ldots,\lambda_{Tn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of the hidden variable $z_T$; $p(z_T)$ denotes the true (prior) probability distribution of $z_T$; $p_{\theta_T}(x_T\mid \hat{y}_T,z_T)$ denotes the conditional probability distribution of $x_T$ given $\hat{y}_T$ and $z_T$; $p_{\theta_T}(\hat{y}_T\mid z_T)$ denotes the conditional probability distribution of $\hat{y}_T$ given $z_T$; and $\mathbb{E}$ denotes the expectation with respect to the corresponding variable.
5. The method of claim 1, wherein step S4 adopts Monte Carlo sampling to estimate the probability distributions of the hidden variables of the two variational self-encoders.
6. The unsupervised medical image segmentation method based on domain adaptation as claimed in claim 5, wherein the difference between the hidden-variable probability distributions of the two variational self-encoders is:
$$D(z_S,z_T) = \frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[G\!\left(x_S^{(i)},x_S^{(j)}\right)+G\!\left(x_T^{(i)},x_T^{(j)}\right)-2\,G\!\left(x_S^{(i)},x_T^{(j)}\right)\right],$$
where each pairwise term is the overlap integral of the two diagonal Gaussian conditional distributions of the hidden variables associated with the sampled data, for example
$$G\!\left(x_S^{(i)},x_T^{(j)}\right)=\prod_{l=1}^{n}\frac{1}{\sqrt{2\pi\left(\lambda_{Sl}^{(i)}+\lambda_{Tl}^{(j)}\right)}}\exp\!\left(-\frac{\left(u_{Sl}^{(i)}-u_{Tl}^{(j)}\right)^{2}}{2\left(\lambda_{Sl}^{(i)}+\lambda_{Tl}^{(j)}\right)}\right),$$
with the source-source and target-target terms defined analogously; wherein $D(z_S,z_T)$ represents the difference between the hidden-variable probability distributions of the two variational self-encoders; $z_S$ is the hidden variable of the source data; $z_T$ is the hidden variable of the target data; $M$ is the sample size of the Monte Carlo samples; $x_S^{(i)}$ and $x_S^{(j)}$ denote the $i$-th and $j$-th sampled source data; $x_T^{(i)}$ and $x_T^{(j)}$ denote the $i$-th and $j$-th sampled target data; $u_{Sl}^{(i)}$ denotes the $l$-th element of the mean vector $U_S$ of the conditional distribution of the hidden variable $z_S$ corresponding to $x_S^{(i)}$, and $\lambda_{Sl}^{(i)}$ denotes the variance of its $l$-th element; $u_{Sl}^{(j)}$ and $\lambda_{Sl}^{(j)}$ are defined analogously for $x_S^{(j)}$; $u_{Tl}^{(i)}$ denotes the $l$-th element of the mean vector $U_T$ of the conditional distribution of the hidden variable $z_T$ corresponding to $x_T^{(i)}$, and $\lambda_{Tl}^{(i)}$ denotes the variance of its $l$-th element; $u_{Tl}^{(j)}$ and $\lambda_{Tl}^{(j)}$ are defined analogously for $x_T^{(j)}$.
7. The domain-adaptation-based unsupervised medical image segmentation method of claim 1, wherein the overall loss function of step S5 is specifically:
$$\mathrm{FullLoss}=\alpha_1\,\mathrm{Loss1}+\alpha_2\,\mathrm{Loss2}+\alpha_3\,D(z_S,z_T),$$
where FullLoss is the total loss, Loss1 is the loss of the source variational self-encoder, Loss2 is the loss of the target variational self-encoder, $D(z_S,z_T)$ is the difference between the hidden-variable probability distributions, $z_S$ is the hidden variable of the source data, $z_T$ is the hidden variable of the target data, and $\alpha_1$, $\alpha_2$ and $\alpha_3$ are balance parameters.
8. The method of claim 1, wherein step S5 optimizes the parameters of the two variational self-encoders with the goal of minimizing the overall loss function.
CN201911401973.5A 2019-12-31 2019-12-31 Unsupervised medical image segmentation method based on domain adaptation Active CN111161249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401973.5A CN111161249B (en) 2019-12-31 2019-12-31 Unsupervised medical image segmentation method based on domain adaptation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911401973.5A CN111161249B (en) 2019-12-31 2019-12-31 Unsupervised medical image segmentation method based on domain adaptation

Publications (2)

Publication Number Publication Date
CN111161249A true CN111161249A (en) 2020-05-15
CN111161249B CN111161249B (en) 2023-06-02

Family

ID=70559345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401973.5A Active CN111161249B (en) 2019-12-31 2019-12-31 Unsupervised medical image segmentation method based on domain adaptation

Country Status (1)

Country Link
CN (1) CN111161249B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667426A (en) * 2020-06-04 2020-09-15 四川轻化工大学 Medical image enhancement method based on frequency domain variation
CN112417219A (en) * 2020-11-16 2021-02-26 吉林大学 Hyper-graph convolution-based hyper-edge link prediction method
CN113160138A (en) * 2021-03-24 2021-07-23 山西大学 Brain nuclear magnetic resonance image segmentation method and system
WO2022078568A1 (en) * 2020-10-12 2022-04-21 Haag-Streit Ag A method for providing a perimetry processing tool and a perimetry device with such a tool
WO2023065070A1 (en) * 2021-10-18 2023-04-27 中国科学院深圳先进技术研究院 Multi-domain medical image segmentation method based on domain adaptation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685087A (en) * 2017-10-18 2019-04-26 富士通株式会社 Information processing method and device and information detecting method and device
CN110020623A (en) * 2019-04-04 2019-07-16 中山大学 Physical activity identifying system and method based on condition variation self-encoding encoder
JP2019139482A (en) * 2018-02-09 2019-08-22 株式会社デンソーアイティーラボラトリ Information estimation device and information estimation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685087A (en) * 2017-10-18 2019-04-26 富士通株式会社 Information processing method and device and information detecting method and device
JP2019139482A (en) * 2018-02-09 2019-08-22 株式会社デンソーアイティーラボラトリ Information estimation device and information estimation method
CN110020623A (en) * 2019-04-04 2019-07-16 中山大学 Physical activity identifying system and method based on condition variation self-encoding encoder

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
支恩玮; 闫飞; 任密蜂; 阎高伟: "Soft sensing of wet ball mill load parameters based on a transfer variational autoencoder and label mapping", CIESC Journal (化工学报) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667426A (en) * 2020-06-04 2020-09-15 四川轻化工大学 Medical image enhancement method based on frequency domain variation
CN111667426B (en) * 2020-06-04 2023-10-13 四川轻化工大学 Medical image enhancement method based on frequency domain variation
WO2022078568A1 (en) * 2020-10-12 2022-04-21 Haag-Streit Ag A method for providing a perimetry processing tool and a perimetry device with such a tool
CN112417219A (en) * 2020-11-16 2021-02-26 吉林大学 Hyper-graph convolution-based hyper-edge link prediction method
CN113160138A (en) * 2021-03-24 2021-07-23 山西大学 Brain nuclear magnetic resonance image segmentation method and system
CN113160138B (en) * 2021-03-24 2022-07-19 山西大学 Brain nuclear magnetic resonance image segmentation method and system
WO2023065070A1 (en) * 2021-10-18 2023-04-27 中国科学院深圳先进技术研究院 Multi-domain medical image segmentation method based on domain adaptation

Also Published As

Publication number Publication date
CN111161249B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN111161249A (en) Unsupervised medical image segmentation method based on domain adaptation
CN108062753B (en) Unsupervised domain self-adaptive brain tumor semantic segmentation method based on deep counterstudy
CN106547880B (en) Multi-dimensional geographic scene identification method fusing geographic area knowledge
KR101908680B1 (en) A method and apparatus for machine learning based on weakly supervised learning
CN111127364B (en) Image data enhancement strategy selection method and face recognition image data enhancement method
CN111052128B (en) Descriptor learning method for detecting and locating objects in video
AU2020102667A4 (en) Adversarial training for large scale healthcare data using machine learning system
Liu et al. Generative self-training for cross-domain unsupervised tagged-to-cine mri synthesis
Lee et al. Localization uncertainty estimation for anchor-free object detection
CN109447096B (en) Glance path prediction method and device based on machine learning
CN114549470B (en) Hand bone critical area acquisition method based on convolutional neural network and multi-granularity attention
US20230140696A1 (en) Method and system for optimizing parameter intervals of manufacturing processes based on prediction intervals
Franchi et al. Latent discriminant deterministic uncertainty
Kalash et al. Relative saliency and ranking: Models, metrics, data and benchmarks
Yu et al. A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans
CN114266896A (en) Image labeling method, model training method and device, electronic equipment and medium
Wang et al. MetaMorph: learning metamorphic image transformation with appearance changes
CN111582449A (en) Training method, device, equipment and storage medium for target domain detection network
Akrami et al. Quantile regression for uncertainty estimation in vaes with applications to brain lesion detection
Kamraoui et al. Popcorn: Progressive pseudo-labeling with consistency regularization and neighboring
JP6927161B2 (en) Learning devices, predictors, methods, and programs
Xiong et al. On training deep 3d cnn models with dependent samples in neuroimaging
Chen et al. A unified framework for generative data augmentation: A comprehensive survey
CN112733849A (en) Model training method, image rotation angle correction method and device
Landman et al. Multiatlas segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant