CN111161249A - Unsupervised medical image segmentation method based on domain adaptation - Google Patents
- Publication number
- CN111161249A (application number CN201911401973.5A)
- Authority
- CN
- China
- Prior art keywords
- encoder
- variational
- data
- target
- representing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Probability & Statistics with Applications (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a domain-adaptation-based unsupervised medical image segmentation method, which comprises the following steps: S1, acquiring labeled imaging data of a different modality that shares the same anatomical structure as the target data, to serve as source data; S2, introducing hidden variables and constructing two structurally identical variational self-encoders for image segmentation, a source variational self-encoder and a target variational self-encoder; S3, obtaining the loss functions of the two variational self-encoders; S4, estimating the probability distributions of the hidden variables of the two variational self-encoders and calculating the difference between these distributions; S5, combining the two loss functions with the hidden-variable distribution difference into an overall loss function, and optimizing the two variational self-encoders with it; S6, segmenting the target data with the optimized target variational self-encoder. Compared with the prior art, the method is simple and fast to train and has strong generalization capability.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a domain-adaptation-based unsupervised medical image segmentation method.
Background
Accurate cardiac segmentation plays an important auxiliary role in many clinical applications, such as three-dimensional modeling of the heart and cardiac function analysis, and multi-modal medical imaging is widely used in clinical practice. However, manually segmenting cardiac images of all modalities is time- and labor-consuming, and segmentation results also differ between physicians. To reduce the workload and establish a uniform segmentation standard, automated computer segmentation is very important.
A learning-based training method can produce the desired automated segmentation model. To reduce the labeled data required for training, a very useful approach is to exploit existing cardiac image data with a gold standard: transfer the anatomical structure information learned from it to target images of another modality, thereby achieving automatic segmentation of those images. However, experiments show that a segmentation model trained on data of one modality usually performs poorly when applied directly to target data of another modality, which is caused by the difference in distribution between modalities. Domain adaptation addresses this problem well. For segmentation, a domain adaptation method generally maps all data to the same hidden-variable feature space, matches the distributions of the different modalities in that space by optimization to obtain modality-independent features, and finally performs supervised learning with these features and the labeled source data, yielding a segmentation model with good generalization to the target data.
At present, domain-adaptation-based unsupervised segmentation methods typically use an adversarial neural network to force the hidden variables of different domains to be modality-independent. This strategy introduces a discriminator network and alternately updates the generator and the discriminator until the discriminator cannot identify which modality a hidden variable comes from. However, this approach generally struggles to find a Nash equilibrium during optimization, and the training process is complicated. To speed up training, explicit metrics of distribution difference have been studied; however, general explicit metrics, e.g. those based on summary statistics, have only been studied for classification and are difficult to use in segmentation, because they ignore the spatial correlation of images.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art by providing a domain-adaptation-based unsupervised medical image segmentation method.
The purpose of the invention can be realized by the following technical scheme:
a domain-adaptive unsupervised medical image segmentation method, comprising the steps of:
s1: acquiring labeled imaging data of a different modality that shares the same anatomical structure as the target data, to serve as source data;
s2: respectively introducing hidden variables into source data and target data, and constructing two variational self-encoders with the same structure, namely a source variational self-encoder for source data image segmentation and a target variational self-encoder for target data image segmentation;
s3: respectively obtaining the loss functions of the two variational self-encoders;
s4: respectively estimating the probability distributions of the hidden variables of the two variational self-encoders, and calculating the difference between these distributions;
s5: synthesizing the loss functions of the two variational self-encoders and the difference of the hidden variable probability distribution to obtain an overall loss function, and optimizing the two variational self-encoders by using the overall loss function;
s6: segmenting the target data with the optimized target variational self-encoder to obtain the segmentation result.
The variational self-encoder comprises an encoder, a decoder and a prediction module, wherein the encoder is used for inputting an original image and outputting a hidden variable, the prediction module is used for inputting the hidden variable and outputting a segmentation result, and the decoder is used for simultaneously inputting the hidden variable and the segmentation result and outputting a reconstruction result of the original image.
The loss function of the source variational self-encoder is:

$$\mathrm{Loss1} = -LB_{VAE}(\theta_S,\phi_S),$$

$$LB_{VAE}(\theta_S,\phi_S) = -D_{KL}\!\left(q_{\phi_S}(z_S\mid x_S)\,\|\,p(z_S)\right) + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\!\left[\log p_{\theta_S}(x_S\mid y_S,z_S)\right] + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\!\left[\log p_{\theta_S}(y_S\mid z_S)\right],$$

wherein $\mathrm{Loss1}$ is the loss of the source variational self-encoder; $LB_{VAE}(\theta_S,\phi_S)$ is the objective function of the source variational self-encoder, specifically the variational lower bound of the likelihood function of the source data; $\theta_S$ and $\phi_S$ are the parameters of the source variational self-encoder to be learned; $x_S$ is the source data, $y_S$ is the label of the source data, and $z_S$ is the hidden variable of the source data; $D_{KL}$ denotes the Kullback–Leibler divergence; $q_{\phi_S}(z_S\mid x_S)$ denotes the approximate conditional probability distribution of $z_S$ given the source data $x_S$, with $z_S$ obeying the Gaussian distribution $N(U_S,\Sigma_S)$, where $U_S=(u_{S1},u_{S2},\dots,u_{Sn})$ holds the means of the sub-variables of $z_S$, $\Sigma_S=\mathrm{diag}(\lambda_{S1},\lambda_{S2},\dots,\lambda_{Sn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of $z_S$; $p(z_S)$ denotes the true probability distribution of $z_S$; $p_{\theta_S}(x_S\mid y_S,z_S)$ is the conditional probability distribution of $x_S$ given $y_S$ and $z_S$; $p_{\theta_S}(y_S\mid z_S)$ is the conditional probability distribution of $y_S$ given $z_S$; and $\mathbb{E}$ denotes the expectation over the corresponding variable.
The loss function of the target variational self-encoder is:

$$\mathrm{Loss2} = -LB_{VAE}(\theta_T,\phi_T),$$

$$LB_{VAE}(\theta_T,\phi_T) = -D_{KL}\!\left(q_{\phi_T}(z_T\mid x_T)\,\|\,p(z_T)\right) + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\!\left[\log p_{\theta_T}(x_T\mid \hat{y}_T,z_T)\right] + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\!\left[\log p_{\theta_T}(\hat{y}_T\mid z_T)\right],$$

wherein $\mathrm{Loss2}$ is the loss of the target variational self-encoder; $LB_{VAE}(\theta_T,\phi_T)$ is the objective function of the target variational self-encoder, specifically the variational lower bound of the likelihood function of the target data; $\theta_T$ and $\phi_T$ are the parameters of the target variational self-encoder to be learned; $x_T$ is the target data, $\hat{y}_T$ is the predicted label of the target data, and $z_T$ is the hidden variable of the target data; $D_{KL}$ denotes the Kullback–Leibler divergence; $q_{\phi_T}(z_T\mid x_T)$ denotes the approximate conditional probability distribution of $z_T$ given the target data $x_T$, with $z_T$ obeying the Gaussian distribution $N(U_T,\Sigma_T)$, where $U_T=(u_{T1},u_{T2},\dots,u_{Tn})$ holds the means of the sub-variables of $z_T$, $\Sigma_T=\mathrm{diag}(\lambda_{T1},\lambda_{T2},\dots,\lambda_{Tn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of $z_T$; $p(z_T)$ denotes the true probability distribution of $z_T$; $p_{\theta_T}(x_T\mid \hat{y}_T,z_T)$ is the conditional probability distribution of $x_T$ given $\hat{y}_T$ and $z_T$; $p_{\theta_T}(\hat{y}_T\mid z_T)$ is the conditional probability distribution of $\hat{y}_T$ given $z_T$; and $\mathbb{E}$ denotes the expectation over the corresponding variable.
Step S4 estimates the probability distributions of the hidden variables of the two variational self-encoders using Monte Carlo sampling.
The difference between the hidden-variable probability distributions of the two variational self-encoders is:

$$D(z_S,z_T)=\frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[G_{SS}^{(i,j)}+G_{TT}^{(i,j)}-2\,G_{ST}^{(i,j)}\right],$$

$$G_{AB}^{(i,j)}=\prod_{l=1}^{n}\frac{1}{\sqrt{2\pi\big(\lambda_{Al}^{(i)}+\lambda_{Bl}^{(j)}\big)}}\exp\!\left(-\frac{\big(u_{Al}^{(i)}-u_{Bl}^{(j)}\big)^2}{2\big(\lambda_{Al}^{(i)}+\lambda_{Bl}^{(j)}\big)}\right),\qquad A,B\in\{S,T\},$$

wherein $D(z_S,z_T)$ represents the difference between the hidden-variable probability distributions of the two variational self-encoders; $z_S$ is the hidden variable of the source data and $z_T$ the hidden variable of the target data; $M$ is the sample size of the Monte Carlo sampling; $x_S^{(i)}$ and $x_S^{(j)}$ denote the $i$-th and $j$-th sampled source data, and $x_T^{(i)}$ and $x_T^{(j)}$ the $i$-th and $j$-th sampled target data; $u_{Sl}^{(i)}$ denotes the $l$-th element of the mean vector $U_S$ of the conditional distribution of $z_S$ corresponding to $x_S^{(i)}$, and $\lambda_{Sl}^{(i)}$ the variance of its $l$-th element; $u_{Tl}^{(j)}$ and $\lambda_{Tl}^{(j)}$ are defined analogously for the target data. This quantity is the closed-form squared $L_2$ distance between the two Gaussian-mixture estimates of the hidden-variable distributions obtained by Monte Carlo sampling.
The overall loss function of step S5 is specifically:

$$\mathrm{FullLoss}=\alpha_1\,\mathrm{Loss1}+\alpha_2\,\mathrm{Loss2}+\alpha_3\,D(z_S,z_T),$$

wherein FullLoss is the total loss, Loss1 is the loss of the source variational self-encoder, Loss2 is the loss of the target variational self-encoder, $D(z_S,z_T)$ is the difference between the hidden-variable probability distributions, $z_S$ and $z_T$ are the hidden variables of the source and target data, and $\alpha_1$, $\alpha_2$ and $\alpha_3$ are balance parameters.
Step S5 optimizes the parameters of the two variational self-encoders with the goal of minimizing the overall loss function.
Compared with the prior art, the invention has the following advantages:
(1) by estimating the distributions of the hidden variables and calculating the difference between them, the invention provides an effective explicit measure of distribution difference for the domain-adaptive segmentation problem;
(2) the invention has the advantages of full automation, short calculation time, convenient realization and the like.
Drawings
FIG. 1 is a flow diagram of the domain-adaptation-based unsupervised medical image segmentation method of the present invention;
FIG. 2 is a schematic structural diagram of a variational self-encoder according to the present invention.
In the figure, 1 is an encoder, 2 is a decoder, and 3 is a prediction module.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following description of the embodiments is merely an illustrative example; the present invention is not limited to the applications or uses described, nor to the following embodiments.
Examples
As shown in fig. 1, a domain adaptation-based unsupervised medical image segmentation method includes the following steps:
s1: acquiring labeled imaging data of a different modality that shares the same anatomical structure as the target data, to serve as source data;
s2: respectively introducing hidden variables into source data and target data, and constructing two variational self-encoders with the same structure, namely a source variational self-encoder for source data image segmentation and a target variational self-encoder for target data image segmentation;
s3: respectively obtaining the loss functions of the two variational self-encoders;
s4: respectively estimating the probability distributions of the hidden variables of the two variational self-encoders, and calculating the difference between these distributions;
s5: synthesizing the loss functions of the two variational self-encoders and the difference of the hidden variable probability distribution to obtain an overall loss function, and optimizing the two variational self-encoders by using the overall loss function;
s6: segmenting the target data with the optimized target variational self-encoder to obtain the segmentation result.
As shown in fig. 2, the variational self-encoder includes an encoder 1, a decoder 2 and a prediction module 3: the encoder 1 takes the original image as input and outputs a hidden variable; the prediction module 3 takes the hidden variable as input and outputs a segmentation result; and the decoder 2 takes the hidden variable and the segmentation result as simultaneous inputs and outputs a reconstruction of the original image. For the source and target variational self-encoders, the original image is the source-data image and the target-data image, respectively.
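As a rough illustration of how the three modules of fig. 2 fit together, the following Python sketch wires a toy linear encoder, prediction module and decoder with the reparameterization trick. All layer shapes, the linear layers themselves and the flattened-image input are illustrative assumptions, not the network of the invention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration only).
IMG, LATENT, CLASSES = 64, 8, 4

W_enc_mu = rng.normal(0, 0.1, (LATENT, IMG))          # encoder 1: image -> mean of z
W_enc_lv = rng.normal(0, 0.1, (LATENT, IMG))          # encoder 1: image -> log-variance of z
W_pred   = rng.normal(0, 0.1, (CLASSES, LATENT))      # prediction module 3: z -> segmentation logits
W_dec    = rng.normal(0, 0.1, (IMG, LATENT + CLASSES))  # decoder 2: (z, seg) -> reconstruction

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def forward(x):
    # Encoder outputs the parameters of N(mu, diag(exp(logvar))).
    mu, logvar = W_enc_mu @ x, W_enc_lv @ x
    # Reparameterization trick: sample z differentiably.
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT)
    # Prediction module turns z into a segmentation distribution.
    seg = softmax(W_pred @ z)
    # Decoder reconstructs the image from z AND the segmentation, as in fig. 2.
    recon = W_dec @ np.concatenate([z, seg])
    return mu, logvar, z, seg, recon

x = rng.normal(size=IMG)  # stand-in for a flattened image
mu, logvar, z, seg, recon = forward(x)
print(seg.shape, recon.shape)
```

The same `forward` structure would be instantiated twice, once per domain, with shared architecture as step S2 requires.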
The loss function of the source variational self-encoder is:

$$\mathrm{Loss1} = -LB_{VAE}(\theta_S,\phi_S),$$

$$LB_{VAE}(\theta_S,\phi_S) = -D_{KL}\!\left(q_{\phi_S}(z_S\mid x_S)\,\|\,p(z_S)\right) + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\!\left[\log p_{\theta_S}(x_S\mid y_S,z_S)\right] + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\!\left[\log p_{\theta_S}(y_S\mid z_S)\right],$$

wherein $\mathrm{Loss1}$ is the loss of the source variational self-encoder; $LB_{VAE}(\theta_S,\phi_S)$ is the objective function of the source variational self-encoder, specifically the variational lower bound of the likelihood function of the source data; $\theta_S$ and $\phi_S$ are the parameters of the source variational self-encoder to be learned; $x_S$ is the source data, $y_S$ is the label of the source data, and $z_S$ is the hidden variable of the source data; $D_{KL}$ denotes the Kullback–Leibler divergence; $q_{\phi_S}(z_S\mid x_S)$ denotes the approximate conditional probability distribution of $z_S$ given the source data $x_S$, with $z_S$ obeying the Gaussian distribution $N(U_S,\Sigma_S)$, where $U_S=(u_{S1},u_{S2},\dots,u_{Sn})$ holds the means of the sub-variables of $z_S$, $\Sigma_S=\mathrm{diag}(\lambda_{S1},\lambda_{S2},\dots,\lambda_{Sn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of $z_S$; $p(z_S)$ denotes the true probability distribution of $z_S$; $p_{\theta_S}(x_S\mid y_S,z_S)$ is the conditional probability distribution of $x_S$ given $y_S$ and $z_S$; $p_{\theta_S}(y_S\mid z_S)$ is the conditional probability distribution of $y_S$ given $z_S$; and $\mathbb{E}$ denotes the expectation over the corresponding variable.
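The loss of the source variational self-encoder combines a KL term with two expected log-likelihood terms. For a diagonal Gaussian posterior the KL term has a closed form when the prior is standard normal; that prior choice, and the use of precomputed single-sample estimates for the two expectation terms, are assumptions of this sketch:

```python
import numpy as np

def kl_diag_gauss_to_std_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    # (standard-normal prior is an assumption of this sketch).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def source_vae_loss(mu, logvar, log_px, log_py):
    # Loss1 = -LB_VAE = KL term - E[log p(x|y,z)] - E[log p(y|z)].
    # log_px and log_py stand in for single-sample Monte Carlo
    # estimates of the two expectation terms.
    return kl_diag_gauss_to_std_normal(mu, logvar) - log_px - log_py

mu, logvar = np.zeros(8), np.zeros(8)
print(source_vae_loss(mu, logvar, log_px=-1.0, log_py=-0.5))  # KL is 0 here -> 1.5
```

The target-side loss has the same shape with the predicted label in place of the ground-truth label.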
The loss function of the target variational self-encoder is:

$$\mathrm{Loss2} = -LB_{VAE}(\theta_T,\phi_T),$$

$$LB_{VAE}(\theta_T,\phi_T) = -D_{KL}\!\left(q_{\phi_T}(z_T\mid x_T)\,\|\,p(z_T)\right) + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\!\left[\log p_{\theta_T}(x_T\mid \hat{y}_T,z_T)\right] + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\!\left[\log p_{\theta_T}(\hat{y}_T\mid z_T)\right],$$

wherein $\mathrm{Loss2}$ is the loss of the target variational self-encoder; $LB_{VAE}(\theta_T,\phi_T)$ is the objective function of the target variational self-encoder, specifically the variational lower bound of the likelihood function of the target data; $\theta_T$ and $\phi_T$ are the parameters of the target variational self-encoder to be learned; $x_T$ is the target data, $\hat{y}_T$ is the predicted label of the target data, and $z_T$ is the hidden variable of the target data; $D_{KL}$ denotes the Kullback–Leibler divergence; $q_{\phi_T}(z_T\mid x_T)$ denotes the approximate conditional probability distribution of $z_T$ given the target data $x_T$, with $z_T$ obeying the Gaussian distribution $N(U_T,\Sigma_T)$, where $U_T=(u_{T1},u_{T2},\dots,u_{Tn})$ holds the means of the sub-variables of $z_T$, $\Sigma_T=\mathrm{diag}(\lambda_{T1},\lambda_{T2},\dots,\lambda_{Tn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of $z_T$; $p(z_T)$ denotes the true probability distribution of $z_T$; $p_{\theta_T}(x_T\mid \hat{y}_T,z_T)$ is the conditional probability distribution of $x_T$ given $\hat{y}_T$ and $z_T$; $p_{\theta_T}(\hat{y}_T\mid z_T)$ is the conditional probability distribution of $\hat{y}_T$ given $z_T$; and $\mathbb{E}$ denotes the expectation over the corresponding variable.
Step S4 estimates the probability distributions of the hidden variables of the two variational self-encoders using Monte Carlo sampling.
The difference between the hidden-variable probability distributions of the two variational self-encoders is:

$$D(z_S,z_T)=\frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[G_{SS}^{(i,j)}+G_{TT}^{(i,j)}-2\,G_{ST}^{(i,j)}\right],$$

$$G_{AB}^{(i,j)}=\prod_{l=1}^{n}\frac{1}{\sqrt{2\pi\big(\lambda_{Al}^{(i)}+\lambda_{Bl}^{(j)}\big)}}\exp\!\left(-\frac{\big(u_{Al}^{(i)}-u_{Bl}^{(j)}\big)^2}{2\big(\lambda_{Al}^{(i)}+\lambda_{Bl}^{(j)}\big)}\right),\qquad A,B\in\{S,T\},$$

wherein $D(z_S,z_T)$ represents the difference between the hidden-variable probability distributions of the two variational self-encoders; $z_S$ is the hidden variable of the source data and $z_T$ the hidden variable of the target data; $M$ is the sample size of the Monte Carlo sampling; $x_S^{(i)}$ and $x_S^{(j)}$ denote the $i$-th and $j$-th sampled source data, and $x_T^{(i)}$ and $x_T^{(j)}$ the $i$-th and $j$-th sampled target data; $u_{Sl}^{(i)}$ denotes the $l$-th element of the mean vector $U_S$ of the conditional distribution of $z_S$ corresponding to $x_S^{(i)}$, and $\lambda_{Sl}^{(i)}$ the variance of its $l$-th element; $u_{Tl}^{(j)}$ and $\lambda_{Tl}^{(j)}$ are defined analogously for the target data. This quantity is the closed-form squared $L_2$ distance between the two Gaussian-mixture estimates of the hidden-variable distributions obtained by Monte Carlo sampling.
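One way to compute such a pairwise distribution difference is the closed-form squared L2 distance between the two Gaussian-mixture estimates, where each pairwise term is a product of one-dimensional Gaussian overlap integrals over the diagonal covariance. The following sketch implements that reading; the exact pairwise kernel is an assumption of this sketch:

```python
import numpy as np

def gauss_overlap(mu1, var1, mu2, var2):
    # Closed form of  ∫ N(z; mu1, diag(var1)) N(z; mu2, diag(var2)) dz
    # for diagonal covariances: prod_l N(mu1_l - mu2_l; 0, var1_l + var2_l).
    v = var1 + var2
    return np.prod(np.exp(-(mu1 - mu2) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v))

def mixture_l2_distance(mus_s, vars_s, mus_t, vars_t):
    # Squared L2 distance between the Gaussian-mixture estimates
    # q_S = (1/M) sum_i N(U_S^(i), Sigma_S^(i)) and the analogous q_T.
    M = len(mus_s)
    d = 0.0
    for i in range(M):
        for j in range(M):
            d += gauss_overlap(mus_s[i], vars_s[i], mus_s[j], vars_s[j])  # G_SS
            d += gauss_overlap(mus_t[i], vars_t[i], mus_t[j], vars_t[j])  # G_TT
            d -= 2 * gauss_overlap(mus_s[i], vars_s[i], mus_t[j], vars_t[j])  # G_ST
    return d / M**2

rng = np.random.default_rng(1)
mus = rng.normal(size=(3, 4))
vs = np.ones((3, 4))
print(mixture_l2_distance(mus, vs, mus, vs))  # identical mixtures -> 0
```

Identical source and target mixtures give a distance of zero, which makes this a convenient sanity check while training.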
The overall loss function of step S5 is specifically:

$$\mathrm{FullLoss}=\alpha_1\,\mathrm{Loss1}+\alpha_2\,\mathrm{Loss2}+\alpha_3\,D(z_S,z_T),$$

wherein FullLoss is the total loss, Loss1 is the loss of the source variational self-encoder, Loss2 is the loss of the target variational self-encoder, $D(z_S,z_T)$ is the difference between the hidden-variable probability distributions, $z_S$ and $z_T$ are the hidden variables of the source and target data, and $\alpha_1$, $\alpha_2$ and $\alpha_3$ are balance parameters.
Step S5 optimizes the parameters of the two variational self-encoders with the goal of minimizing the overall loss function.
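The overall objective of step S5 is a weighted sum of the three terms; the balance-parameter values in this sketch are illustrative assumptions, not values taken from the invention:

```python
def full_loss(loss1, loss2, dist, alpha1=1.0, alpha2=1.0, alpha3=10.0):
    # FullLoss = a1*Loss1 + a2*Loss2 + a3*D(z_S, z_T).
    # Both variational self-encoders would be optimized jointly by
    # gradient descent on this scalar (step S5); afterwards only the
    # target self-encoder is used for segmentation (step S6).
    return alpha1 * loss1 + alpha2 * loss2 + alpha3 * dist

print(full_loss(0.8, 1.2, 0.05))
```

In practice the distance term often needs a larger weight than the two reconstruction losses, hence the illustrative choice `alpha3 > alpha1, alpha2` here.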
The above embodiments are merely examples and do not limit the scope of the present invention. These embodiments may be implemented in other various manners, and various omissions, substitutions, and changes may be made without departing from the technical spirit of the present invention.
Claims (8)
1. A domain-adaptive unsupervised medical image segmentation method, characterized in that the method comprises the following steps:
s1: acquiring labeled imaging data of a different modality that shares the same anatomical structure as the target data, to serve as source data;
s2: respectively introducing hidden variables into source data and target data, and constructing two variational self-encoders with the same structure, namely a source variational self-encoder for source data image segmentation and a target variational self-encoder for target data image segmentation;
s3: respectively obtaining the loss functions of the two variational self-encoders;
s4: respectively estimating the probability distributions of the hidden variables of the two variational self-encoders, and calculating the difference between these distributions;
s5: synthesizing the loss functions of the two variational self-encoders and the difference of the hidden variable probability distribution to obtain an overall loss function, and optimizing the two variational self-encoders by using the overall loss function;
s6: segmenting the target data with the optimized target variational self-encoder to obtain the segmentation result.
2. The method of claim 1, wherein the variational auto-encoder comprises an encoder, a decoder and a prediction module, the encoder is used for inputting an original image and outputting a hidden variable, the prediction module is used for inputting the hidden variable and outputting a segmentation result, and the decoder is used for simultaneously inputting the hidden variable and the segmentation result and outputting a reconstruction result of the original image.
3. The unsupervised medical image segmentation method based on domain adaptation as claimed in claim 1, wherein the loss function of the source variational self-encoder is:

$$\mathrm{Loss1} = -LB_{VAE}(\theta_S,\phi_S),$$

$$LB_{VAE}(\theta_S,\phi_S) = -D_{KL}\!\left(q_{\phi_S}(z_S\mid x_S)\,\|\,p(z_S)\right) + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\!\left[\log p_{\theta_S}(x_S\mid y_S,z_S)\right] + \mathbb{E}_{q_{\phi_S}(z_S\mid x_S)}\!\left[\log p_{\theta_S}(y_S\mid z_S)\right],$$

wherein $\mathrm{Loss1}$ is the loss of the source variational self-encoder; $LB_{VAE}(\theta_S,\phi_S)$ is the objective function of the source variational self-encoder, specifically the variational lower bound of the likelihood function of the source data; $\theta_S$ and $\phi_S$ are the parameters of the source variational self-encoder to be learned; $x_S$ is the source data, $y_S$ is the label of the source data, and $z_S$ is the hidden variable of the source data; $D_{KL}$ denotes the Kullback–Leibler divergence; $q_{\phi_S}(z_S\mid x_S)$ denotes the approximate conditional probability distribution of $z_S$ given the source data $x_S$, with $z_S$ obeying the Gaussian distribution $N(U_S,\Sigma_S)$, where $U_S=(u_{S1},u_{S2},\dots,u_{Sn})$ holds the means of the sub-variables of $z_S$, $\Sigma_S=\mathrm{diag}(\lambda_{S1},\lambda_{S2},\dots,\lambda_{Sn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of $z_S$; $p(z_S)$ denotes the true probability distribution of $z_S$; $p_{\theta_S}(x_S\mid y_S,z_S)$ is the conditional probability distribution of $x_S$ given $y_S$ and $z_S$; $p_{\theta_S}(y_S\mid z_S)$ is the conditional probability distribution of $y_S$ given $z_S$; and $\mathbb{E}$ denotes the expectation over the corresponding variable.
4. The unsupervised medical image segmentation method based on domain adaptation as claimed in claim 1, wherein the loss function of the target variational self-encoder is:

$$\mathrm{Loss2} = -LB_{VAE}(\theta_T,\phi_T),$$

$$LB_{VAE}(\theta_T,\phi_T) = -D_{KL}\!\left(q_{\phi_T}(z_T\mid x_T)\,\|\,p(z_T)\right) + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\!\left[\log p_{\theta_T}(x_T\mid \hat{y}_T,z_T)\right] + \mathbb{E}_{q_{\phi_T}(z_T\mid x_T)}\!\left[\log p_{\theta_T}(\hat{y}_T\mid z_T)\right],$$

wherein $\mathrm{Loss2}$ is the loss of the target variational self-encoder; $LB_{VAE}(\theta_T,\phi_T)$ is the objective function of the target variational self-encoder, specifically the variational lower bound of the likelihood function of the target data; $\theta_T$ and $\phi_T$ are the parameters of the target variational self-encoder to be learned; $x_T$ is the target data, $\hat{y}_T$ is the predicted label of the target data, and $z_T$ is the hidden variable of the target data; $D_{KL}$ denotes the Kullback–Leibler divergence; $q_{\phi_T}(z_T\mid x_T)$ denotes the approximate conditional probability distribution of $z_T$ given the target data $x_T$, with $z_T$ obeying the Gaussian distribution $N(U_T,\Sigma_T)$, where $U_T=(u_{T1},u_{T2},\dots,u_{Tn})$ holds the means of the sub-variables of $z_T$, $\Sigma_T=\mathrm{diag}(\lambda_{T1},\lambda_{T2},\dots,\lambda_{Tn})$ is the corresponding diagonal covariance matrix, and $n$ is the dimension of $z_T$; $p(z_T)$ denotes the true probability distribution of $z_T$; $p_{\theta_T}(x_T\mid \hat{y}_T,z_T)$ is the conditional probability distribution of $x_T$ given $\hat{y}_T$ and $z_T$; $p_{\theta_T}(\hat{y}_T\mid z_T)$ is the conditional probability distribution of $\hat{y}_T$ given $z_T$; and $\mathbb{E}$ denotes the expectation over the corresponding variable.
5. The method of claim 1, wherein step S4 adopts Monte Carlo sampling to estimate the probability distributions of the hidden variables of the two variational self-encoders.
6. The unsupervised medical image segmentation method based on domain adaptation as claimed in claim 5, wherein the difference between the hidden-variable probability distributions of the two variational self-encoders is:

$$D(z_S,z_T)=\frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[G_{SS}^{(i,j)}+G_{TT}^{(i,j)}-2\,G_{ST}^{(i,j)}\right],$$

$$G_{AB}^{(i,j)}=\prod_{l=1}^{n}\frac{1}{\sqrt{2\pi\big(\lambda_{Al}^{(i)}+\lambda_{Bl}^{(j)}\big)}}\exp\!\left(-\frac{\big(u_{Al}^{(i)}-u_{Bl}^{(j)}\big)^2}{2\big(\lambda_{Al}^{(i)}+\lambda_{Bl}^{(j)}\big)}\right),\qquad A,B\in\{S,T\},$$

wherein $D(z_S,z_T)$ represents the difference between the hidden-variable probability distributions of the two variational self-encoders; $z_S$ is the hidden variable of the source data and $z_T$ the hidden variable of the target data; $M$ is the sample size of the Monte Carlo sampling; $x_S^{(i)}$ and $x_S^{(j)}$ denote the $i$-th and $j$-th sampled source data, and $x_T^{(i)}$ and $x_T^{(j)}$ the $i$-th and $j$-th sampled target data; $u_{Sl}^{(i)}$ denotes the $l$-th element of the mean vector $U_S$ of the conditional distribution of $z_S$ corresponding to $x_S^{(i)}$, and $\lambda_{Sl}^{(i)}$ the variance of its $l$-th element; $u_{Tl}^{(j)}$ and $\lambda_{Tl}^{(j)}$ are defined analogously for the target data.
7. The domain-adaptation-based unsupervised medical image segmentation method of claim 1, wherein the overall loss function of step S5 is specifically:

$$\mathrm{FullLoss}=\alpha_1\,\mathrm{Loss1}+\alpha_2\,\mathrm{Loss2}+\alpha_3\,D(z_S,z_T),$$

wherein FullLoss is the total loss, Loss1 is the loss of the source variational self-encoder, Loss2 is the loss of the target variational self-encoder, $D(z_S,z_T)$ is the difference between the hidden-variable probability distributions, $z_S$ and $z_T$ are the hidden variables of the source and target data, and $\alpha_1$, $\alpha_2$ and $\alpha_3$ are balance parameters.
8. The method of claim 1, wherein, when optimizing the two variational self-encoders, step S5 optimizes their parameters with minimization of the overall loss function as the objective.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911401973.5A CN111161249B (en) | 2019-12-31 | 2019-12-31 | Unsupervised medical image segmentation method based on domain adaptation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911401973.5A CN111161249B (en) | 2019-12-31 | 2019-12-31 | Unsupervised medical image segmentation method based on domain adaptation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111161249A true CN111161249A (en) | 2020-05-15 |
CN111161249B CN111161249B (en) | 2023-06-02 |
Family
ID=70559345
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911401973.5A Active CN111161249B (en) | 2019-12-31 | 2019-12-31 | Unsupervised medical image segmentation method based on domain adaptation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161249B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111667426A (en) * | 2020-06-04 | 2020-09-15 | 四川轻化工大学 | Medical image enhancement method based on frequency domain variation |
CN112417219A (en) * | 2020-11-16 | 2021-02-26 | 吉林大学 | Hyper-graph convolution-based hyper-edge link prediction method |
CN113160138A (en) * | 2021-03-24 | 2021-07-23 | 山西大学 | Brain nuclear magnetic resonance image segmentation method and system |
WO2022078568A1 (en) * | 2020-10-12 | 2022-04-21 | Haag-Streit Ag | A method for providing a perimetry processing tool and a perimetry device with such a tool |
WO2023065070A1 (en) * | 2021-10-18 | 2023-04-27 | 中国科学院深圳先进技术研究院 | Multi-domain medical image segmentation method based on domain adaptation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109685087A (en) * | 2017-10-18 | 2019-04-26 | 富士通株式会社 | Information processing method and device and information detecting method and device |
CN110020623A (en) * | 2019-04-04 | 2019-07-16 | 中山大学 | Physical activity identifying system and method based on condition variation self-encoding encoder |
JP2019139482A (en) * | 2018-02-09 | 2019-08-22 | 株式会社デンソーアイティーラボラトリ | Information estimation device and information estimation method |
- 2019-12-31: application CN201911401973.5A filed in China; granted as patent CN111161249B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109685087A (en) * | 2017-10-18 | 2019-04-26 | Fujitsu Ltd. | Information processing method and device and information detection method and device |
JP2019139482A (en) * | 2018-02-09 | 2019-08-22 | Denso IT Laboratory, Inc. | Information estimation device and information estimation method |
CN110020623A (en) * | 2019-04-04 | 2019-07-16 | Sun Yat-sen University | Physical activity recognition system and method based on a conditional variational autoencoder |
Non-Patent Citations (1)
Title |
---|
Zhi Enwei; Yan Fei; Ren Mifeng; Yan Gaowei: "Soft sensing of wet ball mill load parameters based on a transfer variational autoencoder with label mapping", CIESC Journal (化工学报) *
Also Published As
Publication number | Publication date |
---|---|
CN111161249B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111161249A (en) | Unsupervised medical image segmentation method based on domain adaptation | |
CN108062753B (en) | Unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning |
CN106547880B (en) | Multi-dimensional geographic scene identification method fusing geographic area knowledge | |
KR101908680B1 (en) | A method and apparatus for machine learning based on weakly supervised learning | |
CN111127364B (en) | Image data enhancement strategy selection method and face recognition image data enhancement method | |
CN111052128B (en) | Descriptor learning method for detecting and locating objects in video | |
AU2020102667A4 (en) | Adversarial training for large scale healthcare data using machine learning system | |
Liu et al. | Generative self-training for cross-domain unsupervised tagged-to-cine mri synthesis | |
Lee et al. | Localization uncertainty estimation for anchor-free object detection | |
CN109447096B (en) | Saccade path prediction method and device based on machine learning |
CN114549470B (en) | Hand bone critical area acquisition method based on convolutional neural network and multi-granularity attention | |
US20230140696A1 (en) | Method and system for optimizing parameter intervals of manufacturing processes based on prediction intervals | |
Franchi et al. | Latent discriminant deterministic uncertainty | |
Kalash et al. | Relative saliency and ranking: Models, metrics, data and benchmarks | |
Yu et al. | A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans | |
CN114266896A (en) | Image labeling method, model training method and device, electronic equipment and medium | |
Wang et al. | MetaMorph: learning metamorphic image transformation with appearance changes | |
CN111582449A (en) | Training method, device, equipment and storage medium for target domain detection network | |
Akrami et al. | Quantile regression for uncertainty estimation in vaes with applications to brain lesion detection | |
Kamraoui et al. | Popcorn: Progressive pseudo-labeling with consistency regularization and neighboring | |
JP6927161B2 (en) | Learning devices, predictors, methods, and programs | |
Xiong et al. | On training deep 3d cnn models with dependent samples in neuroimaging | |
Chen et al. | A unified framework for generative data augmentation: A comprehensive survey | |
CN112733849A (en) | Model training method, image rotation angle correction method and device | |
Landman et al. | Multiatlas segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||