WO2021114130A1 - An unsupervised adaptive breast lesion segmentation method (一种无监督自适应乳腺病变分割方法) - Google Patents

An unsupervised adaptive breast lesion segmentation method

Info

Publication number
WO2021114130A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, network, domain, target domain, domain image
Application number
PCT/CN2019/124506
Other languages
English (en)
French (fr)
Inventors
李程, 王珊珊, 肖韬辉, 郑海荣, 刘新, 梁栋
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Priority to PCT/CN2019/124506
Publication of WO2021114130A1

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing arrangements based on specific computational models
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06T: Image data processing or generation, in general
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection

Definitions

  • The invention relates to the field of image processing, and in particular to an unsupervised adaptive breast lesion segmentation method.
  • Magnetic Resonance Imaging (MRI), as a multi-parameter, multi-contrast imaging technique, can reflect various tissue characteristics such as T1, T2 and proton density. It has the advantages of high resolution and sensitivity and has become an important tool for the early screening of breast cancer.
  • Image segmentation of the lesion area from other areas is the prerequisite for subsequent in-depth analysis.
  • Existing image segmentation techniques mostly use supervised deep learning: the lesion and healthy areas of known samples are labeled, a training model or network is obtained from those samples, and the trained network is then used to judge target images.
  • When the image acquisition systems or parameter settings used for two data sets are inconsistent, the resulting difference in data distribution makes it difficult for a deep learning segmentation network trained on one data set to obtain good segmentation results on the other.
  • The magnetic resonance scanning systems or imaging sequences used by different centers may be inconsistent, resulting in differences in the distribution of the collected data. This difference means that an already trained MRI image segmentation model cannot guarantee a stable discrimination effect under other systems or parameters.
  • One solution is to manually label the imaging sequences obtained by different magnetic resonance scanning systems or under different parameters, that is, to perform retraining and supervised learning for each data set under the new conditions to ensure the segmentation effect on each data set.
  • The disadvantage of this method is that labeling for image segmentation is very time-consuming, and labeling medical images requires strong professional knowledge and experience; low-cost batch manual labeling is therefore impossible, and the labeling standards are difficult to control and unify.
  • Another solution is to fine-tune the parameters of the trained segmentation network model for the new target data set, but this requires the participation of algorithm designers, the fine-tuning still requires the cooperation of medical expertise, and it does not achieve unsupervised application of the trained model to other data sets.
  • In view of this, the present invention proposes an unsupervised domain-adaptive image segmentation method: the target domain image is converted so that its semantic information is retained while the shallow information of the image is reconstructed in the style of the source domain image; the segmentation model established on the source domain images is then applied to the converted and reconstructed target domain image. In this way, the model is migrated across data sets in different domains without any new data annotation.
  • According to one aspect, there is provided a network method for transforming image styles between domains based on image reconstruction and image domain discrimination.
  • S101 Obtain a source domain image in a source domain image set, where the image in the source domain image set contains a marked feature area;
  • Taking the source domain image and a target domain image in a target domain image set as input, an image domain discrimination network Ncl for discriminating the domain of an image is established through a training function.
  • S102 Taking the target domain image as input, perform image reconstruction on the target domain image to obtain the reconstruction network Nre after learning.
  • S104 Repeat step S103 and continuously optimize the reconstruction network Nre until the set condition is met; the optimized reconstruction network is the conversion network Ntr.
  • The set condition is that the loss data of the image domain discrimination network Ncl is less than a preset value.
  • In this way, an image P in the target domain can be converted into an image P′ that retains the image information but has the style of the source domain.
  • S201 Obtain a source domain image in a source domain image set, where the image in the source domain image set contains a marked feature area;
  • Taking the source domain image and a target domain image in a target domain image set as input, an image domain discrimination network Ncl for discriminating the domain of an image is established through a training function.
  • The reconstructed image information includes the shallow information M1 and the deep semantic information M2 corresponding to the image; the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information M2.
  • S204 Repeat step S203 and continuously optimize the reconstruction network Nre until the set condition is met; the optimized reconstruction network is the conversion network Ntr.
  • a cross-entropy loss function is used as the loss function for training, and the image domain discrimination network Ncl is a residual network.
  • The loss function of the reconstruction network in S202 is the L2 loss function.
  • The reconstruction network in S202 can adopt an encoder-decoder structure.
  • the loss data of the image domain discrimination network Ncl in S203 adopts cross-entropy loss
  • The set condition in S204 is that the loss data of the image domain discrimination network Ncl is less than a preset value.
  • In this way, an image P in the target domain can be converted into an image P′ that retains the deep semantic information of the image but has the source-domain style in its shallow features.
  • an unsupervised adaptive image segmentation method is provided.
  • S301 Obtain a source domain image in a source domain image set, where the image in the source domain image set contains a marked feature area;
  • Taking the source domain image and a target domain image in a target domain image set as input, an image domain discrimination network Ncl for discriminating the domain of an image is established through a training function.
  • S302 Taking the target domain image as input, perform image reconstruction on the target domain image to obtain the reconstruction network Nre after learning.
  • The reconstructed image information includes the shallow information M1 and the deep semantic information M2 corresponding to the image; the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information M2.
  • S304 Repeat step S303 and continuously optimize the reconstruction network Nre until the set condition is met; the optimized reconstruction network is the conversion network Ntr.
  • S305 Based on the source domain image set and its marked feature regions, train an image segmentation network Nse for feature regions and non-feature regions through machine learning.
  • S306 Convert the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P′ that has the style of the source domain and retains semantic information.
  • S307 Use the image segmentation network Nse to perform image segmentation on the converted image P′.
  • a cross-entropy loss function is used as a loss function for training, and the image domain discrimination network Ncl is a residual network.
  • The loss function of the reconstruction network in S302 is the L2 loss function.
  • The reconstruction network in S302 can adopt an encoder-decoder structure.
  • the loss data of the image domain discrimination network Ncl in S303 adopts cross-entropy loss
  • The set condition in S304 is that the loss data of the image domain discrimination network Ncl is less than a preset value;
  • the training of the image segmentation network adopts the UNet algorithm
  • The training of the image segmentation network adopts the UNet algorithm combined with an attention mechanism and/or multi-scale feature expression.
  • In this way, unsupervised adaptation of the image segmentation method from annotated source domain images to unlabeled target domain images is realized, completing the task of unsupervised target domain image segmentation.
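  • The flow of steps S301-S307 can be sketched as a pipeline. This is an illustrative sketch only: all function and parameter names below are hypothetical placeholders, not from the patent, and the caller supplies the actual training routines for each network.

```python
def adapt_and_segment(source_images, source_masks, target_images,
                      train_discriminator, train_reconstruction,
                      optimize_shallow, train_segmentation):
    """Sketch of S301-S307: train Ncl and Nre, distill the conversion
    network Ntr, train the segmentation network Nse on source labels,
    then convert and segment each target-domain image."""
    Ncl = train_discriminator(source_images, target_images)   # S301
    Nre = train_reconstruction(target_images)                 # S302
    Ntr = optimize_shallow(Nre, Ncl, target_images)           # S303-S304
    Nse = train_segmentation(source_images, source_masks)     # S305
    # S306-S307: convert each target image to source style, then segment
    return [Nse(Ntr(p)) for p in target_images]
```

The point of the sketch is the ordering: the conversion network is distilled from the reconstruction network under the discriminator's loss before any target image ever reaches the segmentation network.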
  • an adaptive image segmentation method for breast cancer screening is provided.
  • S401 Obtain a source domain image in a source domain image set, where the images in the source domain image set contain a marked feature area; the source domain image set consists of labeled breast MRI images, and the feature area is the marked lump or cancerous tissue area;
  • Obtain a target domain image in a target domain image set, where the target domain image is an unlabeled breast MRI image and may contain an image portion corresponding to a lump or cancerous tissue area;
  • Taking the source domain image and the target domain image as input, an image domain discrimination network Ncl for discriminating the domain of an image is established through a training function.
  • The reconstructed image information includes the shallow information M1 and the deep semantic information M2 corresponding to the image; the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information M2.
  • S404 Repeat step S403 and continuously optimize the reconstruction network Nre until the set condition is met; the optimized reconstruction network is the conversion network Ntr.
  • S405 Based on the source domain image set and its marked feature regions, train an image segmentation network Nse for feature regions and non-feature regions through machine learning.
  • S407 Perform image segmentation on the converted image P′ using the image segmentation network Nse; the feature areas obtained by segmentation are the image areas of lumps or cancerous tissue suspected in breast cancer screening.
  • a cross-entropy loss function is used as a loss function for training, and the image domain discrimination network Ncl is a residual network.
  • The loss function of the reconstruction network in S402 is the L2 loss function.
  • The reconstruction network in S402 can adopt an encoder-decoder structure.
  • the loss data of the image domain discrimination network Ncl in S403 adopts cross-entropy loss
  • The set condition in S404 is that the loss data of the image domain discrimination network Ncl is less than a preset value;
  • the training of the image segmentation network adopts the UNet algorithm
  • The training of the image segmentation network adopts the UNet algorithm combined with an attention mechanism and/or multi-scale feature expression.
  • In this way, unsupervised adaptation of the breast lesion segmentation method from labeled source domain images to unlabeled target domain images is realized, completing the task of unsupervised target domain image segmentation.
  • a breast cancer screening device based on adaptive image segmentation including:
  • The acquiring unit is configured to acquire a source domain image in a source domain image set, where the images in the source domain image set contain a marked feature area; the source domain image set consists of labeled breast MRI images, and the feature area is the marked lump or cancerous tissue area;
  • It also acquires a target domain image in a target domain image set, where the target domain image is an unlabeled breast MRI image and may contain an image part corresponding to a lump or cancerous tissue area;
  • the image domain discrimination unit is configured to take the source domain image and the target domain image acquired by the acquisition unit as input, and establish an image domain discrimination network Ncl for discriminating the domain of the image through a training function;
  • The image reconstruction unit takes the target domain image as input, performs image reconstruction on the target domain image, and obtains the reconstruction network Nre after learning.
  • The reconstructed image information includes the shallow information and the deep semantic information corresponding to the image; the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow image information and a semantic information module nm2 corresponding to the deep semantic information;
  • The image conversion network optimization unit takes the target domain image as input; the image data output by the reconstruction network Nre is discriminated by the image domain discrimination network Ncl, and the loss data of Ncl is used to optimize and adjust the parameters of the shallow information module nm1 while the parameters of the semantic information module nm2 remain unchanged; this optimization process is repeated until the set condition is met, and the optimized reconstruction network is the conversion network Ntr;
  • The source domain image segmentation network training unit uses the source domain image set and its marked feature regions to train, through machine learning, an image segmentation network Nse that distinguishes feature regions from non-feature regions;
  • The target domain image segmentation unit converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P′ that has the source-domain style and retains semantic information, and uses the image segmentation network Nse to perform image segmentation on the converted image P′; the feature areas obtained by segmentation are the image areas of lumps or cancerous tissue suspected in breast cancer screening.
  • a cross-entropy loss function is used as a loss function for training, and the image domain discrimination network Ncl is a residual network.
  • The set condition is that the loss data of the image domain discrimination network Ncl is less than a preset value;
  • The present invention proposes an unsupervised domain-adaptive breast lesion segmentation method.
  • The new data is forced to approach the distribution of the existing data set, thereby realizing unsupervised domain-adaptive migration of the segmentation network.
  • The breast lesion segmentation network trained on the labeled data set can thus be directly adapted to the new data set and obtain a very good segmentation effect.
  • The present invention thus addresses the prior-art drawback that, for each set of breast magnetic resonance image data collected with specific experimental parameters, doctors must label all or part of the data to obtain a segmentation model adapted to that data set, a process that is time-consuming, labor-intensive and costly. With the aid of one labeled data set, this method realizes unsupervised segmentation of a new data set, which reduces the economic cost of image annotation; the direct optimization and application of the model also saves time.
  • Figure 1 shows a schematic diagram of an image adaptive conversion method between different fields
  • Figure 2 shows a schematic diagram of an adaptive image segmentation method
  • Figure 3 shows the typical structure of the image segmentation model UNet
  • Figure 4 shows a schematic diagram of a breast cancer screening device based on adaptive image segmentation
  • The present invention proposes an image adaptive conversion method between different domains. Based on this method, even if the new data set differs from the labeled data set, there is no need to label the images in the new data set; instead, adaptive learning is performed across the two data sets through image conversion.
  • The unlabeled data set is adaptively converted so that its high-order semantic information is retained while its image style, texture, brightness and other shallow representations are converted into the characteristics of the labeled data set, so that the network model trained on the labeled data set can be applied directly to the new data set.
  • First, a data set containing annotations is needed as the source domain image set.
  • This data set can be regarded as an atlas, and all other data sets later use this data set as a template.
  • The target domain image set, for example an unlabeled image set awaiting analysis, is generally the data set on which analysis such as classification or image segmentation is to be performed.
  • The source domain images and the target domain images should have deep-level features of a similar nature, such as images of the same type of object or similar scenes; it is this similarity of deep-level features that gives the adaptive image conversion method practical meaning.
  • At the same time, the source domain images and the target domain images may present different superficial image characteristics, such as differences in brightness, the amount of noise, different textures, or other non-semantic features.
  • For example, the source domain images may be a series of labeled X-ray, CT or MRI image data, and the target domain images may be corresponding X-ray, CT or MRI image data that were not collected by the same instrument or under the same conditions.
  • The source domain images contain a marked feature area, which may be a lump or cancerous area identified and marked by a professional doctor.
  • The first step is to establish an image domain discrimination network (S301): using source domain images and target domain images as training samples, a discrimination network is built to classify any new test image as either a source domain image or a target domain image.
  • During discrimination, the variance, residual, or other loss needs to be calculated for subsequent adjustments.
  • a residual network is used.
  • The residual network is a convolutional recognition network that is easy to optimize and whose accuracy can be increased by adding considerable depth. Its internal residual blocks use skip connections to alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
  • the calculation of the discrimination error can adopt the cross entropy loss function (CrossEntropy).
  • The cross-entropy loss function is especially suitable for training binary classification models, and it gives the resulting convex optimization problem good convergence when calculating the loss.
  • With classification labels such as 1 for the source domain and 0 for the target domain, the binary model ultimately predicts one of only two cases. If the true label is y and the predicted probability of the source class is ŷ, the cross-entropy loss is: L = -[y·log(ŷ) + (1-y)·log(1-ŷ)].
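  • As a concrete sketch, the binary cross-entropy above can be computed as follows. This is a minimal pure-Python illustration; the `eps` clamp is an implementation detail added here for numerical safety and is not part of the patent.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy for a binary domain label (1 = source, 0 = target)."""
    # Clamp the prediction away from 0 and 1 so log() stays finite.
    y_pred = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(y_pred)
             + (1.0 - y_true) * math.log(1.0 - y_pred))
```

For a source image (label 1), the loss shrinks as the discriminator's predicted source probability approaches 1, which is exactly the signal later used to optimize the shallow module of the reconstruction network.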
  • The second step is to learn image reconstruction of the target domain images (S302): the target domain image itself is used as the input, and the output is the same target domain image.
  • Combined with a continuous self-learning and training process, this step can separate the shallow representation features from the deep semantic features: the deep semantic features, such as the area and edge features of a tumor in a medical image, are preserved, while shallow features such as image style, brightness, texture and noise level can be gradually transformed into the source-domain style through optimization against the aforementioned discrimination network.
  • The image reconstruction network Nre can use the encoder-decoder structure SegNet.
  • SegNet's encoder and decoder structures correspond one to one: each decoder has the same spatial size and number of channels as its corresponding encoder, and up-sampling reuses the location information recorded during pooling instead of a direct deconvolution operation.
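  • The idea of reusing pooling positions instead of a learned deconvolution can be illustrated with a toy 1-D example (the helper names are hypothetical; SegNet itself does this in 2-D with 2x2 windows):

```python
def max_pool_with_indices(x):
    """2-wide max pooling that also records argmax positions,
    mimicking how SegNet stores pooling indices for its decoder."""
    pooled, indices = [], []
    for i in range(0, len(x) - 1, 2):
        j = i if x[i] >= x[i + 1] else i + 1
        pooled.append(x[j])
        indices.append(j)
    return pooled, indices

def max_unpool(pooled, indices, size):
    """SegNet-style unpooling: place each value back at its recorded
    position (zeros elsewhere) instead of learning a deconvolution."""
    out = [0.0] * size
    for v, j in zip(pooled, indices):
        out[j] = v
    return out
```

Because the decoder knows exactly where each maximum came from, spatial detail survives the down/up-sampling round trip without extra learned parameters.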
  • The image reconstruction loss can be the L2 loss, which is also a commonly used loss function in general CNNs because its convergence is faster than that of the L1 loss.
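  • A minimal sketch of the L2 (mean squared) reconstruction loss over flattened pixel values; the function name is illustrative:

```python
def l2_loss(reconstructed, original):
    """Mean squared (L2) reconstruction loss over flattened pixels."""
    assert len(reconstructed) == len(original)
    return sum((r - o) ** 2
               for r, o in zip(reconstructed, original)) / len(original)
```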
  • The third step is to optimize the reconstruction network from the second step to obtain the conversion network (S303). Its purpose is to convert the target domain image to the style of the source domain image and to narrow the gap between the two domains' distributions, so that subsequent operations such as segmentation can be performed on the target domain image.
  • As mentioned above, the image reconstruction network Nre includes a module corresponding to the shallow image information and a module corresponding to the deep information. The shallow image information refers to the previously mentioned image style, brightness, texture and noise level, while the deep information includes features such as the area and edges of a tumor in a medical image.
  • By feeding the target domain image through the reconstruction network, a generated source-domain-style image can be obtained.
  • The image domain discrimination network from the first step (S301) is then used to classify the generated image and judge whether the converted image can be classified as a source domain image. The cross-entropy loss produced by this classification is used as the loss function to modify and optimize the parameters of the shallow information module nm1 in the reconstruction network Nre, while the parameters of the deep semantic information module nm2 remain unchanged.
  • Specifically, the parameters of the first two or three coding modules in the encoder part of the reconstruction network are continuously updated, while the remaining parts stay unchanged.
  • The output of the reconstruction network Nre′ with corrected parameters is repeatedly fed into the image domain discrimination network for correction; by continuously repeating this process, the reconstruction network is optimized so that the reconstructed shallow information of the target domain image approaches the source domain images more and more closely, while the deep semantic information remains unchanged (S304).
  • A typical stopping criterion: when the generated image is classified as a source domain image and the cross-entropy loss is small enough to fall below a certain threshold, the parameters of the reconstruction network Nre can be considered corrected and optimized to an acceptable effect. The network obtained at this point is the image conversion network Ntr.
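  • The optimize-until-threshold loop of S303-S304 can be illustrated with a deliberately tiny numeric toy. Everything here is a hypothetical stand-in, not the patent's networks: a fixed 1-D logistic "discriminator" plays the role of Ncl, a scalar `shift` plays the role of the shallow module nm1, and the deep content (the input value itself) passes through unchanged.

```python
import math

def discriminator(x):
    # Fixed logistic "domain classifier": P(source) for a 1-D feature x;
    # in this toy, source-domain values cluster above 5.
    return 1.0 / (1.0 + math.exp(-(x - 5.0)))

def convert(x, shift):
    # Shallow module stand-in: a simple style shift; content x untouched.
    return x + shift

def optimize_shift(target_value, lr=0.5, threshold=0.05, max_steps=1000):
    shift = 0.0
    loss = float("inf")
    for _ in range(max_steps):
        p = discriminator(convert(target_value, shift))
        loss = -math.log(p)      # cross-entropy against label "source" (1)
        if loss < threshold:     # the preset stopping condition (S304)
            break
        # For a logistic discriminator, d(log p)/d(shift) = 1 - p,
        # so gradient descent on the loss adds lr * (1 - p).
        shift += lr * (1.0 - p)
    return shift, loss
```

Only `shift` is updated while the "deep" content is left alone, mirroring how nm1 is tuned under Ncl's loss with nm2 frozen until the loss drops below the preset value.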
  • Another embodiment of the present invention is an unsupervised domain-adaptive image segmentation method, which can be applied to computer-aided recognition of MRI images of breast lesions. It is mainly divided into: establishing an image domain discrimination network (S401), learning image reconstruction of the target domain image (S402), optimizing the reconstruction network to obtain a conversion network (S403, S404), training an image segmentation network on the labeled source domain images (S405), transforming the target domain image through the conversion network (S406), and segmenting the transformed target image with the image segmentation network (S407).
  • Performing image segmentation on the labeled source domain images is a typical supervised segmentation problem based on labeled images: in practice, a supervised segmentation network is trained, which can be a widely used medical image segmentation network such as UNet.
  • UNet is one of the most widely used models in image segmentation projects since its birth.
  • The encoder (down-sampling)/decoder (up-sampling) structure and skip connections it adopts are a very classic design.
  • The structure of UNet is shown in Figure 3.
  • the left side can be regarded as an encoder, and the right side can be regarded as a decoder.
  • the encoder has four sub-modules, each of which contains two convolutional layers. After each sub-module, there is a down-sampling layer implemented by max pool.
  • The resolution of the input image is 572x572, and the input resolutions of modules 1-5 are 572x572, 284x284, 140x140, 68x68 and 32x32 respectively. Since the convolutions use 'valid' mode, the resolution of each sub-module equals (resolution of the previous sub-module - 4) / 2.
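  • The resolution arithmetic above is easy to check programmatically (the function name is illustrative): each module applies two 3x3 'valid' convolutions, trimming 2 pixels each, followed by 2x2 max pooling.

```python
def unet_valid_resolutions(input_size=572, depth=5):
    """Encoder feature-map sizes for UNet with two 3x3 'valid' convs
    (each trims 2 px per side pair) and 2x2 max pooling per module."""
    sizes = [input_size]
    for _ in range(depth - 1):
        sizes.append((sizes[-1] - 4) // 2)
    return sizes

# unet_valid_resolutions(572, 5) → [572, 284, 140, 68, 32]
```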
  • the decoder contains four sub-modules, and the resolution is sequentially increased through the up-sampling operation until it is consistent with the resolution of the input image (because the convolution uses the valid mode, the actual output is smaller than the input image).
  • the network also uses a skip connection to connect the up-sampling result with the output of the sub-module with the same resolution in the encoder as the input of the next sub-module in the decoder.
  • the network structure of UNet is especially suitable for the segmentation of medical images.
  • Medical images have fuzzy boundaries and complex gradients and therefore require more high-resolution information, which UNet supplies through up-sampling, down-sampling and skip connections. At the same time, the targets to be segmented have similar, regular shapes, for example roughly circular and distributed within a certain range. Since the structure of the organ itself is fixed and the semantic information is not particularly rich, both high-level semantic information and low-level features are very important, and UNet's skip connections and U-shaped structure suit this kind of information well.
  • Image conversion is performed on the target image to be analyzed (S406): the image P to be analyzed is taken from the target domain image set and converted through the conversion network Ntr into a converted image P′ that has the source-domain style and retains semantic information.
  • The above conversion network is the one trained through S402, S403 and S404: the representation style of its output image tends toward the source-domain training set while still maintaining the semantic information of its own deep features.
  • Step S407: use the image segmentation network established in step S405 to perform image segmentation on the converted image P′; in the screening of breast cancer MRI images, the feature areas obtained by segmentation are the image areas of lumps or cancerous tissue suspected in breast cancer screening.
  • In this way, unsupervised domain-adaptive lesion segmentation of breast magnetic resonance images can be realized.
  • this embodiment provides a breast cancer screening device based on adaptive image segmentation, including:
  • The acquiring unit is configured to acquire a source domain image in a source domain image set, where the images in the source domain image set contain a marked feature area; the source domain image set consists of labeled breast MRI images, and the feature area is the marked lump or cancerous tissue area;
  • It also acquires a target domain image in a target domain image set, where the target domain image is an unlabeled breast MRI image and may contain an image part corresponding to a lump or cancerous tissue area;
  • the image domain discrimination unit is configured to take the source domain image and the target domain image acquired by the acquisition unit as input, and establish an image domain discrimination network Ncl for discriminating the domain of the image through a training function;
  • The image reconstruction unit takes the target domain image as input, performs image reconstruction on the target domain image, and obtains the reconstruction network Nre after learning.
  • The reconstructed image information includes the shallow information and the deep semantic information corresponding to the image; the image reconstruction network Nre includes a shallow information module nm1 corresponding to the shallow image information and a semantic information module nm2 corresponding to the deep semantic information;
  • The image conversion network optimization unit takes the target domain image as input; the image data output by the reconstruction network Nre is discriminated by the image domain discrimination network Ncl, and the loss data of Ncl is used to optimize and adjust the parameters of the shallow information module nm1 while the parameters of the semantic information module nm2 remain unchanged; this optimization process is repeated until the set condition is met, and the optimized reconstruction network is the conversion network Ntr;
  • The source domain image segmentation network training unit uses the source domain image set and its marked feature regions to train, through machine learning, an image segmentation network Nse that distinguishes feature regions from non-feature regions;
  • The target domain image segmentation unit converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P′ that has the source-domain style and retains semantic information, and uses the image segmentation network Nse to perform image segmentation on the converted image P′; the feature areas obtained by segmentation are the image areas of lumps or cancerous tissue suspected in breast cancer screening.
  • Each of the above units may be separately or entirely combined into one or several other units, or some units may be further divided into multiple functionally smaller units; either way, the same operations can be achieved without affecting the technical effect of this embodiment of the present invention.
  • the above-mentioned units are divided based on logical functions. In practical applications, the function of one unit may also be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the model-based training device may also include other units. In practical applications, these functions may also be implemented with the assistance of other units, and may be implemented by multiple units in cooperation.
  • a general-purpose computing device, such as a computer, comprising processing and storage elements such as a central processing unit (CPU), random access memory (RAM) and read-only memory (ROM)
  • the computer program may be recorded on, for example, a computer-readable recording medium, loaded into the above-mentioned computing device through the computer-readable recording medium, and run therein.
  • the fourth embodiment of the present invention provides a computer storage medium storing one or more first instructions, the one or more first instructions being suitable for being loaded by a processor to execute the adaptive image segmentation method of the foregoing embodiments.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium includes read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), one-time programmable read-only memory (One-Time Programmable Read-Only Memory, OTPROM), electrically-erasable programmable read-only memory (Electrically-Erasable Programmable Read-Only Memory, EEPROM), compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An unsupervised domain-adaptive image segmentation method. A target domain image is converted so that its semantic information is preserved while its shallow image information is reconstructed with the characteristics of the source domain images; a segmentation model built on the source domain images then performs image discrimination on the converted and reconstructed target domain image, thereby achieving model transfer between datasets of different domains without annotating new data.

Description

An unsupervised adaptive breast lesion segmentation method — Technical field
The present invention relates to the field of image processing, and in particular to an unsupervised adaptive breast lesion segmentation method.
Background art
Breast cancer is the cancer with the highest incidence among women; early diagnosis and early treatment can effectively improve the long-term survival rate of breast cancer patients. Magnetic Resonance Imaging (MRI), as a multi-parameter, multi-contrast imaging technique, can reflect various tissue characteristics such as T1, T2 and proton density. With the advantages of high resolution and high sensitivity, it has become one of the important tools for early breast cancer screening, and breast MRI is increasingly applied in clinical practice, particularly in early breast cancer screening.
In breast cancer MRI screening, computer-aided analysis of such images is a development trend and a core technical problem in this field. Early medical image segmentation relied on edge detection, texture features, morphological filtering and the like, which required extensive manual annotation and case-specific analysis and had limited capacity for handling deep structure and for adaptivity. In recent years, machine learning algorithms represented by deep learning have achieved breakthroughs in prediction tasks such as image recognition and image segmentation; deep learning algorithms represented by deep neural networks (DNN) and convolutional neural networks (CNNs) continue to make major progress, and their application to medical image segmentation and other image analysis methods is also a development trend in this field.
In early breast cancer screening, segmenting lesion regions from other regions is the prerequisite for subsequent in-depth analysis. Existing segmentation techniques mostly adopt supervised deep learning: the training set is annotated with lesion regions and healthy regions, a training model or network is obtained by training on the known samples, and the target images are then classified. However, even for the same type of image data, such as magnetic resonance images, if two datasets are acquired with different imaging systems or parameter settings, the difference in data distribution makes it difficult for a deep learning segmentation network trained on one dataset to achieve good segmentation results on the other.
Specifically, in breast cancer MRI screening, different centers may use different magnetic resonance scanning systems or imaging sequences, leading to distribution differences in the acquired data. Such differences mean that a trained MRI image segmentation model cannot guarantee stable discrimination performance under other systems or parameters.
One solution is to manually annotate the imaging sequences obtained by each scanning system or parameter setting separately, i.e. to retrain with supervised learning on every dataset acquired under new conditions, so as to guarantee performance on each dataset. The drawback of this approach is that segmentation annotation is very time-consuming, annotation of medical images requires strong expertise and experience, low-cost bulk manual annotation is infeasible, and annotation standards are difficult to control and unify.
Another solution is to fine-tune the parameters of the trained segmentation network on the new target dataset, but this requires the involvement of algorithm designers, and fine-tuning still requires medical expertise, so a trained model cannot be applied to other datasets in an unsupervised manner.
Summary of the invention
To overcome the poor cross-domain generalization of segmentation models mentioned in the background art, the present invention proposes an unsupervised domain-adaptive image segmentation method. The target domain image is converted such that its semantic information is preserved while its shallow image information is reconstructed with the characteristics of the source domain images; a segmentation model built on the source domain images then performs image discrimination on the converted and reconstructed target domain image, thereby achieving model transfer between datasets of different domains without annotating new data.
According to a first aspect of the present invention, a conversion network method for inter-domain image style transfer based on image reconstruction and image discrimination is provided.
S101: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions;
obtaining target domain images from a target domain image set, the target domain images having or not having regions to be segmented that are similar to the feature regions in the source domain image set;
taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
S102: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre.
S103: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and optimizing and adjusting the parameters of the reconstruction network Nre according to the loss of the image domain discrimination network Ncl.
S104: repeating step S103 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
In S104, the set condition is: the loss of the image domain discrimination network Ncl is smaller than a preset value.
The conversion network Ntr proposed by the present invention can convert an image P in the target domain into an image P' that retains the image information but has the source domain style.
According to a second aspect of the present invention, a method for establishing an inter-domain image distribution adaptation model based on shallow semantic features is provided:
S201: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions;
obtaining target domain images from a target domain image set, the target domain images having or not having regions to be segmented that are similar to the feature regions in the source domain image set;
taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
S202: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre. The reconstructed image information contains shallow image information M1 and deep semantic information M2; the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information.
S203: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizing and adjusting the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged.
S204: repeating step S203 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
Preferably, in S201 the cross-entropy loss function is used for training, and the image domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S202 is the L2 loss;
Preferably, the reconstruction network in S202 may adopt an encoder-decoder structure;
Preferably, the loss of the image domain discrimination network Ncl in S203 is the cross-entropy loss;
Preferably, the set condition in S204 is: the loss of the image domain discrimination network Ncl is smaller than a preset value.
According to the method for establishing an inter-domain image distribution adaptation model based on shallow semantic features proposed by the present invention, an image P in the target domain can be converted into an image P' that retains the deep semantic information of the image but has the source domain style in its shallow features.
According to a third aspect of the present invention, an unsupervised adaptive image segmentation method is provided.
S301: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions;
obtaining target domain images from a target domain image set, the target domain images having or not having regions to be segmented that are similar to the feature regions in the source domain image set;
taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
S302: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre. The reconstructed image information contains shallow image information M1 and deep semantic information M2; the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information.
S303: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizing and adjusting the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged.
S304: repeating step S303 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
S305: based on the source domain image set and its labeled feature regions, training through machine learning an image segmentation network Nse for feature regions and non-feature regions.
S306: converting the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains the semantic information.
S307: performing image segmentation on the converted image P' with the image segmentation network Nse.
Preferably, in S301 the cross-entropy loss function is used for training, and the image domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S302 is the L2 loss;
Preferably, the reconstruction network in S302 may adopt an encoder-decoder structure;
Preferably, the loss of the image domain discrimination network Ncl in S303 is the cross-entropy loss;
Preferably, the set condition in S304 is: the loss of the image domain discrimination network Ncl is smaller than a preset value;
Preferably, in S305 the image segmentation network is trained with the UNet algorithm;
Preferably, in S305 the image segmentation network is trained with the UNet algorithm combined with an attention mechanism and/or multi-scale feature representation.
According to the above image segmentation method proposed by the present invention, unsupervised adaptation of an image segmentation method from labeled source domain images to unlabeled target domain images is achieved, accomplishing the task of unsupervised target domain image segmentation.
According to a fourth aspect of the present invention, an adaptive image segmentation method for breast cancer screening is provided.
S401: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions; the source domain image set consists of labeled breast MRI images, and the feature regions are labeled masses or cancerous tissue regions;
obtaining target domain images from a target domain image set, the target domain images being unlabeled breast MRI images which may contain image portions corresponding to masses or cancerous tissue regions;
taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
S402: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre. The reconstructed image information contains shallow image information M1 and deep semantic information M2; the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information.
S403: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizing and adjusting the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged.
S404: repeating step S403 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
S405: based on the source domain image set and its labeled feature regions, training through machine learning an image segmentation network Nse for feature regions and non-feature regions.
S406: converting the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains the semantic information.
S407: performing image segmentation on the converted image P' with the image segmentation network Nse; the feature regions obtained by the segmentation are the image regions of suspected masses or cancerous tissue in breast cancer screening.
Preferably, in S401 the cross-entropy loss function is used for training, and the image domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S402 is the L2 loss;
Preferably, the reconstruction network in S402 may adopt an encoder-decoder structure;
Preferably, the loss of the image domain discrimination network Ncl in S403 is the cross-entropy loss;
Preferably, the set condition in S404 is: the loss of the image domain discrimination network Ncl is smaller than a preset value;
Preferably, in S405 the image segmentation network is trained with the UNet algorithm;
Preferably, in S405 the image segmentation network is trained with the UNet algorithm combined with an attention mechanism and/or multi-scale feature representation.
According to the above adaptive image segmentation method for breast cancer screening proposed by the present invention, unsupervised adaptation of a breast lesion segmentation method from labeled source domain images to unlabeled target domain images is achieved, completing the task of unsupervised target domain image segmentation.
According to a fifth aspect of the present invention, a breast cancer screening apparatus based on adaptive image segmentation is provided, comprising:
an acquisition unit for acquiring source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions; the source domain image set consists of labeled breast MRI images, and the feature regions are labeled masses or cancerous tissue regions;
the acquisition unit is also used for acquiring target domain images from a target domain image set, the target domain images being unlabeled breast MRI images which may contain image portions corresponding to masses or cancerous tissue regions;
an image domain discrimination unit for taking the source domain images and target domain images acquired by the acquisition unit as input and establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
an image reconstruction unit that takes the target domain images as input and performs image reconstruction on them to obtain a trained reconstruction network Nre; the reconstructed image information contains shallow image information and deep semantic information, and the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information and a semantic information module nm2 corresponding to the deep semantic information;
an image conversion network optimization unit that takes the target domain images as input, passes the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizes and adjusts the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged; the optimization and adjustment process is repeated until a set condition is met, and the optimized reconstruction network serves as the conversion network Ntr;
a source domain image segmentation network training unit for training, through machine learning and based on the source domain image set and its labeled feature regions, an image segmentation network Nse for feature regions and non-feature regions;
a target domain image segmentation unit that converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains the semantic information, and performs image segmentation on the converted image P' with the image segmentation network Nse; the feature regions obtained by the segmentation are the image regions of suspected masses or cancerous tissue in breast cancer screening.
Preferably, in the image domain discrimination unit the cross-entropy loss function is used for training, and the image domain discrimination network Ncl is a residual network.
Preferably, in the image conversion network optimization unit the set condition is: the loss of the image domain discrimination network Ncl is smaller than a preset value.
Thus, the present invention proposes an unsupervised domain-adaptive breast lesion segmentation method that performs data domain conversion on new data, forcing the new data to approach the distribution of the existing dataset, thereby achieving unsupervised domain-adaptive transfer of the segmentation network. With this method, even if a new dataset differs from the labeled dataset, the images in the new dataset need not be annotated; a breast lesion segmentation network trained on the labeled dataset can be adapted directly to the new dataset and achieve good segmentation results.
The present invention therefore overcomes the shortcoming of the prior art that, for every set of breast MRI data acquired with specific experimental parameters, physicians must fully or partially annotate the data before a segmentation model adapted to that dataset can be obtained — a process that is time-consuming, labor-intensive and costly. With the aid of a single labeled dataset, the present method achieves unsupervised segmentation of new datasets, reducing the economic cost of image annotation, while the direct application of the optimized model also saves time.
Brief description of the drawings
Fig. 1 is a schematic diagram of the inter-domain adaptive image conversion method;
Fig. 2 is a schematic diagram of the adaptive image segmentation method;
Fig. 3 illustrates a typical structure of the image segmentation model UNet;
Fig. 4 is a schematic diagram of the breast cancer screening apparatus based on adaptive image segmentation.
Detailed description of the embodiments
To make the above objects, features and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features therein may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; however, the present invention may also be implemented in ways other than those described here, and therefore its scope of protection is not limited by the specific embodiments disclosed below.
Embodiment 1
As shown in Fig. 1, the present invention proposes an inter-domain adaptive image conversion method. With this method, even if a new dataset differs from the labeled dataset, the images in the new dataset need not be annotated; instead, adaptive learning across the two datasets is performed through image conversion. The unlabeled dataset is thus adaptively converted so that its high-level semantic information is retained while shallow representations such as image style, texture and brightness are converted into the characteristics of the labeled dataset, and a network model trained on the labeled dataset can be applied directly to the new dataset.
The adaptive image conversion method according to one embodiment of the present invention comprises the following steps:
First, a labeled dataset is needed as the source domain images; this dataset can be regarded as an atlas, and all subsequent datasets use it as a template. The target domain images, for example an unlabeled image set awaiting analysis, are generally an image dataset to undergo image analysis such as classification or image segmentation.
It should be noted that the source domain images and target domain images should contain deep features of a similar nature, for example images of the same kind of object or captured in similar scenes; deep features of similar nature are what give this adaptive image conversion method practical significance. The source and target domain images may, however, present different superficial image characteristics, such as lighting, noise level, texture, or other non-semantic features.
For example, in computer-aided medical image analysis, the source domain images may comprise a series of labeled X-ray, CT or MRI image data, while the target domain images may be corresponding X-ray, CT or MRI image data that were possibly not acquired on the same scanner or under the same conditions. The source domain images contain labeled feature regions, which may be masses or cancerous regions identified or labeled by specialist physicians.
First, the image domain discrimination network is established (S301): the source domain images and target domain images are used as training sample input to build a discrimination network that classifies any new test sample image as a source domain image or a target domain image. For this classification, its variance, residual or other loss must be computed for adjustment in subsequent steps.
For the classification method used to train the above image domain discrimination network, various classic classification and discrimination methods from deep neural networks can be used, for example a residual network. A residual network is a convolutional recognition network that is easy to optimize and can improve accuracy by adding considerable depth; its internal residual blocks use skip connections, which mitigate the vanishing-gradient problem caused by increasing depth in deep neural networks.
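As a minimal illustrative sketch (not the patent's own implementation), the skip connection at the heart of a residual block can be written in a few lines; the function `f` stands in for the block's convolutional layers:

```python
import numpy as np

def residual_block(x, f):
    # A residual block outputs its input plus a learned transform of it:
    # the identity "skip connection" lets gradients bypass f, easing the
    # optimization of very deep discriminators such as Ncl.
    return x + f(x)

# Toy check with an arbitrary stand-in transform (0.1x), giving 1.1x overall
x = np.array([1.0, 2.0, 3.0])
y = residual_block(x, lambda v: 0.1 * v)
```

In a real discriminator, `f` would be a stack of convolutions, normalization and nonlinearities; the additive form above is the reason depth can be increased without the gradient vanishing through the skip path.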
During training, the discrimination error can be computed with the cross-entropy (CrossEntropy) loss function, which is particularly suited to training binary classification model predictions and gives good convergence of the convex optimization problem when computing the loss. Each image class is given a label, e.g. label 1 for source domain images and label 0 for target domain images. In the binary case the model predicts only two outcomes, and the probabilities obtained for the two classes are $\hat{y}$ and $1-\hat{y}$.
The cross-entropy loss is then expressed as:
$$L = -\left[\, y \log \hat{y} + (1-y)\log(1-\hat{y}) \,\right]$$
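The binary cross-entropy above can be checked numerically with a short sketch (an illustration only, not the patent's code), using label 1 for source-domain images and 0 for target-domain images:

```python
import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # y: true domain label (1 = source domain, 0 = target domain)
    # y_hat: predicted probability of the source-domain class
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # avoid log(0)
    return -(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

# A confident correct prediction costs almost nothing...
low = binary_cross_entropy(1.0, 0.99)
# ...while a confident wrong one is penalized heavily.
high = binary_cross_entropy(0.0, 0.99)
```

The steep penalty on confident mistakes is what drives the later optimization: as long as the discriminator can confidently tell converted images from source-domain images, the loss stays large and keeps pushing the conversion network.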
Second, image reconstruction of the target domain images is learned (S302), i.e. the target domain's own images are taken as input and target domain images are produced as output. This step, combined with a continuous process of self-learning and training, separates the shallow representational information from the deep semantic features: deep semantic features, such as tumor regions and edge features in medical images, are retained, while shallow features such as image style, brightness, texture and noise level are gradually converted into the source domain style through optimization by the aforementioned discrimination network.
In a typical application, the reconstruction network Nre can use the encoder-decoder structure SegNet. SegNet's encoder and decoder structures correspond one-to-one, i.e. a decoder has the same spatial size and number of channels as its corresponding encoder. In a basic SegNet structure each side has 13 convolutional layers. Compared with the corresponding classic FCN image segmentation model it is much smaller, thanks to an operation SegNet adopts to balance computation: replacing direct deconvolution with the recorded position information from the pooling process.
The image loss function can be expressed as the L2 loss, which is also the loss more commonly used for CNNs in general, since it converges faster than the L1 loss.
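As a minimal sketch of the reconstruction objective (illustrative only), the L2 loss is the mean squared difference between the target-domain input and its reconstruction:

```python
import numpy as np

def l2_loss(x, x_rec):
    # Mean squared error between input image x and reconstruction x_rec;
    # large residuals are penalized quadratically, which tends to give
    # faster convergence than the L1 (mean absolute error) loss.
    return np.mean((x - x_rec) ** 2)

x = np.array([[0.0, 1.0], [1.0, 0.0]])
perfect = l2_loss(x, x)        # identical reconstruction -> 0
noisy = l2_loss(x, x + 0.1)    # constant 0.1 residual -> 0.01
```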
Third, the reconstruction network of the second step (S302) is optimized to obtain the conversion network (S303). The aim is to convert target domain images into the style of the source domain images and to bring the distributions of the two domains closer, so that the target domain images can subsequently be segmented and otherwise processed.
For the images reconstructed in the second step (S302), taking the encoder structure as an example, the reconstruction includes the shallow image information M1 and the deep semantic information M2; correspondingly, the image reconstruction network Nre contains a shallow information module nm1 corresponding to M1 and a semantic information module nm2 corresponding to the deep semantic information. It is readily understood that the shallow image information refers to the aforementioned image style, brightness, texture, noise level and the like, while the deep information refers, for example, to tumor regions and edge features in medical images.
A target domain image, after conversion by the reconstruction network Nre obtained in the second step S302, yields a generated source-domain image. The image domain discrimination network of the first step S301 then classifies this generated source-domain image, determining whether the converted image can be classified as a source domain image. The cross-entropy produced in this classification serves as the loss function to correct and optimize the parameters of the shallow information module nm1 of the reconstruction network Nre, while the parameters of the deep semantic information module nm2 remain unchanged. In embodiments adopting the encoder-decoder structure, this means continually updating the parameters of the first two or three encoding modules of the network's encoder while keeping the remaining parts unchanged.
The output of the reconstruction network Nre' with corrected parameters is fed back into the image domain discrimination network for further correction, and the above process is repeated continuously, so that the reconstruction network is progressively optimized such that the reconstructed shallow information of the target domain images approaches the source domain images ever more closely while the deep semantic information remains unchanged (S304).
When a certain condition is reached — a typical example being when the generated source-domain image is classified as a source domain image and the cross-entropy loss is small enough, i.e. below some threshold — the parameters of the reconstruction network Nre can be considered corrected and optimized to an acceptable level. The network obtained at this point is the image conversion network Ntr.
In other words, after an unlabeled target domain image is fed into the image conversion network Ntr, the representational style of its output image approaches the source domain training set images while still retaining the semantic information of its own deep features. Various network models trained on the labeled dataset can thus be applied directly to the new dataset.
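The alternating procedure of S302-S304 can be caricatured with a deliberately tiny numerical sketch: a "shallow" parameter is pushed toward the source-domain statistics by the discriminator's loss gradient, while a "semantic" parameter stays frozen. All names and numbers here are invented for illustration and are not taken from the patent:

```python
# Toy stand-ins: one scalar per module instead of real network weights.
shallow_nm1 = 0.0      # shallow-style parameter, to be adapted
semantic_nm2 = 5.0     # deep-semantic parameter, kept frozen
source_style = 2.0     # value the discriminator implicitly enforces
lr = 0.1

for _ in range(200):
    # Surrogate discriminator loss: squared distance of the produced
    # style statistic from the source-domain style.
    grad = 2.0 * (shallow_nm1 - source_style)
    shallow_nm1 -= lr * grad   # only nm1 is updated...
    # ...semantic_nm2 is intentionally never touched (frozen), so the
    # deep semantic content of the reconstruction is preserved.
```

In a real implementation the update would come from backpropagating the discriminator's cross-entropy through the frozen discriminator into the first encoder blocks only; the point of the toy is simply that optimization moves one set of parameters while the other is held fixed.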
Embodiment 2
As shown in Fig. 2, another embodiment of the present invention is an image segmentation method for unsupervised domain adaptation, which can specifically be applied to computer-aided recognition of MRI images of breast lesions. It consists mainly of: establishing the image domain discrimination network (S401); learning image reconstruction of the target domain images (S402); optimizing the reconstruction network to obtain the conversion network (S403, S404); training an image segmentation network on the labeled source domain images (S405); converting the target domain image through the conversion network (S406); and then segmenting the converted target image with the image segmentation network (S407).
Among the above, steps S401, S402, S403 and S404 are the same as the corresponding steps in Embodiment 1 and are not repeated here.
Segmenting the labeled source domain images (S405) is a typical supervised image segmentation problem based on labeled images; in application, it amounts to training a supervised segmentation network, which can be a widely used medical image segmentation network such as UNet.
Since its introduction, UNet has been one of the most widely applied models in image segmentation projects. Its encoder (downsampling)-decoder (upsampling) structure and skip connections constitute a very classic design. Many new convolutional neural network designs now exist, but many still follow UNet's core ideas, adding new modules or incorporating other design concepts.
The structure of UNet is shown in Fig. 3; the left side can be viewed as an encoder and the right side as a decoder. The encoder has four submodules, each containing two convolutional layers and followed by a downsampling layer implemented with max pooling. The input image resolution is 572x572, and the resolutions of modules 1-5 are 572x572, 284x284, 140x140, 68x68 and 32x32, respectively. Since the convolutions use valid mode, the resolution of each subsequent submodule equals (resolution of the previous submodule - 4) / 2. The decoder contains four submodules whose resolution rises successively through upsampling until it matches that of the input image (because the convolutions use valid mode, the actual output is somewhat smaller than the input image). The network also uses skip connections, concatenating each upsampled result with the output of the encoder submodule of the same resolution as the input to the next decoder submodule.
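The resolution bookkeeping above can be verified with a one-line recurrence (a sketch; the constants come from the text: two 3x3 valid convolutions remove 4 pixels per side, and 2x2 max pooling halves the size):

```python
def next_resolution(r):
    # Two 3x3 valid-mode convolutions shrink each side by 4,
    # then 2x2 max pooling halves it.
    return (r - 4) // 2

resolutions = [572]
for _ in range(4):
    resolutions.append(next_resolution(resolutions[-1]))
# resolutions is now [572, 284, 140, 68, 32], matching modules 1-5.
```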
UNet's network structure is especially suitable for medical image segmentation. Medical images have blurred boundaries and complex gradients and require considerable high-resolution information, which deep learning can provide, for example through upsampling, downsampling and skip connections. At the same time, the targets to be segmented have similar, regular shapes — for example roughly circular and distributed within a certain range. Because organ structure is fixed and the semantic information is not particularly rich, both high-level semantic information and low-level features are important, and UNet's skip connections and U-shaped structure suit such information well.
In addition, effective new modules can be introduced into the segmentation process, such as UNet combined with an attention mechanism and multi-scale feature representation.
Image conversion of the target image to be analyzed (S406): the image P to be analyzed in the target domain image set is converted by the conversion network Ntr into a converted image P' that has the source domain style and retains the semantic information. The conversion network is the one already trained through S402, S403 and S404, i.e. the representational style of its output images approaches the source domain training set images while the semantic information of their deep features is still retained.
Image segmentation and result recognition (S407): the converted image P' is segmented with the image segmentation network established in step S405; in breast cancer MRI screening, the feature regions obtained by the segmentation are the image regions of suspected masses or cancerous tissue.
With the method provided by the above steps, unsupervised domain-adaptive lesion segmentation of breast magnetic resonance images is achieved.
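Putting S406 and S407 together, the unsupervised inference path is simply "convert, then segment". The sketch below only fixes this call order; `ntr` and `nse` are placeholders for the trained conversion and segmentation networks, not real implementations:

```python
def segment_target_image(p, ntr, nse):
    """Hypothetical pipeline: convert target-domain image P to the
    source-domain style (P'), then segment P' with the network Nse
    that was trained only on labeled source-domain images."""
    p_prime = ntr(p)     # S406: style conversion, semantics preserved
    return nse(p_prime)  # S407: lesion segmentation on the converted image

# Toy stand-ins to exercise the call order
mask = segment_target_image(3, ntr=lambda x: x + 1, nse=lambda x: x * 2)
```

The design point is that `nse` never sees raw target-domain data at inference time, which is what allows it to remain untouched (no retraining, no fine-tuning) when new acquisition systems are introduced.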
Embodiment 3
Referring to Fig. 4, this embodiment provides a breast cancer screening apparatus based on adaptive image segmentation, comprising:
an acquisition unit for acquiring source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions; the source domain image set consists of labeled breast MRI images, and the feature regions are labeled masses or cancerous tissue regions;
the acquisition unit is also used for acquiring target domain images from a target domain image set, the target domain images being unlabeled breast MRI images which may contain image portions corresponding to masses or cancerous tissue regions;
an image domain discrimination unit for taking the source domain images and target domain images acquired by the acquisition unit as input and establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
an image reconstruction unit that takes the target domain images as input and performs image reconstruction on them to obtain a trained reconstruction network Nre; the reconstructed image information contains shallow image information and deep semantic information, and the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information and a semantic information module nm2 corresponding to the deep semantic information;
an image conversion network optimization unit that takes the target domain images as input, passes the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizes and adjusts the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged; the optimization and adjustment process is repeated until a set condition is met, and the optimized reconstruction network serves as the conversion network Ntr;
a source domain image segmentation network training unit for training, through machine learning and based on the source domain image set and its labeled feature regions, an image segmentation network Nse for feature regions and non-feature regions;
a target domain image segmentation unit that converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains the semantic information, and performs image segmentation on the converted image P' with the image segmentation network Nse; the feature regions obtained by the segmentation are the image regions of suspected masses or cancerous tissue in breast cancer screening.
Each of the units in the above apparatus may be separately or wholly combined into one or several further units, or one or more of the units may be further split into multiple functionally smaller units; this achieves the same operations without affecting the realization of the technical effects of the embodiments of the present invention. The above units are divided on the basis of logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the model training apparatus may also include other units; in practical applications these functions may also be realized with the assistance of other units and cooperatively by multiple units.
According to another embodiment of the present invention, the model training apparatus shown in Fig. 4 can be constructed, and the model training method of the embodiments of the present invention implemented, by running a computer program (including program code) capable of executing the steps of the corresponding method of Embodiment 2 on a general-purpose computing device, such as a computer, comprising processing elements and storage elements such as a central processing unit (CPU), random access memory (RAM) and read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, loaded into the above computing device through that medium, and run therein.
Embodiment 4
Embodiment 4 of the present invention provides a computer storage medium storing one or more first instructions, the one or more first instructions being suitable for being loaded by a processor to execute the adaptive image segmentation method of the foregoing embodiments.
The steps of the methods in the embodiments of the present invention may be reordered, combined and pruned according to actual needs.
The units in the apparatuses of the embodiments of the present invention may be combined, divided and pruned according to actual needs.
Those of ordinary skill in the art will understand that all or part of the steps of the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, the storage medium including read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), one-time programmable read-only memory (One-Time Programmable Read-Only Memory, OTPROM), electrically-erasable programmable read-only memory (Electrically-Erasable Programmable Read-Only Memory, EEPROM), compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The technical solution of the present invention has been described in detail above with reference to the drawings. The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (8)

  1. A conversion network method for inter-domain image style transfer based on image reconstruction and image discrimination, characterized in that:
    it comprises the following steps:
    S101: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions;
    obtaining target domain images from a target domain image set, the target domain images having or not having regions to be segmented that are similar to the feature regions in the source domain image set;
    taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
    S102: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre;
    S103: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and optimizing and adjusting the parameters of the reconstruction network Nre according to the loss of the image domain discrimination network Ncl;
    S104: repeating step S103 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr;
    the conversion network Ntr can convert an image in the target domain into a converted image that retains the image information but has the source domain style.
  2. A method for establishing an inter-domain image distribution adaptation model based on shallow semantic features, characterized in that:
    it comprises the following steps:
    S201: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions;
    obtaining target domain images from a target domain image set, the target domain images having or not having regions to be segmented that are similar to the feature regions in the source domain image set;
    taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
    S202: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre; the reconstructed image information contains shallow image information M1 and deep semantic information M2, and the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information;
    S203: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizing and adjusting the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged;
    S204: repeating step S203 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr, through which an inter-domain image distribution adaptation model can be established.
  3. The method for establishing an inter-domain image distribution adaptation model according to claim 2, characterized in that: in S201, the cross-entropy loss function is used for training.
  4. The method for establishing an inter-domain image distribution adaptation model according to claim 2, characterized in that: the reconstruction network in S202 may adopt an encoder-decoder structure.
  5. The method for establishing an inter-domain image distribution adaptation model according to claim 2, characterized in that: the set condition in S204 is that the loss of the image domain discrimination network Ncl is smaller than a preset value.
  6. An unsupervised adaptive image segmentation method, characterized in that:
    it comprises the following steps:
    S301: obtaining source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions;
    obtaining target domain images from a target domain image set, the target domain images having or not having regions to be segmented that are similar to the feature regions in the source domain image set;
    taking the source domain images and the target domain images as input, establishing, through a training function, an image domain discrimination network Ncl for determining the domain to which an image belongs;
    S302: taking the target domain images as input, performing image reconstruction on the target domain images to obtain a trained reconstruction network Nre; the reconstructed image information contains shallow image information M1 and deep semantic information M2, and the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information M1 and a semantic information module nm2 corresponding to the deep semantic information;
    S303: taking the target domain images as input, passing the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizing and adjusting the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged;
    S304: repeating step S303 to continuously optimize the reconstruction network Nre until a set condition is met; the optimized reconstruction network is the conversion network Ntr;
    S305: based on the source domain image set and its labeled feature regions, training through machine learning an image segmentation network Nse for feature regions and non-feature regions;
    S306: converting the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains the semantic information;
    S307: performing image segmentation on the converted image P' with the image segmentation network Nse.
  7. The unsupervised adaptive image segmentation method according to claim 6, characterized in that: the source domain image set consists of labeled breast MRI images, the feature regions are labeled masses or cancerous tissue regions, and the target domain images are unlabeled breast MRI images.
  8. A breast cancer screening apparatus based on adaptive image segmentation, characterized in that:
    the apparatus comprises:
    an acquisition unit for acquiring source domain images from a source domain image set, the images in the source domain image set containing labeled feature regions; the source domain image set consists of labeled breast MRI images, and the feature regions are labeled masses or cancerous tissue regions;
    the acquisition unit being also used for acquiring target domain images from a target domain image set, the target domain image set consisting of unlabeled breast MRI images which may contain image portions corresponding to masses or cancerous tissue regions;
    an image domain discrimination unit for taking the source domain images and target domain images acquired by the acquisition unit as input and establishing, through a training function, an image domain discrimination network for determining the domain to which an image belongs;
    an image reconstruction unit that takes the target domain images as input and performs image reconstruction on them to obtain a trained reconstruction network Nre; the reconstructed image information contains shallow image information and deep semantic information, and the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow image information and a semantic information module nm2 corresponding to the deep semantic information;
    an image conversion network optimization unit that takes the target domain images as input, passes the image data produced by the reconstruction network Nre to the image domain discrimination network Ncl for discrimination, and, according to the loss of the image domain discrimination network Ncl, optimizes and adjusts the parameters of the shallow information module nm1 while keeping the parameters of the semantic information module nm2 unchanged, the optimization and adjustment process being repeated until a set condition is met, and the optimized reconstruction network serving as the conversion network Ntr;
    a source domain image segmentation network training unit for training, through machine learning and based on the source domain image set and its labeled feature regions, an image segmentation network Nse for feature regions and non-feature regions;
    a target domain image segmentation unit that converts the image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' that has the source domain style and retains the semantic information, and performs image segmentation on the converted image P' with the image segmentation network Nse, the feature regions obtained by the segmentation being the image regions of suspected masses or cancerous tissue in breast cancer screening.
PCT/CN2019/124506 2019-12-11 2019-12-11 Unsupervised adaptive breast lesion segmentation method WO2021114130A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/124506 WO2021114130A1 (zh) 2019-12-11 2019-12-11 Unsupervised adaptive breast lesion segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/124506 WO2021114130A1 (zh) 2019-12-11 2019-12-11 Unsupervised adaptive breast lesion segmentation method

Publications (1)

Publication Number Publication Date
WO2021114130A1 true WO2021114130A1 (zh) 2021-06-17

Family

ID=76329219

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124506 WO2021114130A1 (zh) 2019-12-11 2019-12-11 Unsupervised adaptive breast lesion segmentation method

Country Status (1)

Country Link
WO (1) WO2021114130A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591867A (zh) * 2021-07-30 2021-11-02 Huazhong University of Science and Technology Unsupervised domain-adaptive image segmentation method and system
CN113643269A (zh) * 2021-08-24 2021-11-12 Tai'an City Central Hospital Breast cancer molecular subtyping method, apparatus and system based on unsupervised learning
CN113706564A (zh) * 2021-09-23 2021-11-26 Soochow University Training method and apparatus for a meibomian gland segmentation network based on multiple supervision modes
CN113792526A (zh) * 2021-09-09 2021-12-14 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method for a character generation model, character generation method, apparatus, device and medium
CN114494804A (zh) * 2022-04-18 2022-05-13 Wuhan Mingjie Technology Co., Ltd. Unsupervised domain-adaptive image classification method based on acquisition of domain-specific information
CN116503679A (zh) * 2023-06-28 2023-07-28 Zhejiang Lab Image classification method, apparatus, device and medium based on a transferability atlas
CN116630630A (zh) * 2023-07-24 2023-08-22 Shenzhen SmartMore Information Technology Co., Ltd. Semantic segmentation method, apparatus, computer device and computer-readable storage medium
CN116740117A (zh) * 2023-06-09 2023-09-12 East China Normal University Gastric cancer pathology image segmentation method based on unsupervised domain adaptation
CN117058468A (zh) * 2023-10-11 2023-11-14 Qingdao Jinnuode Technology Co., Ltd. Image recognition and classification system for recycling lithium batteries of new energy vehicles

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016151352A1 (en) * 2015-03-26 2016-09-29 Centralesupelec Method for real-time deformable fusion of a source multi-dimensional image and a target multi-dimensional image of an object
US20180174071A1 (en) * 2016-12-20 2018-06-21 Conduent Business Services, Llc Method and system for text classification based on learning of transferable feature representations from a source domain
CN109558901A (zh) * 2018-11-16 2019-04-02 Beijing SenseTime Technology Development Co., Ltd. Semantic segmentation training method and apparatus, electronic device, and storage medium
CN110111335A (zh) * 2019-05-08 2019-08-09 Nanchang Hangkong University Urban traffic scene semantic segmentation method and system based on adaptive adversarial learning
CN110516202A (zh) * 2019-08-20 2019-11-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for obtaining a document generator, document generation method, apparatus, and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016151352A1 (en) * 2015-03-26 2016-09-29 Centralesupelec Method for real-time deformable fusion of a source multi-dimensional image and a target multi-dimensional image of an object
US20180174071A1 (en) * 2016-12-20 2018-06-21 Conduent Business Services, Llc Method and system for text classification based on learning of transferable feature representations from a source domain
CN109558901A (zh) * 2018-11-16 2019-04-02 Beijing SenseTime Technology Development Co., Ltd. Semantic segmentation training method and apparatus, electronic device, and storage medium
CN110111335A (zh) * 2019-05-08 2019-08-09 Nanchang Hangkong University Urban traffic scene semantic segmentation method and system based on adaptive adversarial learning
CN110516202A (zh) * 2019-08-20 2019-11-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for obtaining a document generator, document generation method, apparatus, and electronic device

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591867B (zh) * 2021-07-30 2024-02-06 Huazhong University of Science and Technology Unsupervised domain-adaptive image segmentation method and system
CN113591867A (zh) * 2021-07-30 2021-11-02 Huazhong University of Science and Technology Unsupervised domain-adaptive image segmentation method and system
CN113643269A (zh) * 2021-08-24 2021-11-12 Tai'an City Central Hospital Breast cancer molecular subtyping method, apparatus and system based on unsupervised learning
CN113643269B (zh) * 2021-08-24 2023-10-13 Tai'an City Central Hospital Breast cancer molecular subtyping method, apparatus and system based on unsupervised learning
CN113792526B (zh) * 2021-09-09 2024-02-09 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method for a character generation model, character generation method, apparatus, device and medium
CN113792526A (zh) * 2021-09-09 2021-12-14 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method for a character generation model, character generation method, apparatus, device and medium
CN113706564B (zh) * 2021-09-23 2023-07-18 Soochow University Training method and apparatus for a meibomian gland segmentation network based on multiple supervision modes
CN113706564A (zh) * 2021-09-23 2021-11-26 Soochow University Training method and apparatus for a meibomian gland segmentation network based on multiple supervision modes
CN114494804B (zh) * 2022-04-18 2022-10-25 Wuhan Mingjie Technology Co., Ltd. Unsupervised domain-adaptive image classification method based on acquisition of domain-specific information
CN114494804A (zh) * 2022-04-18 2022-05-13 Wuhan Mingjie Technology Co., Ltd. Unsupervised domain-adaptive image classification method based on acquisition of domain-specific information
CN116740117A (zh) * 2023-06-09 2023-09-12 East China Normal University Gastric cancer pathology image segmentation method based on unsupervised domain adaptation
CN116740117B (zh) * 2023-06-09 2024-02-06 East China Normal University Gastric cancer pathology image segmentation method based on unsupervised domain adaptation
CN116503679A (zh) * 2023-06-28 2023-07-28 Zhejiang Lab Image classification method, apparatus, device and medium based on a transferability atlas
CN116503679B (zh) * 2023-06-28 2023-09-05 Zhejiang Lab Image classification method, apparatus, device and medium based on a transferability atlas
CN116630630A (zh) * 2023-07-24 2023-08-22 Shenzhen SmartMore Information Technology Co., Ltd. Semantic segmentation method, apparatus, computer device and computer-readable storage medium
CN116630630B (zh) * 2023-07-24 2023-12-15 Shenzhen SmartMore Information Technology Co., Ltd. Semantic segmentation method, apparatus, computer device and computer-readable storage medium
CN117058468B (zh) * 2023-10-11 2023-12-19 Qingdao Jinnuode Technology Co., Ltd. Image recognition and classification system for recycling lithium batteries of new energy vehicles
CN117058468A (zh) * 2023-10-11 2023-11-14 Qingdao Jinnuode Technology Co., Ltd. Image recognition and classification system for recycling lithium batteries of new energy vehicles

Similar Documents

Publication Publication Date Title
WO2021114130A1 (zh) Unsupervised adaptive breast lesion segmentation method
Skourt et al. Lung CT image segmentation using deep neural networks
CN106682435B (zh) System and method for automatically detecting lesions in medical images through multi-model fusion
Zuo et al. R2AU-Net: attention recurrent residual convolutional neural network for multimodal medical image segmentation
CN112102266B (zh) Training method for an attention-mechanism-based cerebral infarction medical image classification model
CN111709485B (zh) Medical image processing method and apparatus, and computer device
CN107688815B (zh) Analysis method and analysis system for medical images, and storage medium
CN112862805B (zh) Automated segmentation method and system for acoustic neuroma images
CN111179277B (zh) Unsupervised adaptive breast lesion segmentation method
CN112991346B (zh) Training method and training system of a learning network for medical image analysis
CN112150472A (zh) CBCT-based three-dimensional jawbone image segmentation method and apparatus, and terminal device
CN112396606B (zh) Medical image segmentation method, system and apparatus based on user interaction
CN113298830A (zh) Self-supervision-based image segmentation method for acute intracranial ICH regions
CN117076655A (zh) Surgical planning scheme generation method, system, apparatus and medium
US11494908B2 (en) Medical image analysis using navigation processing
Tian et al. Radiomics and Its Clinical Application: Artificial Intelligence and Medical Big Data
CN114332910A (zh) Human body part segmentation method for far-infrared images based on similar-feature computation
Pal et al. A fully connected reproducible SE-UResNet for multiorgan chest radiographs segmentation
Ammari et al. Deep-active-learning approach towards accurate right ventricular segmentation using a two-level uncertainty estimation
Saumiya et al. Unified automated deep learning framework for segmentation and classification of liver tumors
CN116092643A (zh) Interactive semi-automatic annotation method based on medical images
CN114612373A (zh) Image recognition method and server
Yang et al. Lung Nodule Segmentation and Uncertain Region Prediction with an Uncertainty-Aware Attention Mechanism
Wei et al. Application of U-net with variable fractional order gradient descent method in rectal tumor segmentation
Wibisono et al. Segmentation-based knowledge extraction from chest X-ray images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19955844

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19955844

Country of ref document: EP

Kind code of ref document: A1
