CN111179277B - Unsupervised self-adaptive breast lesion segmentation method - Google Patents


Info

Publication number
CN111179277B
CN111179277B CN201911264888.9A
Authority
CN
China
Prior art keywords
image
network
domain
target domain
nre
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911264888.9A
Other languages
Chinese (zh)
Other versions
CN111179277A (en)
Inventor
李程 (Li Cheng)
王珊珊 (Wang Shanshan)
肖韬辉 (Xiao Taohui)
郑海荣 (Zheng Hairong)
刘新 (Liu Xin)
梁栋 (Liang Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911264888.9A priority Critical patent/CN111179277B/en
Publication of CN111179277A publication Critical patent/CN111179277A/en
Application granted granted Critical
Publication of CN111179277B publication Critical patent/CN111179277B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an unsupervised domain-adaptive image segmentation method. A target-domain image is converted so that its semantic information is preserved while its shallow image information is reconstructed to take on the characteristics of the source-domain images; the converted and reconstructed target-domain image is then segmented with a model trained on the source-domain images. The model can thus migrate between data sets from different domains without any new data annotation.

Description

Unsupervised self-adaptive breast lesion segmentation method
Technical Field
The invention relates to the field of image processing, and in particular to an unsupervised domain-adaptive breast lesion segmentation method.
Background
Breast cancer is the most common cancer in women, and early diagnosis and treatment can effectively improve the long-term survival rate of breast cancer patients. Magnetic resonance imaging (MRI) is a multiparameter, multiphase imaging technique that can reflect tissue characteristics such as T1, T2 and proton density. With its high resolution and sensitivity, it has become one of the important tools for early screening of breast cancer, and breast MRI is increasingly applied in clinical practice, especially in early screening.
In breast cancer MRI screening, computer-aided image analysis is both the development trend and the core technical problem of the field. Early medical image segmentation relied on edge detection, texture features, morphological filtering and the like, but these methods required extensive manual labeling and case-by-case analysis, and their ability to capture deep structure and to adapt was limited. In recent years, machine learning algorithms represented by deep learning have made breakthrough progress in tasks such as image recognition and image segmentation. Deep neural networks (DNNs) and convolutional neural networks (CNNs) continue to advance, and applying deep learning to the segmentation and other analysis of medical images has become a development trend in the art.
In early screening of breast cancer, segmenting lesion areas from other areas is a precondition for subsequent deep analysis. Existing image segmentation techniques mostly adopt supervised deep learning: a training set in which lesion areas and healthy areas have been labeled is used to train a model or network, which then judges target images. However, even for the same type of image data, such as magnetic resonance images, if two data sets use inconsistent image acquisition systems or parameter settings, a deep-learning segmentation network trained on one data set can hardly obtain good segmentation results on the other, owing to the difference in data distribution.
In the field of breast cancer MRI screening in particular, the magnetic resonance scanning systems or imaging sequences used in different centers may be inconsistent, producing a distribution difference in the acquired data. Because of this difference, an already trained MRI image segmentation model cannot guarantee a stable discrimination effect under other systems or parameters.
One solution is to manually label the imaging sequences obtained from each magnetic resonance scanning system or parameter setting separately, i.e. to retrain with supervised learning for every new data set so as to ensure the effect on each one. The drawbacks are that labeling for image segmentation is very time-consuming, labeling medical images requires strong expertise and experience, low-cost batch manual labeling is infeasible, and the labeling standard is difficult to control and unify.
Another solution is to fine-tune the parameters of the trained segmentation network model on the new target data set, but this requires the participation of algorithm designers, and the fine-tuning still needs the cooperation of medical expertise, so unsupervised application of the trained model to other data sets cannot be achieved.
Disclosure of Invention
To overcome the poor cross-domain generalization of the segmentation models mentioned in the background art, the invention provides an unsupervised domain-adaptive image segmentation method. The target-domain image is converted so that its semantic information is preserved while its shallow image information is reconstructed toward the characteristics of the source-domain images; the converted and reconstructed target-domain image is then segmented with a model built on the source-domain images, so that the model migrates between data sets from different domains without any new data annotation.
According to a first aspect of the present invention, there is provided a method of converting the style of images between domains based on image reconstruction and image discrimination.
S101: obtain a source-domain image from a source-domain image set, where the images in the source-domain image set contain labeled feature regions;
obtain a target-domain image from a target-domain image set, where the target-domain image may or may not contain a region to be segmented similar to the feature regions in the source-domain image set;
taking the source-domain image and the target-domain image as inputs, train an image-domain discrimination network Ncl that judges which domain an image belongs to.
S102: taking the target-domain image as input, learn to reconstruct the target-domain image, obtaining a trained reconstruction network Nre.
S103: taking the target-domain image as input, pass the image data output by the reconstruction network Nre to the image-domain discrimination network Ncl, and optimize the parameters of the reconstruction network Nre according to the loss of the discrimination network Ncl.
S104: repeat step S103, continuously optimizing the reconstruction network Nre, until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
In S104, the set condition is that the loss of the image-domain discrimination network Ncl is smaller than a preset value.
With the conversion network Ntr proposed by the present invention, an image P in the target domain can be converted into an image P' that retains its image information but has the source-domain style.
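The S101–S104 loop can be sketched in miniature. The following toy example (an illustration, not the patent's implementation: the "discriminator" is a fixed logistic classifier on mean brightness and the "reconstruction network" is just a learnable brightness shift) shows how the discriminator's cross-entropy loss drives the conversion parameters until converted target images pass as source-domain images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "domains": the style difference is reduced to mean brightness
# (an assumption for illustration; Ncl and Nre are deep CNNs in the patent).
source = rng.normal(0.8, 0.05, size=(16, 8, 8))   # bright "source domain"
target = rng.normal(0.3, 0.05, size=(16, 8, 8))   # dark "target domain"

# Stand-in for the trained discrimination network Ncl: logistic classifier
# on mean intensity, returning p = P(image belongs to source domain).
w, thr = 20.0, 0.55                               # pretend-trained weights
def ncl(img):
    return 1.0 / (1.0 + np.exp(-w * (img.mean() - thr)))

# Stand-in for the reconstruction network Nre: identity plus a learnable
# brightness shift b (the "shallow" style parameter being optimized).
b = 0.0

# S103/S104: adjust Nre so Ncl classifies converted target images as
# source (label 1), i.e. minimize the cross entropy -log p, and stop
# once the loss drops below a preset value.
lr, preset = 0.01, 0.01
for step in range(2000):
    img = target[step % len(target)] + b          # converted image
    p = ncl(img)
    loss = -np.log(p + 1e-12)
    if loss < preset:
        break
    # d/db of -log(sigmoid(w*(mean+b-thr))) is -w*(1-p); gradient descent:
    b += lr * w * (1.0 - p)

converted = target + b          # the optimized Nre acting as Ntr
```

After convergence the converted target images sit in the source-domain brightness range, so the fixed discriminator can no longer tell them apart from source images.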
According to a second aspect of the present invention, there is provided a method for establishing an inter-domain image distribution adaptation model based on shallow and semantic features:
S201: obtain a source-domain image from a source-domain image set, where the images in the source-domain image set contain labeled feature regions;
obtain a target-domain image from a target-domain image set, where the target-domain image may or may not contain a region to be segmented similar to the feature regions in the source-domain image set;
taking the source-domain image and the target-domain image as inputs, train an image-domain discrimination network Ncl that judges which domain an image belongs to.
S202: taking the target-domain image as input, learn to reconstruct the target-domain image, obtaining a trained reconstruction network Nre. The reconstructed image information comprises shallow information M1 and deep semantic information M2, and the reconstruction network Nre comprises a shallow-information module nm1 corresponding to M1 and a semantic-information module nm2 corresponding to M2.
S203: taking the target-domain image as input, pass the image data output by the reconstruction network Nre to the image-domain discrimination network Ncl; optimize the parameters of the shallow-information module nm1 according to the loss of Ncl while keeping the parameters of the semantic-information module nm2 unchanged.
S204: repeat step S203, continuously optimizing the reconstruction network Nre, until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
Preferably, in S201 the cross-entropy loss function is used for training, and the image-domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S202 is the L2 loss;
preferably, the reconstruction network in S202 may adopt an encoder-decoder structure;
preferably, the loss of the image-domain discrimination network Ncl in S203 is the cross-entropy loss;
preferably, the set condition in S204 is that the loss of the image-domain discrimination network Ncl is smaller than a preset value.
With this inter-domain image distribution adaptation method based on shallow and semantic features, an image P in the target domain can be converted into an image P' that keeps its deep semantic information while its shallow features take on the source-domain style.
According to a third aspect of the present invention, there is provided an unsupervised adaptive image segmentation method.
S301: obtain a source-domain image from a source-domain image set, where the images in the source-domain image set contain labeled feature regions;
obtain a target-domain image from a target-domain image set, where the target-domain image may or may not contain a region to be segmented similar to the feature regions in the source-domain image set;
taking the source-domain image and the target-domain image as inputs, train an image-domain discrimination network Ncl that judges which domain an image belongs to.
S302: taking the target-domain image as input, learn to reconstruct the target-domain image, obtaining a trained reconstruction network Nre. The reconstructed image information comprises shallow information M1 and deep semantic information M2, and the reconstruction network Nre comprises a shallow-information module nm1 corresponding to M1 and a semantic-information module nm2 corresponding to M2.
S303: taking the target-domain image as input, pass the image data output by the reconstruction network Nre to the image-domain discrimination network Ncl; optimize the parameters of the shallow-information module nm1 according to the loss of Ncl while keeping the parameters of the semantic-information module nm2 unchanged.
S304: repeat step S303, continuously optimizing the reconstruction network Nre, until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
S305: based on the source-domain image set and its labeled feature regions, train by machine learning an image segmentation network Nse that separates feature regions from non-feature regions.
S306: convert the image P to be analyzed in the target-domain image set, through the conversion network Ntr, into a converted image P' that has the source-domain style and retains semantic information.
S307: perform image segmentation on the converted image P' with the image segmentation network Nse.
Preferably, in S301 the cross-entropy loss function is used for training, and the image-domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S302 is the L2 loss;
preferably, the reconstruction network in S302 may adopt an encoder-decoder structure;
preferably, the loss of the image-domain discrimination network Ncl in S303 is the cross-entropy loss;
preferably, the set condition in S304 is that the loss of the image-domain discrimination network Ncl is smaller than a preset value;
preferably, in S305 the image segmentation network is trained with the UNet algorithm;
preferably, in S305 the image segmentation network is trained with the UNet algorithm combined with attention mechanisms and/or multi-scale feature expression.
The image segmentation method proposed by the invention realizes unsupervised adaptation of a segmentation method from labeled source-domain images to unlabeled target-domain images, accomplishing the task of unsupervised target-domain image segmentation.
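At inference time the third aspect is simply the composition "segment after convert": Nse(Ntr(P)). A minimal sketch with hypothetical one-line stand-ins (ntr shifts target-domain brightness into the source range, nse segments by a threshold tuned on source images; neither is the patent's trained network):

```python
import numpy as np

# Hypothetical stand-ins for the trained networks.
def ntr(img):            # conversion network Ntr (toy: brightness shift)
    return img + 0.5

def nse(img):            # segmentation network Nse (toy: threshold)
    return img > 0.9     # feature region = bright pixels

# Target-domain image P: dark background with a brighter "lesion" patch.
p = np.full((8, 8), 0.2)
p[2:5, 2:5] = 0.6

p_prime = ntr(p)         # S306: convert P into source-style P'
mask = nse(p_prime)      # S307: segment the converted image

# Only the 3x3 lesion patch exceeds the source-domain threshold.
```

The point of the composition is that Nse never sees raw target-domain data; it always operates on images pulled into the source-domain distribution.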
According to a fourth aspect of the present invention, there is provided an adaptive image segmentation method for breast cancer screening.
S401: obtain a source-domain image from a source-domain image set, where the images contain labeled feature regions; the source-domain image set consists of labeled breast MRI images, and the feature regions are labeled tumor or cancerous tissue regions;
obtain a target-domain image from a target-domain image set, where the target-domain image is an unlabeled breast MRI image that may contain an image portion corresponding to a tumor or cancerous tissue region;
taking the source-domain image and the target-domain image as inputs, train an image-domain discrimination network Ncl that judges which domain an image belongs to.
S402: taking the target-domain image as input, learn to reconstruct the target-domain image, obtaining a trained reconstruction network Nre. The reconstructed image information comprises shallow information M1 and deep semantic information M2, and the reconstruction network Nre comprises a shallow-information module nm1 corresponding to M1 and a semantic-information module nm2 corresponding to M2.
S403: taking the target-domain image as input, pass the image data output by the reconstruction network Nre to the image-domain discrimination network Ncl; optimize the parameters of the shallow-information module nm1 according to the loss of Ncl while keeping the parameters of the semantic-information module nm2 unchanged.
S404: repeat step S403, continuously optimizing the reconstruction network Nre, until a set condition is met; the optimized reconstruction network is the conversion network Ntr.
S405: based on the source-domain image set and its labeled feature regions, train by machine learning an image segmentation network Nse that separates feature regions from non-feature regions.
S406: convert the image P to be analyzed in the target-domain image set, through the conversion network Ntr, into a converted image P' that has the source-domain style and retains semantic information.
S407: perform image segmentation on the converted image P' with the image segmentation network Nse; the feature regions obtained after segmentation are the image regions of suspected tumor or cancerous tissue found by breast cancer screening.
Preferably, in S401 the cross-entropy loss function is used for training, and the image-domain discrimination network Ncl is a residual network.
Preferably, the loss function of the reconstruction network in S402 is the L2 loss;
preferably, the reconstruction network in S402 may adopt an encoder-decoder structure;
preferably, the loss of the image-domain discrimination network Ncl in S403 is the cross-entropy loss;
preferably, the set condition in S404 is that the loss of the image-domain discrimination network Ncl is smaller than a preset value;
preferably, in S405 the image segmentation network is trained with the UNet algorithm;
preferably, in S405 the image segmentation network is trained with the UNet algorithm combined with attention mechanisms and/or multi-scale feature expression.
The adaptive image segmentation method for breast cancer screening proposed by the invention realizes unsupervised adaptation of a breast-lesion segmentation method from labeled source-domain images to unlabeled target-domain images, completing the task of unsupervised target-domain image segmentation.
According to a fifth aspect of the present invention, there is provided a breast cancer screening device based on adaptive image segmentation, comprising:
an acquisition unit for acquiring a source-domain image from a source-domain image set, where the images contain labeled feature regions; the source-domain image set consists of labeled breast MRI images, and the feature regions are labeled tumor or cancerous tissue regions;
the acquisition unit is also used for acquiring a target-domain image from a target-domain image set, where the target-domain image is an unlabeled breast MRI image that may contain an image portion corresponding to a tumor or cancerous tissue region;
an image-domain discrimination unit for taking the source-domain and target-domain images acquired by the acquisition unit as inputs and training an image-domain discrimination network Ncl that judges which domain an image belongs to;
an image reconstruction unit that takes the target-domain image as input and learns to reconstruct it, obtaining a trained reconstruction network Nre; the reconstructed image information comprises shallow information and deep semantic information, and the reconstruction network Nre comprises a shallow-information module nm1 corresponding to the shallow information and a semantic-information module nm2 corresponding to the deep semantic information;
an image conversion network optimization unit that takes the target-domain image as input, passes the image data output by the reconstruction network Nre to the image-domain discrimination network Ncl, and optimizes the parameters of the shallow-information module nm1 according to the loss of Ncl while keeping the parameters of the semantic-information module nm2 unchanged; this optimization is repeated until a set condition is met, and the optimized reconstruction network is the conversion network Ntr;
a source-domain segmentation network training unit that trains, by machine learning on the source-domain image set and its labeled feature regions, an image segmentation network Nse for feature and non-feature regions;
a target-domain image segmentation unit that converts the image P to be analyzed in the target-domain image set, through the conversion network Ntr, into a converted image P' that has the source-domain style and retains semantic information, and segments P' with the image segmentation network Nse; the feature regions obtained after segmentation are the image regions of suspected tumor or cancerous tissue found by breast cancer screening.
Preferably, in the image-domain discrimination unit the cross-entropy loss function is used for training, and the image-domain discrimination network Ncl is a residual network.
Preferably, in the image conversion network optimization unit, the set condition is that the loss of the image-domain discrimination network Ncl is smaller than a preset value.
The invention thus provides an unsupervised domain-adaptive breast lesion segmentation method that converts the domain of new data so that its distribution is forced close to that of the existing data set, realizing unsupervised domain-adaptive migration of a segmentation network. On this basis, even if a new data set differs from the labeled data set, the images in the new data set need not be labeled: the breast-lesion segmentation network trained on the labeled data set can directly adapt to the new data set and still obtain a good segmentation effect.
The invention therefore removes the prior-art burden that, for every set of breast MRI data acquired with particular experimental parameters, a doctor must label the data completely or partially before a segmentation model suited to that data set can be obtained, a process that is time-consuming, labor-intensive and expensive. The method achieves unsupervised segmentation of a new data set with the help of a labeled data set, reducing the economic cost of labeling, and saves time by directly optimizing the application of the model.
Drawings
FIG. 1 is a schematic diagram of an image adaptive conversion method between different fields;
FIG. 2 shows a schematic diagram of an adaptive image segmentation method;
fig. 3 illustrates a typical structure of an image segmentation model UNet;
FIG. 4 shows a schematic diagram of a breast cancer screening device based on adaptive image segmentation;
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced otherwise than as described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Example 1
As shown in Fig. 1, with the inter-domain image adaptive conversion method provided by the invention, even if a new data set differs from the labeled data set, the images in the new data set need not be labeled; adaptive learning across the two data sets is carried out through image conversion. The unlabeled data set thus retains its high-order semantic information through adaptive conversion, while shallow characteristics such as image style, texture and brightness are converted into those of the labeled data set, so that a network model trained on the labeled data set can be applied directly to the new data set.
An adaptive image conversion method according to an embodiment of the present invention comprises the following steps.
First, a data set containing labels is required as the source-domain images; it can be regarded as the template to which the other data sets are later matched. The target-domain images, for example a set of unlabeled images, are typically the image data to be analyzed, e.g. to be classified or segmented.
It should be noted that the source-domain and target-domain images should contain deep features of similar nature, for example images of the same kind of object or of similar scenes; it is these similar deep features that make adaptive image conversion feasible. The source-domain and target-domain images may differ in apparent features such as brightness, noise level, texture, or other non-semantic characteristics.
For example, in computer-aided medical image analysis, the source-domain images may be a series of annotated X-ray, CT or MRI data, while the target-domain images are of the corresponding kind but perhaps not acquired on the same instrument or under the same conditions. The source-domain images contain labeled feature regions, e.g. tumor or cancerous regions identified and marked by a professional physician.
First, an image-domain discrimination network is established (S301): the source-domain and target-domain images serve as training samples, and the network learns to classify any new test image as source-domain or target-domain. For this classification it is also necessary to compute its variance, residual or other loss, which is used for adjustment in subsequent steps.
For training the image-domain discrimination network, various classical classification and discrimination methods from deep learning can be used, for example a residual network. A residual network is a convolutional recognition network that is easy to optimize and can gain accuracy from considerably increased depth. Its internal residual blocks are connected by skip connections, which alleviate the vanishing-gradient problem that comes with increasing depth in deep neural networks.
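The skip connection described above means a residual block computes y = x + F(x), so even with near-zero weights the block stays close to the identity. A minimal numpy illustration (a single toy block, not a full residual network):

```python
import numpy as np

rng = np.random.default_rng(1)

# A residual block computes y = x + F(x): the skip connection passes x
# through unchanged, so when F's weights are small the block is close to
# the identity map, which is what eases optimization at great depth.
W = rng.normal(0, 0.01, size=(4, 4))   # small random weights for F

def residual_block(x):
    fx = np.maximum(W @ x, 0.0)        # F(x): linear layer + ReLU
    return x + fx                      # skip connection adds the input back

x = rng.normal(size=4)
y = residual_block(x)

# With near-zero weights, the output stays close to the input.
```

Gradients flow through the `x +` term directly, which is why stacking many such blocks does not suffer the vanishing gradients of a plain deep stack.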
During training, the discrimination error can be computed with the cross-entropy loss function. Cross entropy is particularly suitable for training binary classification models, and minimizing it is a well-convergent convex optimization problem. Each image is given a classification label, for example label 1 for source-domain images and label 0 for target-domain images; in the binary case the model predicts only two outcomes, with probabilities ŷ and 1-ŷ for the two classes. The cross entropy then expresses the loss as:

L = -[y·log(ŷ) + (1-y)·log(1-ŷ)]
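This binary cross-entropy loss is straightforward to compute directly; a small sketch with the label convention used here (source = 1, target = 0):

```python
import numpy as np

def bce(y, y_hat, eps=1e-12):
    """Binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # guard against log(0)
    return -(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

# Source-domain label 1, target-domain label 0, as in the text.
# A confident correct prediction gives a small loss; a confident
# wrong one is penalized heavily.
low  = bce(1.0, 0.99)    # confident and correct
high = bce(1.0, 0.01)    # confident and wrong
```

The steep penalty for confident wrong predictions is what gives the discrimination network a strong, well-behaved gradient signal.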
Second, image reconstruction of the target-domain image is learned (S302): the target-domain image itself is both the input and the desired output. Through this continuous self-learning and training process, the shallow appearance information and the deep semantic features of the medical image can be separated: the deep semantic features, such as the area and edge features of a tumor, are retained, while shallow features such as style, brightness, texture and noise level can gradually be converted to the source-domain style by the optimization driven by the discrimination network.
In a typical application, the image reconstruction network Nre can adopt the encoder-decoder structure SegNet. The encoder structure of SegNet corresponds one-to-one with the decoder structure, i.e., each decoder has the same spatial size and number of channels as its corresponding encoder. In the basic SegNet structure, the encoder and the decoder each have 13 convolutional layers, making the model much smaller than the corresponding classical FCN image segmentation model. This is thanks to the trade-off SegNet makes to reduce computation: direct deconvolution is replaced by upsampling that reuses the positional information recorded during max pooling.
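The index-recording pooling that SegNet uses in place of learned deconvolution can be sketched in a few lines of numpy (an illustration of the mechanism on a single-channel array, not SegNet's actual implementation):

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling that records where each maximum came from,
    as SegNet's encoder does."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    indices = np.zeros((h // 2, w // 2), dtype=np.int64)  # flat index into x
    for i in range(h // 2):
        for j in range(w // 2):
            patch = x[2*i:2*i+2, 2*j:2*j+2]
            k = int(np.argmax(patch))                 # position 0..3 in patch
            pooled[i, j] = patch.flat[k]
            indices[i, j] = (2*i + k // 2) * w + (2*j + k % 2)
    return pooled, indices

def max_unpool(pooled, indices, shape):
    """SegNet-style decoder upsampling: place each pooled value back at the
    recorded position and leave the other positions zero."""
    out = np.zeros(shape)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 2., 0., 3.],
              [4., 0., 1., 0.],
              [0., 5., 2., 2.],
              [6., 0., 0., 1.]])
p, idx = max_pool_with_indices(x)
u = max_unpool(p, idx, x.shape)
```

Because only the indices are stored, the decoder needs no learned deconvolution weights for the upsampling step, which is the computation/volume trade-off mentioned above.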
The image reconstruction loss function can be an L2 loss, which is also the more common choice for general CNN losses because it converges faster than an L1 loss.
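A minimal comparison of the two losses on illustrative values (a sketch, not the network's training code):

```python
def l2_loss(recon, target):
    """Mean squared (L2) reconstruction error."""
    return sum((r - t) ** 2 for r, t in zip(recon, target)) / len(target)

def l1_loss(recon, target):
    """Mean absolute (L1) error, for comparison."""
    return sum(abs(r - t) for r, t in zip(recon, target)) / len(target)

target = [0.0, 0.0, 0.0, 0.0]
far    = [2.0, 2.0, 2.0, 2.0]   # reconstruction far from the target
near   = [0.1, 0.1, 0.1, 0.1]   # reconstruction close to the target
# Far from the optimum the L2 error (and its gradient, which is
# proportional to the error) dominates the L1 error, one intuition for
# the faster convergence noted above.
```

With these values, L2 penalizes the distant reconstruction four times more heavily than L1 does, while near the optimum the penalties shrink quadratically.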
Third, the reconstruction network of the second step (S302) is optimized to obtain a conversion network (S303). The purpose of the conversion network is to convert the target domain image into the style of the source domain image, pulling the distributions of the two domains closer so that subsequent operations, such as segmentation, can be carried out on the target domain image.
Taking the encoder-decoder structure of the second step (S302) as an example, the reconstructed image contains shallow information M1 and deep semantic information M2 of the image, and the image reconstruction network Nre contains a shallow information module nm1 corresponding to the shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information M2. As noted above, the shallow information is the image style, brightness, texture, noise level and so on, while the deep information is, for example, the region and edge features of a tumor in a medical image.
After the target domain image is converted by the reconstruction network Nre obtained in the second step (S302), a generated source domain image is obtained. The image domain discrimination network of the first step (S301) then classifies this generated image, judging whether the converted image can be classified as a source domain image. The cross-entropy produced in this classification is used as a loss function to correct and optimize the parameters of the shallow information module nm1 in the reconstruction network Nre, while the parameters of the deep semantic information module nm2 are kept unchanged. In the embodiment employing the encoder-decoder structure, only the parameters of the first two or three coding modules in the encoder section of the reconstruction network are updated continuously, while the rest remain unchanged.
The output of the corrected reconstruction network Nre' is again fed into the image domain discrimination network for correction, and the above process is repeated, so that the reconstruction network is continuously optimized: the shallow information of the reconstructed target domain image approaches the source domain images more and more, while the deep semantic information remains unchanged (S304).
When a set condition is reached (a typical example is when the generated source domain image is classified as a source domain image and the cross-entropy loss is sufficiently small, i.e. below a certain threshold), the parameters of the reconstruction network Nre can be considered corrected and optimized to an acceptable level. The network obtained at this point is the image conversion network Ntr.
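A toy sketch of the S303/S304 loop, assuming a single scalar "brightness" parameter for the shallow module nm1 and a squared-error surrogate for the discriminator's cross-entropy (both are illustrative assumptions, not the patent's actual networks):

```python
# Toy stand-ins for the two parameter groups of the reconstruction network
# Nre: nm1 (shallow: style/brightness) is trainable, nm2 (deep: semantics)
# is frozen. All names and numbers are illustrative.
params = {"nm1_brightness": 0.0, "nm2_semantic": 1.0}

source_mean = 0.8   # assumed mean intensity of source domain images
target_mean = 0.2   # a darker target domain image

def converted_mean(params):
    # The shallow module shifts the style (brightness);
    # the deep module is applied unchanged.
    return target_mean * params["nm2_semantic"] + params["nm1_brightness"]

def discriminator_loss(params):
    # Surrogate for Ncl's cross-entropy: small when the converted image
    # statistics look like the source domain.
    return (converted_mean(params) - source_mean) ** 2

nm2_before = params["nm2_semantic"]
for _ in range(1000):                        # repeat S303 (the S304 loop)
    if discriminator_loss(params) < 1e-6:    # the "set condition"
        break
    grad = 2 * (converted_mean(params) - source_mean)
    params["nm1_brightness"] -= 0.1 * grad   # update only the shallow module
```

The loop stops once the surrogate loss falls below the threshold; throughout, only the shallow parameter moves, while the frozen deep parameter plays the role of the preserved semantics.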
That is, after an unlabeled target domain image is input into the image conversion network, the representation style of the output image approaches that of the source domain training set images while the semantic information of its own deep features is preserved. Thus, the various network models trained on the annotated dataset can be applied directly to the new dataset.
Embodiment Two
Another embodiment of the invention, shown in fig. 2, is an unsupervised domain-adaptive image segmentation method, which can be applied in particular to computer-aided identification of breast lesions in MRI images. The method mainly comprises the following steps: an image domain discrimination network is established (S401), learning of image reconstruction of a target domain image is performed (S402), a conversion network is obtained by optimization based on the reconstruction network (S403, S404), an image segmentation network is trained on labeled source domain images (S405), the target domain image is converted by the conversion network (S406), and the converted target image is then segmented by the image segmentation network (S407).
Among the above steps, S401, S402, S403 and S404 are the same as the corresponding steps of embodiment one, and will not be described again.
Image segmentation network training (S405) is a typical supervised segmentation problem based on labeled images; in application, a supervised segmentation network is trained. It may be any medical image segmentation network currently in wide use, such as UNet.
Since its introduction, UNet has been one of the most widely used models in image segmentation, and its encoder (downsampling)-decoder (upsampling) structure with skip connections is a very classical design. Many newer convolutional neural network designs now exist, but many of them continue the core ideas of UNet, adding new modules or integrating other design ideas.
The structure of UNet is shown in fig. 3, where the left side can be regarded as an encoder and the right side as a decoder. The encoder has four sub-modules, each comprising two convolutional layers, with each sub-module followed by a downsampling layer implemented by max pooling. The resolution of the input image is 572x572, and the resolutions of the 1st to 5th blocks are 572x572, 284x284, 140x140, 68x68 and 32x32, respectively. Since the convolutions use valid mode, the resolution of each following sub-module equals (resolution of the previous sub-module − 4) / 2. The decoder contains four sub-modules whose resolution is successively increased by upsampling until it matches the resolution of the input (because the convolutions use valid mode, the actual output is smaller than the input image). The network also uses skip connections, concatenating the upsampled result with the output of the encoder sub-module of the same resolution as the input of the next decoder sub-module.
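The valid-convolution arithmetic above can be checked in a few lines; the constant 4 comes from the two valid 3x3 convolutions per sub-module, and the halving from the 2x2 max pooling:

```python
# Resolution bookkeeping for the UNet encoder described above: two valid
# 3x3 convolutions remove 4 pixels in total, then 2x2 max pooling halves
# the spatial size.
def next_block_resolution(r):
    return (r - 4) // 2

resolutions = [572]          # resolution of the input / 1st block
for _ in range(4):           # four downsampling steps to the 5th block
    resolutions.append(next_block_resolution(resolutions[-1]))
# resolutions -> [572, 284, 140, 68, 32]
```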
The network structure of UNet is particularly suitable for the segmentation of medical images: the boundaries in medical images are fuzzy and the gradients complex, so more high-resolution information is needed, which deep learning provides through upsampling, downsampling and skip connections. At the same time, the targets to be segmented have similar and regular shapes, for example approximately circular, and their distributed areas all lie within a certain range. Since the structure of the organ itself is fixed and the semantic information is not especially rich, both high-level semantic information and low-level features are important, and both the skip connections and the U-shaped structure of UNet suit this kind of information.
In addition, effective new modules, such as an attention mechanism added to UNet or multi-scale feature expression, can be introduced into the image segmentation process.
Image conversion is performed on the target image to be analyzed (S406): an image P to be analyzed in the target domain image set is converted, through the conversion network Ntr, into a converted image P' which has the source domain style and retains the semantic information. The conversion network here is the one trained in S402, S403 and S404; that is, the representation style of the images it outputs approaches the source domain training set images while the semantic information of their own deep features is preserved.
Image segmentation is performed and the result identified (S407): the converted image P' is segmented using the image segmentation network established in S405. In breast cancer MRI screening, the feature region produced by the segmentation is the image region of tumor or cancer tissue flagged as suspected breast cancer.
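As an illustrative sketch only (the real Ntr and Nse are trained networks; here the "conversion" is simple statistics matching and the "segmentation" a fixed threshold, both assumptions made for demonstration), the convert-then-segment flow of S406 and S407 can be written as:

```python
import numpy as np

def convert_to_source_style(p, source_mean, source_std):
    """Toy stand-in for the conversion network Ntr: match the shallow
    statistics (brightness/contrast) of the target image to the source
    domain while leaving the spatial layout, the 'semantics', intact."""
    return (p - p.mean()) / (p.std() + 1e-8) * source_std + source_mean

def segment(p_converted, level=0.5):
    """Toy stand-in for the segmentation network Nse: a fixed threshold
    tuned for source-style images."""
    return p_converted > level

# A dark target domain image with a brighter "lesion" in the centre
p = np.full((8, 8), 0.05)
p[3:5, 3:5] = 0.15

p_prime = convert_to_source_style(p, source_mean=0.4, source_std=0.3)
mask = segment(p_prime)
```

Applying the threshold directly to the raw target image finds nothing, while applying it after the style conversion recovers exactly the lesion pixels, which is the motivation for converting before segmenting.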
The method provided by the above steps can realize unsupervised domain-adaptive lesion segmentation of breast magnetic resonance images.
Embodiment Three
Referring to fig. 4, the present embodiment provides a breast cancer screening device based on adaptive image segmentation, including:
the acquisition unit is used for acquiring a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region, the source domain image set is a marked breast MRI image, and the characteristic region is a marked tumor or cancer tissue region;
the method is also used for acquiring a target domain image in a target domain image set, wherein the target domain image is an unlabeled breast MRI image, and the target domain image possibly contains an image part corresponding to a tumor or cancer tissue area;
the image domain judging unit is used for taking the source domain image and the target domain image which are acquired by the acquiring unit as input, and establishing an image domain judging network Ncl for judging the domain to which the image belongs through a training function;
the image reconstruction unit takes the target domain image as input and output, and performs image reconstruction learning on the target domain image to obtain a learned reconstruction network Nre; the reconstructed image information comprises shallow information and deep semantic information of the image, and the image reconstruction network Nre comprises a shallow information module nm1 corresponding to the image shallow information and a semantic information module nm2 corresponding to the deep semantic information;
the image conversion network optimizing unit takes a target domain image as input; the image data obtained after passing through the reconstruction network Nre is judged by the image domain discrimination network Ncl, the parameters of the shallow information module nm1 are optimized and adjusted according to the loss data of the image domain discrimination network Ncl, and the parameters of the semantic information module nm2 are kept unchanged; this optimization and adjustment process is repeated until a set condition is reached, and the optimized reconstruction network serves as the conversion network Ntr;
the source domain image segmentation network training unit is used for training, through machine learning on the source domain image set and its marked characteristic regions, an image segmentation network Nse for the characteristic regions and non-characteristic regions;
the target domain image segmentation unit is used for converting an image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' which has the source domain style and retains semantic information, and for performing image segmentation on the converted image P' with the image segmentation network Nse; the characteristic region obtained by image segmentation is the image region of tumor or cancer tissue flagged as suspected breast cancer.
Each of the above units may, separately or together, be combined into one or several other units, or some unit(s) may be further split into several functionally smaller units; this achieves the same operation without affecting the technical effects of the embodiments of the invention. The above units are divided on the basis of logical function; in practical applications, the function of one unit may be implemented by several units, or the functions of several units may be implemented by one unit. In other embodiments of the invention, the model-based training apparatus may likewise include other units, and in practice these functions may also be realized with the assistance of other units and implemented cooperatively by several units.
According to another embodiment of the present invention, the model training apparatus shown in fig. 4 may be constructed, and the model training method of the embodiments of the invention implemented, by running a computer program (including program code) capable of executing the steps of the methods of embodiment two on a general-purpose computing device, such as a computer comprising processing elements such as a central processing unit (CPU) and storage elements such as random access memory (RAM) and read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, loaded into the above computing device via that medium, and executed there.
Embodiment Four
A fourth embodiment of the present invention provides a computer storage medium storing one or more instructions adapted to be loaded by a processor to perform the adaptive image segmentation method of the preceding embodiment.
The steps in the method of the embodiments of the present invention may be sequentially adjusted, combined, and deleted according to actual needs.
The units in the device of the embodiments of the invention can be combined, divided and deleted according to actual needs.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program instructing the associated hardware. The program may be stored in a computer-readable storage medium, including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The technical solution of the present invention has been described in detail with reference to the accompanying drawings. The above are only preferred embodiments of the invention and are not intended to limit it; various modifications and variations can be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in its scope of protection.

Claims (6)

1. A method for building an inter-domain image style conversion network based on image reconstruction and image discrimination, characterized in that:
the method comprises the following steps,
s101: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image has or does not have a region to be segmented similar to a characteristic region in the source domain image set;
taking the source domain image and the target domain image as inputs, and establishing an image domain judging network Ncl for judging the domain to which the image belongs through a training function, wherein the training function is a cross entropy loss function;
s102: taking a target domain image as input and output, performing image reconstruction learning on the target domain image to obtain a learned reconstruction network Nre, wherein the reconstructed image information comprises image shallow information M1 and deep semantic information M2, the reconstruction network Nre comprises a shallow information module nm1 corresponding to the image shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information M2, the reconstruction network Nre adopts an encoder-decoder structure, and the loss function adopted in the learning process of the reconstruction network Nre is an L2 loss function;
s103: taking a target domain image as input, judging by the image domain judging network Ncl through image data obtained after the target domain image is subjected to reconstruction network Nre, optimizing and adjusting parameters of the shallow information module nm1 according to cross entropy loss data of the image domain judging network Ncl, and maintaining parameters of the semantic information module nm2 unchanged so as to optimize and adjust parameters of the reconstruction network Nre;
s104: repeating step S103 and continuously optimizing the reconstruction network Nre until a set condition is reached, wherein the optimized reconstruction network serves as the conversion network Ntr;
the conversion network Ntr is used for converting the image in the target domain into a converted image which retains image information and has a source domain style.
2. A method for establishing an inter-domain image distribution adaptive model based on shallow semantic features, characterized in that:
the method comprises the following steps,
s201: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image has or does not have a region to be segmented similar to a characteristic region in the source domain image set;
taking the source domain image and the target domain image as inputs, and establishing an image domain judging network Ncl for judging the domain to which the image belongs through a training function, wherein the training function is a cross entropy loss function;
s202: taking a target domain image as input and output, performing image reconstruction learning on the target domain image to obtain a learned reconstruction network Nre, wherein the reconstructed image information comprises image shallow information M1 and deep semantic information M2, the reconstruction network Nre comprises a shallow information module nm1 corresponding to the image shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information M2, the reconstruction network Nre adopts an encoder-decoder structure, and the loss function adopted in the learning process of the reconstruction network Nre is an L2 loss function;
s203: taking a target domain image as input, judging by the image domain judging network Ncl after the image data is obtained by the reconstruction network Nre, optimizing and adjusting parameters of the shallow information module nm1 according to cross entropy loss data of the image domain judging network Ncl, and keeping parameters of the semantic information module nm2 unchanged;
s204: repeating step S203 and continuously optimizing the reconstruction network Nre until a set condition is reached, using the optimized reconstruction network as the conversion network Ntr, and establishing the inter-domain image distribution adaptive model through the conversion network Ntr.
3. A method of establishing an inter-domain image distribution adaptive model as claimed in claim 2, wherein: the set condition in S204 is that the cross-entropy loss data of the image domain discrimination network Ncl is smaller than a preset value.
4. An unsupervised adaptive image segmentation method, characterized in that:
the method comprises the following steps,
s301: obtaining a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region;
obtaining a target domain image in a target domain image set, wherein the target domain image has or does not have a region to be segmented similar to a characteristic region in the source domain image set;
taking the source domain image and the target domain image as inputs, and establishing an image domain judging network Ncl for judging the domain to which the image belongs through a training function, wherein the training function is a cross entropy loss function;
s302: taking a target domain image as input and output, performing image reconstruction learning on the target domain image to obtain a learned reconstruction network Nre, wherein the reconstructed image information comprises image shallow information M1 and deep semantic information M2, the reconstruction network Nre comprises a shallow information module nm1 corresponding to the image shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information M2, the reconstruction network Nre adopts an encoder-decoder structure, and the loss function adopted in the learning process of the reconstruction network Nre is an L2 loss function;
s303: taking a target domain image as input, judging by the image domain judging network Ncl after the image data is obtained by the reconstruction network Nre, optimizing and adjusting parameters of the shallow information module nm1 according to cross entropy loss data of the image domain judging network Ncl, and keeping parameters of the semantic information module nm2 unchanged;
s304: repeating step S303 and continuously optimizing the reconstruction network Nre until a set condition is reached, and using the optimized reconstruction network as the conversion network Ntr;
s305: training an image segmentation network Nse aiming at the characteristic region and the non-characteristic region through machine learning based on the source domain image set and the marked characteristic region;
s306: converting an image P to be analyzed in the target domain image set into a converted image P' which has a source domain style and retains target domain semantic information through a conversion network Ntr;
s307: performing image segmentation on the above converted image P' using the image segmentation network Nse.
5. An unsupervised adaptive image segmentation method according to claim 4, characterized in that: the source domain image set is a marked breast MRI image, the characteristic region is a tumor or cancer tissue region marked, and the target domain image is an unmarked breast MRI image.
6. A breast cancer screening device based on self-adaptive image segmentation is characterized in that:
the device comprises:
the acquisition unit is used for acquiring a source domain image in a source domain image set, wherein the image in the source domain image set comprises a marked characteristic region, the source domain image set is a marked breast MRI image, and the characteristic region is a marked tumor or cancer tissue region;
the acquisition unit is also used for acquiring a target domain image in a target domain image set, wherein the target domain image is an unlabeled breast MRI image and contains an image portion corresponding to a tumor or cancer tissue region;
the image domain judging unit is used for taking the source domain image and the target domain image which are acquired by the acquiring unit as input, and establishing an image domain judging network Ncl for judging the domain to which the image belongs through a training function, wherein the training function is a cross entropy loss function;
the image reconstruction unit takes a target domain image as input and output, and performs image reconstruction learning on the target domain image to obtain a learned reconstruction network Nre, wherein the reconstructed image information comprises image shallow information M1 and deep semantic information M2, the reconstruction network Nre comprises a shallow information module nm1 corresponding to the image shallow information M1 and a semantic information module nm2 corresponding to the deep semantic information M2, the reconstruction network Nre adopts an encoder-decoder structure, and the loss function adopted in the learning process of the reconstruction network Nre is an L2 loss function;
the image conversion network optimizing unit takes a target domain image as input; the image data obtained after passing through the reconstruction network Nre is judged by the image domain discrimination network Ncl, the parameters of the shallow information module nm1 are optimized and adjusted according to the cross-entropy loss data of the image domain discrimination network Ncl, and the parameters of the semantic information module nm2 are kept unchanged; this optimization and adjustment process is repeated until a set condition is reached, and the optimized reconstruction network serves as the conversion network Ntr;
the source domain image segmentation network training unit is used for training, through machine learning on the source domain image set and its marked characteristic regions, an image segmentation network Nse for the characteristic regions and non-characteristic regions;
the target domain image segmentation unit is used for converting an image P to be analyzed in the target domain image set, through the conversion network Ntr, into a converted image P' which has the source domain style and retains target domain semantic information, and for performing image segmentation on the converted image P' with the image segmentation network Nse; the characteristic region obtained after image segmentation serves as the image region of tumor or cancer tissue for suspected breast cancer screening.
CN201911264888.9A 2019-12-11 2019-12-11 Unsupervised self-adaptive breast lesion segmentation method Active CN111179277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911264888.9A CN111179277B (en) 2019-12-11 2019-12-11 Unsupervised self-adaptive breast lesion segmentation method


Publications (2)

Publication Number Publication Date
CN111179277A CN111179277A (en) 2020-05-19
CN111179277B true CN111179277B (en) 2023-05-02

Family

ID=70657198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911264888.9A Active CN111179277B (en) 2019-12-11 2019-12-11 Unsupervised self-adaptive breast lesion segmentation method

Country Status (1)

Country Link
CN (1) CN111179277B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001398B (en) * 2020-08-26 2024-04-12 科大讯飞股份有限公司 Domain adaptation method, device, apparatus, image processing method, and storage medium
CN112686906B (en) * 2020-12-25 2022-06-14 山东大学 Image segmentation method and system based on uniform distribution migration guidance
CN112784879A (en) * 2020-12-31 2021-05-11 前线智能科技(南京)有限公司 Medical image segmentation or classification method based on small sample domain self-adaption

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062753A (en) * 2017-12-29 2018-05-22 重庆理工大学 The adaptive brain tumor semantic segmentation method in unsupervised domain based on depth confrontation study
CN110322446A (en) * 2019-07-01 2019-10-11 华中科技大学 A kind of domain adaptive semantic dividing method based on similarity space alignment
CN110533044A (en) * 2019-05-29 2019-12-03 广东工业大学 A kind of domain adaptation image, semantic dividing method based on GAN




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant