CN115861250B - Semi-supervised medical image organ segmentation method and system for self-adaptive data set - Google Patents

Semi-supervised medical image organ segmentation method and system for self-adaptive data set

Info

Publication number
CN115861250B
Authority
CN
China
Prior art keywords
medical image
dataset
segmentation
semi-supervised
Prior art date
Legal status
Active
Application number
CN202211607575.0A
Other languages
Chinese (zh)
Other versions
CN115861250A (en)
Inventor
黄炳顶
黄永志
张瀚文
Current Assignee
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date
Filing date
Publication date
Application filed by Shenzhen Technology University filed Critical Shenzhen Technology University
Priority to CN202211607575.0A
Publication of CN115861250A
Application granted
Publication of CN115861250B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a semi-supervised medical image organ segmentation method and system with a self-adaptive dataset, comprising the following steps: acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset; performing statistical analysis on the medical image dataset to obtain statistical information; preprocessing the medical image dataset based on the statistical information; constructing a semi-supervised learning framework and a segmentation network from a semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework guides nnU-Net to adaptively design the preprocessing method, the segmentation network structure, and the hyperparameters; training the segmentation network on the preprocessed labeled and unlabeled datasets with five-fold cross-validation to obtain a trained segmentation network; and performing organ segmentation on medical images with the trained network to obtain a segmentation result map. The method offers strong generalization performance and high segmentation accuracy for medical image organ segmentation tasks.

Description

Semi-supervised medical image organ segmentation method and system for self-adaptive data set
Technical Field
The invention relates to the technical field of medical image segmentation, in particular to a semi-supervised medical image organ segmentation method and system of a self-adaptive data set.
Background
Medical image organ segmentation has important research significance and application value: in medical auxiliary diagnosis systems for lesion analysis, surgical planning, and disease diagnosis, the structural contour of the relevant organ must first be obtained before subsequent work can proceed.
Although models and algorithms for semi-supervised semantic segmentation already exist, their performance is affected by differences in attributes such as medical imaging equipment, imaging modality, voxel spacing, and image resolution. As a result, existing segmentation models generalize poorly and achieve low segmentation accuracy on medical images.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The invention mainly aims to provide a semi-supervised medical image organ segmentation method, system, intelligent terminal, and computer-readable storage medium for a self-adaptive dataset, and aims to solve the problems that existing segmentation models generalize poorly and achieve low segmentation accuracy on medical images.
In order to achieve the above object, the present invention provides a semi-supervised medical image organ segmentation method of an adaptive dataset, the segmentation method comprising:
Acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
carrying out statistical analysis on the medical image dataset to obtain statistical information;
preprocessing the medical image dataset based on the statistical information;
constructing a semi-supervised learning framework and a segmentation network according to a semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework is used for guiding nnU-Net to adaptively design the preprocessing method, the segmentation network structure, and the hyperparameters;
training the segmentation network on the preprocessed labeled dataset and the preprocessed unlabeled dataset using a five-fold cross-validation method to obtain a trained segmentation network;
and inputting the acquired medical image dataset into the trained segmentation network for organ segmentation, and outputting a segmentation result map.
Optionally, the statistical information includes a mean value of pixel intensities, and the preprocessing the medical image dataset based on the statistical information includes:
resampling each image sample in the medical image dataset;
and carrying out pixel intensity standardization on the pixel intensity of each pixel in each image sample in turn according to the pixel intensity average value.
Optionally, the statistical information further includes a first median value of voxel intervals of the image samples in an XY plane and a second median value of the voxel intervals in a Z-axis direction, and resampling each image sample in the medical image dataset includes:
setting the first median to be the voxel interval of an XY plane during resampling;
and if the ratio of the second median to the first median does not exceed a set threshold, setting the second median as the voxel interval in the Z direction during resampling; otherwise, setting one tenth of the second median as the voxel interval in the Z direction during resampling.
Optionally, two parallel segmentation networks are provided. When image samples from the labeled dataset are input into the segmentation networks, the training loss includes: the loss between each network's prediction and the real label, and the loss between the two networks' predictions. When image samples from the unlabeled dataset are input, the training loss includes only the loss between the two networks' predictions.
Optionally, training the segmentation network on the preprocessed labeled dataset and the preprocessed unlabeled dataset using a five-fold cross-validation method includes:
Sampling from the preprocessed labeled dataset using five-fold cross-validation to obtain a sample set;
and training the segmentation network with the sample set and the preprocessed unlabeled dataset as training samples.
Optionally, the method further comprises testing the trained segmentation network, and the testing method comprises the following steps:
acquiring test set data and preprocessing the test set data;
dividing the image samples in the preprocessed test set data into blocks by adopting a sliding window with a preset step length, and inputting the blocks into a trained segmentation network;
and removing false-positive regions from the segmentation network's predictions using a non-maximum suppression algorithm to obtain a test result.
In order to achieve the above object, the present invention also provides a semi-supervised medical image organ segmentation system of an adaptive dataset, the segmentation system comprising:
a dataset acquisition module for acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
the statistics module is used for carrying out statistical analysis on the medical image data set to obtain statistical information;
a preprocessing module for preprocessing the medical image dataset based on the statistical information;
The construction module is used for constructing a semi-supervised learning framework and a segmentation network according to the semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework is used for guiding nnU-Net to adaptively design the preprocessing method, the segmentation network structure, and the hyperparameters;
and the optimization module is used for training the segmentation network on the preprocessed labeled dataset and the preprocessed unlabeled dataset using a five-fold cross-validation method to obtain a trained segmentation network.
Optionally, the system further comprises a test module for acquiring test set data and preprocessing it; dividing the image samples in the preprocessed test set data into blocks using a sliding window with a preset step length and inputting the blocks into the trained segmentation network; and removing false-positive regions from the segmentation network's predictions using a non-maximum suppression algorithm to obtain a test result.
In order to achieve the above object, the present invention also provides an intelligent terminal, which includes a memory, a processor, and a semi-supervised medical image organ segmentation program of an adaptive dataset stored on the memory and executable on the processor, wherein the semi-supervised medical image organ segmentation program of the adaptive dataset implements the steps of any one of the semi-supervised medical image organ segmentation methods of the adaptive dataset when executed by the processor.
In order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a semi-supervised medical image organ segmentation program of an adaptive dataset, which, when executed by a processor, implements the steps of any one of the above-mentioned semi-supervised medical image organ segmentation methods of the adaptive dataset.
From the above, the invention first acquires statistical information about the medical image dataset, then preprocesses the dataset based on that information, constructs a semi-supervised learning framework and a segmentation network from a semi-supervised learning method and nnU-Net, and trains the segmentation network in a semi-supervised manner using a small amount of preprocessed labeled data and a large amount of unlabeled data. Compared with the prior art, the method adapts to various datasets through nnU-Net without any manual parameter tuning, and the semi-supervised framework guides nnU-Net to adaptively design the preprocessing method, the segmentation network structure, and the hyperparameters. The trained segmentation network has strong generalization performance and high segmentation accuracy, and can accomplish medical image organ segmentation tasks.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of a semi-supervised medical image organ segmentation method for an adaptive dataset provided by the present invention;
FIG. 2 is a flowchart of step S300 in the embodiment of FIG. 1;
FIG. 3 is a schematic diagram of a network architecture of the semi-supervised learning framework of the embodiment of FIG. 1;
FIG. 4 is a schematic diagram showing a comparison of the segmentation results in the embodiment of FIG. 1;
FIG. 5 is a flow diagram of an embodiment of testing a trained segmentation network;
FIG. 6 is a schematic structural diagram of a semi-supervised medical image organ segmentation system for adaptive dataset provided by an embodiment of the present invention;
fig. 7 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted in context as "when", "upon", "in response to a determination", or "in response to detection". Similarly, the phrase "if a described condition or event is determined" or "if a described condition or event is detected" may be interpreted as "upon determining", "in response to determining", "upon detecting the described condition or event", or "in response to detecting the described condition or event".
The following description of the embodiments of the present invention will be made more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown, it being evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Medical image data has its own particularities: the data volume is small, differences between datasets are large, and labeling costs are high. In three-dimensional medical imaging especially, differences in properties such as imaging equipment, imaging modality, voxel spacing, and image resolution affect the performance of a segmentation model. Consequently, when an existing segmentation model is migrated to three-dimensional medical images, its generalization performance is poor and its segmentation accuracy is low, making it difficult to apply to medical image organ segmentation.
To solve these problems, the invention provides a dataset-adaptive automatic medical image organ segmentation method based on semi-supervised learning, using a small number of labeled medical images and a large number of unlabeled medical images as the dataset. The preprocessing method, the segmentation network structure, and the relevant hyperparameters can be designed adaptively for the dataset without any manual parameter tuning, and the model can be deployed and trained automatically on tasks with a small amount of labeled data and a large amount of unlabeled data, thereby accomplishing medical image organ segmentation.
Exemplary method
As shown in fig. 1, an embodiment of the present invention provides a method for segmenting a semi-supervised medical image organ with a self-adaptive data set, which is deployed on electronic devices such as a mobile terminal, a computer, a server, etc., to realize an organ segmentation task for a three-dimensional CT image. Specifically, the above-mentioned segmentation method includes the following steps:
step S100: acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
specifically, the medical images include ultrasound images, CT images, nuclear magnetic resonance images, and the like, and a series of medical images can be acquired through interfaces of respective medical imaging devices to generate a medical image dataset; the medical image dataset may also be acquired from a background server. The specific acquisition mode of the medical image dataset is not limited.
To reduce the labeling workload, three deep learning paradigms are mainly used at present: self-supervised learning, semi-supervised learning, and weakly supervised learning. Self-supervised learning trains a model in a supervised manner on unlabeled data to learn basic knowledge and then transfers that knowledge; semi-supervised learning learns directly from limited labeled data and a large amount of unlabeled data to obtain high-quality segmentation results; weakly supervised learning learns image segmentation from bounding boxes, scribbles, or image-level labels rather than pixel-level annotations. Since weakly supervised and self-supervised learning perform less well on medical image segmentation tasks, especially for three-dimensional medical images, this embodiment adopts a semi-supervised learning method for the organ segmentation task on three-dimensional CT images. Accordingly, the medical image dataset includes a labeled dataset composed of a small number of labeled CT images and an unlabeled dataset composed of a large number of unlabeled CT images.
Step S200: carrying out statistical analysis on the medical image dataset to obtain statistical information;
Specifically, statistical analysis is performed on all labeled and unlabeled images in the labeled and unlabeled datasets. The specific method of statistical analysis is not limited, and the items analyzed can be determined by the application scenario, for example: counting the image resolution, imaging modality, voxel spacing, and pixel intensity of all image samples in the medical image dataset to obtain statistics such as the median voxel spacing and the mean, variance, median, and quantiles of the pixel intensity. These statistics are used in subsequent data preprocessing and hyperparameter selection. In the statistical analysis performed in this embodiment, a first median (the median of all voxel spacings in the XY plane) and a second median (the median of all voxel spacings in the Z-axis direction) are also calculated and stored in the statistical information.
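The dataset statistics described above can be sketched in a few lines; the helper name and argument layout below are illustrative assumptions, not the patent's actual code:

```python
import statistics

def dataset_fingerprint(spacings_xy, spacings_z, intensities):
    """Hypothetical helper: collect the statistics described above --
    the spacing medians plus the intensity mean and variance."""
    return {
        "median_xy": statistics.median(spacings_xy),    # first median (XY plane)
        "median_z": statistics.median(spacings_z),      # second median (Z axis)
        "mean_intensity": statistics.fmean(intensities),
        "var_intensity": statistics.pvariance(intensities),
    }
```

In practice the spacings and intensities would be gathered by iterating over every labeled and unlabeled image in the dataset before any preprocessing runs.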
CT images store data in the NIfTI format, which holds, in addition to the image data, metadata such as the distance between pixels, the origin coordinates, and the orientation. The distance between pixels is also called the voxel spacing. Pixel values in a CT image are expressed in Hounsfield units (HU) and are linearly related to the gray value of the pixel; the HU value of a pixel is also referred to as its pixel intensity. For three-channel images, each channel can be processed separately to obtain the pixel intensity under each channel.
Step S300: preprocessing the medical image dataset based on the statistical information;
Specifically, the preprocessing mainly comprises resampling and pixel intensity standardization (normalization) of the image samples in the medical image dataset. It should be noted that preprocessing does not change the format of the medical image data; the data in the preprocessed labeled and unlabeled datasets remain in NIfTI format.
Optionally, the preprocessing may also include other operations, such as: a clipping operation to clip all data to non-zero value regions.
In this embodiment, as shown in fig. 2, the preprocessing specifically includes the following steps:
step S310: resampling each image sample in the medical image dataset;
Specifically, before resampling an image sample, the first median in the statistical information (the median voxel spacing in the XY plane) is set as the XY-plane voxel spacing for resampling. If the ratio of the second median (the median voxel spacing in the Z-axis direction) to the first median does not exceed the set threshold (3 in this embodiment), the second median is set as the Z-axis voxel spacing for resampling; otherwise, one tenth of the second median is used (for example, if the second median is 120, one tenth of it is 12). Once the resampling spacing is determined, each image sample in the medical image dataset is resampled in turn to adjust the voxel spacing between its pixels.
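The target-spacing rule above can be sketched as follows, under the interpretation that "one tenth of the second median" is meant (the text's own example maps 120 to 12); the function name and tuple layout are illustrative:

```python
def target_spacing(median_xy, median_z, ratio_threshold=3.0):
    """Choose the resampling voxel spacing per the rule above: keep the XY
    median; keep the Z median unless Z/XY exceeds the threshold, in which
    case use one tenth of the Z median (e.g. 120 -> 12)."""
    if median_z / median_xy <= ratio_threshold:
        z = median_z
    else:
        z = median_z / 10.0
    return (median_xy, median_xy, z)
```

Only the anisotropic case (large Z spacing relative to XY) triggers the reduction, which keeps the Z resolution of strongly anisotropic scans from dominating the resampled volume.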
Step S320: and carrying out pixel intensity normalization on the pixel intensity of each pixel in each image sample in turn according to the pixel intensity mean value.
Specifically, pixel intensity standardization is also referred to as normalization. Foreground and background pixels can in principle be standardized separately according to their class; however, the unlabeled dataset in this embodiment lacks labels, so the pixel intensity distribution of its foreground pixels cannot be computed. Therefore, whole-image pixels are used uniformly for both the labeled and unlabeled datasets. The specific method is as follows: taking each image sample (from the unlabeled or labeled dataset) as an individual, the pixel intensity mean stored in the statistical information is subtracted from the intensity of each pixel, and the result is divided by the stored pixel intensity variance, thereby standardizing the pixel intensities of each image sample. For example, for an unlabeled sample A, the intensity of each of its pixels has the stored mean subtracted and is then divided by the stored variance, completing the standardization of every pixel in sample A.
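A minimal sketch of the per-sample standardization follows. Note that the translated text says "divided by the variance", whereas conventional z-scoring divides by the standard deviation; the version below follows the z-score convention as an assumption:

```python
import math

def standardize_sample(voxels, mean, variance):
    """Subtract the dataset-wide intensity mean from every voxel, then
    divide. Assumption: dividing by the standard deviation (the square
    root of the stored variance), per the usual z-score convention."""
    std = math.sqrt(variance)
    return [(v - mean) / std for v in voxels]
```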
Step S400: constructing a semi-supervised learning framework and a segmentation network according to the semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework is used for guiding nnU-Net to adaptively design the preprocessing method, the segmentation network structure, and the hyperparameters;
Specifically, the model structure of the segmentation network is generated automatically by the heuristic rules of nnU-Net (a dataset-adaptive network framework). nnU-Net is a framework based on 2D U-Net, 3D U-Net, and U-Net Cascade that adapts to any dataset and automatically sets all hyperparameters without human intervention.
Although nnU-Net is a framework for automatically deploying medical image segmentation tasks, it can currently only be applied to labeled datasets, and the dataset's labels are used multiple times during automatic deployment. The invention combines nnU-Net with a semi-supervised learning framework, which guides nnU-Net in data analysis, hyperparameter setting, and segmentation network design for a specific dataset; that is, the preprocessing method, segmentation network structure, and relevant hyperparameters are designed adaptively. These tasks can thus be completed without using data labels, enabling automatic deployment under semi-supervised learning.
In this embodiment, the semi-supervised learning framework is built on CPS (Cross Pseudo Supervision), and FIG. 3 is a schematic diagram of the framework. It includes two parallel segmentation networks T1 and T2 with the same network architecture, initialized with different weights θ1 and θ2. The inputs of T1 and T2 may be 2D or 3D images, drawn from the labeled or unlabeled dataset, and the two networks output their respective segmentation predictions for the input images.
The existing CPS strategy also uses two networks, but both receive the same image, and each network's output serves as the supervision signal for the other. This embodiment improves on that strategy by feeding the labeled dataset and the unlabeled dataset into the two segmentation networks respectively.
Step S500: training the segmentation network on the labeled dataset and the unlabeled dataset using a five-fold cross-validation method to obtain a trained segmentation network;
Specifically, five-fold cross-validation divides the whole dataset into 5 parts; during training, a different part serves as the validation set each time, and the rest serve as the training set.
To make full use of the unlabeled dataset's information in every fold, in this embodiment only the labeled dataset is divided into training and validation sets, while the unlabeled dataset is not divided; that is, all unlabeled data are used in every fold's training. During training, cross-entropy and Dice loss serve as the loss functions of the segmentation network, the batch size is 4 (2 labeled and 2 unlabeled samples loaded each time), training terminates after 1000 epochs, and 250 samplings are drawn from the labeled and unlabeled datasets per epoch. Finally, the model checkpoint with the best performance on the validation set is kept as the trained segmentation network.
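The fold construction described above (labeled data split five ways, all unlabeled data reused in every fold) can be sketched as follows; the dict layout and round-robin assignment are illustrative choices, not the patent's code:

```python
def five_fold_splits(labeled_ids, unlabeled_ids, k=5):
    """Only the labeled set is divided into k folds; each fold's training
    set additionally receives ALL unlabeled samples, as described above."""
    folds = [labeled_ids[i::k] for i in range(k)]  # round-robin assignment
    splits = []
    for i in range(k):
        train_labeled = [x for j, fold in enumerate(folds) if j != i
                         for x in fold]
        splits.append({
            "train_labeled": train_labeled,
            "train_unlabeled": list(unlabeled_ids),  # never held out
            "val": folds[i],
        })
    return splits
```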
During training, when image samples from the labeled dataset are input into the segmentation networks, the loss includes both the loss between each network's prediction and the real label and the loss between the two networks' predictions; when image samples from the unlabeled dataset are input, the loss includes only the loss between the two networks' predictions. Referring to FIG. 3, the pseudo-label Y1 output by network T1 serves as the supervision target for network T2, and the pseudo-label Y2 output by T2 serves as the supervision target for T1, from which the cross-network loss is computed. The loss against the real label and the cross-network loss are computed the same way: the sum of 1/2 the cross-entropy loss and 1/2 the Dice loss.
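A toy numeric sketch of the loss computation above, on flattened binary foreground probabilities (real implementations operate on multi-class tensors; deriving hard pseudo-labels by rounding is an assumption of this sketch):

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over per-voxel foreground probabilities."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def ce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over voxels."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def seg_loss(pred, target):
    """The mixed loss described above: 1/2 cross-entropy + 1/2 Dice."""
    return 0.5 * ce_loss(pred, target) + 0.5 * dice_loss(pred, target)

def cps_loss(pred1, pred2, label=None):
    """Cross pseudo supervision: each network is supervised by the other's
    hard pseudo-label; labeled samples add the real-label term."""
    pl1 = [round(p) for p in pred1]  # pseudo-label from T1
    pl2 = [round(p) for p in pred2]  # pseudo-label from T2
    loss = seg_loss(pred1, pl2) + seg_loss(pred2, pl1)
    if label is not None:
        loss += seg_loss(pred1, label) + seg_loss(pred2, label)
    return loss
```

The labeled branch simply adds the supervised term on top of the cross-pseudo term, matching the two cases described in the paragraph above.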
Optionally, data augmentation can also be applied to the training set, using methods common in nnU-Net such as flipping, rotation, scaling, and Gaussian noise.
Step S600: inputting the acquired medical image dataset into the trained segmentation network for organ segmentation, and outputting a segmentation result map.
Specifically, after the trained segmentation network is obtained, a medical image is input into it for organ segmentation, yielding the network's prediction, i.e., the segmentation result. FIG. 4 compares the segmentation result of this embodiment (Baseline 3D + CPS in the figure) with results obtained by other segmentation methods.
As described above, this embodiment uses a semi-supervised learning framework to guide nnU-Net in adaptively designing the preprocessing method, the network structure, and the relevant hyperparameters for the dataset, without any manual parameter tuning, so that the model can be deployed and trained automatically on tasks with a small amount of labeled data and a large amount of unlabeled data, realizing CT-image-based organ segmentation.
The segmentation method of the invention can be applied to medical images such as computed tomography (CT) or MRI images, and can segment abdominal organs including the liver, spleen, left kidney, right kidney, pancreas, aorta, inferior vena cava, left adrenal gland, right adrenal gland, gall bladder, esophagus, duodenum, and stomach. The medical image to be segmented may be 3D medical image data or 2D slice data. Of course, the segmentable organs are not limited to those listed: for any organ, as long as the dataset contains a small amount of labeled data, training can be deployed automatically and the segmentation task realized.
In addition, although the network structure adopted in this embodiment is U-Net, it can be replaced by other CNN-based structures such as FCN, DeepLab, or V-Net, or by Transformer-based structures such as TransUNet or Swin-UNet; alternatively, modules such as attention mechanisms or residual connections can be added to U-Net to improve its feature extraction and representation capability.
Semi-supervised learning frameworks can be based on self-training, co-training, model-based approaches, and the like. The semi-supervised learning framework proposed in this embodiment is a specific algorithm within the co-training family, and it can be replaced by other semi-supervised strategies: for example, co-training-based semi-supervised learning by means of multiple tasks, multiple views, multiple branches or multiple data augmentations, or other self-training-based or model-based methods.
In one embodiment, as shown in fig. 5, the method further includes testing the trained segmentation network, and the specific testing steps include:
step S700: acquiring test set data and preprocessing the test set data;
step S800: dividing the preprocessed test set data into blocks by adopting a sliding window with a preset step length, and inputting the blocks into a trained segmentation network;
step S900: removing false positive regions from the prediction result of the segmentation network using a non-maximum suppression algorithm to obtain the test result.
Specifically, the test set data is acquired in the same way as in step S100 and preprocessed with the same preprocessing method as in step S200. A typical CT image has a size of (512, 512, 400), where 512 × 512 is the width and height and 400 is the number of slices. Because of GPU memory limitations, the entire three-dimensional CT image cannot be fed into the segmentation network at once; that is, the segmentation network accepts an input smaller than the original image, e.g., (192, 192, 160). Therefore, the original CT image is partitioned into blocks of size (192, 192, 160), the blocks are fed into the segmentation network one by one to obtain a prediction for each block, and the block predictions are then fused using methods such as averaging or Gaussian weighting to obtain the test result.
When partitioning the original CT image, this embodiment uses a sliding window with a preset step of 0.7 to divide the preprocessed test set data into blocks (a step of 0.7 means that adjacent blocks overlap by 0.7 of the block size in each of the H, W and D dimensions), feeds them into the trained segmentation network, and then removes false positive regions from the prediction result of the segmentation network using a non-maximum suppression algorithm to obtain the test result. The segmentation network can be evaluated and optimized based on the test result.
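The block-based inference described above can be sketched as follows. This is a minimal NumPy illustration with Gaussian-weighted fusion of overlapping blocks; `predict_block` is a hypothetical stand-in for the trained segmentation network, and the shape of the Gaussian importance map is an assumption, not the embodiment's exact test-time code:

```python
import numpy as np

def sliding_window_inference(volume, block_size, overlap=0.7, predict_block=None):
    """Tile `volume` with overlapping blocks and fuse the per-block predictions.

    `overlap` is the fraction of the block size shared by adjacent blocks
    (0.7 as in the embodiment).  Overlapping regions are fused with a
    Gaussian importance map so that voxels near a block centre dominate
    the average.  `predict_block` stands in for the trained network.
    """
    strides = [max(1, int(round(p * (1 - overlap)))) for p in block_size]
    accum = np.zeros(volume.shape, dtype=np.float64)
    weight = np.zeros(volume.shape, dtype=np.float64)

    # Gaussian importance map, peaked at the block centre.
    grids = np.meshgrid(*[np.linspace(-1.0, 1.0, p) for p in block_size],
                        indexing="ij")
    gauss = np.exp(-0.5 * sum(g ** 2 for g in grids) / 0.125)

    # Start offsets per axis; ensure the last block reaches the border.
    starts = []
    for size, p, st in zip(volume.shape, block_size, strides):
        offs = list(range(0, max(size - p, 0) + 1, st))
        if offs[-1] != size - p:
            offs.append(size - p)
        starts.append(offs)

    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                sl = (slice(z, z + block_size[0]),
                      slice(y, y + block_size[1]),
                      slice(x, x + block_size[2]))
                pred = predict_block(volume[sl])  # network forward pass
                accum[sl] += pred * gauss
                weight[sl] += gauss
    return accum / weight  # every voxel is covered, so weight > 0
```

With an identity predictor the fused output reproduces the input exactly, which is a convenient sanity check of the tiling and weighting logic.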
Exemplary apparatus
As shown in fig. 6, the embodiment of the present invention further provides a semi-supervised medical image organ segmentation system of an adaptive dataset, corresponding to the semi-supervised medical image organ segmentation method of the adaptive dataset, specifically, the segmentation system includes:
a dataset acquisition module 600 for acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
a statistics module 610, configured to perform a statistical analysis on the medical image dataset to obtain statistical information;
a preprocessing module 620 for preprocessing the medical image dataset based on the statistical information;
the construction module 630 is configured to construct a semi-supervised learning framework and a segmentation network according to a semi-supervised learning method and nnU-Net, where the semi-supervised learning framework is configured to guide nnU-Net to adaptively design the preprocessing method, the network structure and the hyperparameters;
and an optimization module 640, configured to train the segmentation network according to the preprocessed labeled dataset and the preprocessed unlabeled dataset by adopting a five-fold cross-validation method, so as to obtain a trained segmentation network.
Optionally, the segmentation system further comprises a test module, which is used for acquiring test set data and preprocessing the test set data; dividing the preprocessed test set data into blocks by adopting a sliding window with a preset step length, and inputting the blocks into a trained segmentation network; and removing false positive areas in the prediction result of the segmentation network according to the non-maximum suppression algorithm to obtain a test result.
In this embodiment, the above-mentioned semi-supervised medical image organ segmentation system of the adaptive dataset may refer to corresponding descriptions in the above-mentioned semi-supervised medical image organ segmentation method of the adaptive dataset, which are not described herein again.
Based on the above embodiments, the present invention further provides an intelligent terminal, a functional block diagram of which may be shown in fig. 7. The intelligent terminal comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the intelligent terminal provides computing and control capabilities. The memory of the intelligent terminal comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a semi-supervised medical image organ segmentation program for an adaptive dataset. The internal memory provides an environment for running the operating system and the semi-supervised medical image organ segmentation program in the non-volatile storage medium. The network interface of the intelligent terminal is used for communicating with external terminals through a network connection. When executed by the processor, the semi-supervised medical image organ segmentation program of the adaptive dataset implements the steps of any of the semi-supervised medical image organ segmentation methods of the adaptive dataset described above. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display.
It will be appreciated by those skilled in the art that the schematic block diagram shown in fig. 7 is merely a block diagram of part of the structure related to the solution of the present invention and does not limit the intelligent terminal to which the solution is applied; a particular intelligent terminal may include more or fewer components than shown, combine some components, or have a different arrangement of components.
In one embodiment, a smart terminal is provided, the smart terminal comprising a memory, a processor, and a semi-supervised medical image organ segmentation program of an adaptive dataset stored on the memory and executable on the processor. When executed by the processor, the semi-supervised medical image organ segmentation program of the adaptive dataset performs the following operations:
acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
carrying out statistical analysis on the medical image dataset to obtain statistical information;
preprocessing the medical image dataset based on the statistical information;
constructing a semi-supervised learning framework and a segmentation network according to a semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework is used for guiding nnU-Net to adaptively design the preprocessing method, the structure of the segmentation network and the hyperparameters;
training the segmentation network according to the preprocessed labeled dataset and the preprocessed unlabeled dataset by adopting a five-fold cross-validation method to obtain a trained segmentation network;
and inputting the acquired medical image dataset into a trained segmentation network for organ segmentation, and outputting a segmented result graph.
Optionally, the statistical information includes a mean value of pixel intensities, and the preprocessing the medical image dataset based on the statistical information includes:
resampling each image sample in the medical image dataset;
and normalizing the pixel intensity of each pixel in each image sample in turn according to the pixel intensity mean.
Optionally, the statistical information further includes a first median value of voxel intervals of the image samples in an XY plane and a second median value of the voxel intervals in a Z-axis direction, and resampling each image sample in the medical image dataset includes:
setting the first median to be the voxel interval of an XY plane during resampling;
and if the ratio of the second median to the first median does not exceed a set threshold, setting the second median as the voxel interval in the Z direction during resampling; otherwise, setting the 10th percentile of the Z-axis voxel intervals as the voxel interval in the Z direction during resampling.
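As a sketch, the spacing-selection rule and the dataset-statistics-based normalization described above might look like the following. The anisotropy threshold value, the exact percentile, and the helper names are illustrative assumptions, not values fixed by the embodiment:

```python
import numpy as np

def choose_target_spacing(spacings_xy, spacings_z, anisotropy_threshold=3.0):
    """Pick the resampling voxel spacing from dataset statistics.

    The in-plane target is the median XY spacing ("first median"); the
    Z target is the median Z spacing ("second median") unless the
    dataset is strongly anisotropic, in which case a low percentile of
    the Z spacings is used instead.
    """
    xy = float(np.median(spacings_xy))        # first median (XY plane)
    z_median = float(np.median(spacings_z))   # second median (Z axis)
    if z_median / xy <= anisotropy_threshold:
        z = z_median
    else:
        z = float(np.percentile(spacings_z, 10))  # 10th-percentile fallback
    return xy, z

def zscore_normalize(image, dataset_mean, dataset_std):
    """Standardize pixel intensities with dataset-level statistics."""
    return (image - dataset_mean) / max(dataset_std, 1e-8)
```

Each image sample would first be resampled to the chosen target spacing and then passed through `zscore_normalize` with the dataset-wide intensity statistics.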
Optionally, two parallel segmentation networks are provided. When the image samples in the labeled dataset are input into the segmentation networks, the loss values during training include: the loss between the prediction result of each segmentation network and the ground-truth label, and the loss between the prediction result of one segmentation network and the prediction result of the other; when the image samples in the unlabeled dataset are input into the segmentation networks, the loss values during training include: the loss between the prediction result of one segmentation network and the prediction result of the other.
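This dual-network supervision scheme (in the spirit of cross pseudo supervision) can be sketched as follows. This is a NumPy illustration of the loss terms only, with function names chosen for exposition; it is not the embodiment's actual training code:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean voxel-wise cross-entropy of class probabilities vs. integer labels."""
    eps = 1e-8
    picked = np.take_along_axis(probs, labels[None, ...], axis=0)[0]
    return float(-np.mean(np.log(picked + eps)))

def semi_supervised_losses(p_a, p_b, true_label=None):
    """Loss terms for two parallel segmentation networks.

    `p_a`, `p_b`: class-probability maps of shape (C, *spatial) from the
    two networks.  Each network is supervised by the argmax pseudo-label
    of the other; labeled samples additionally get a supervised term
    against `true_label`.
    """
    pseudo_a = np.argmax(p_a, axis=0)   # pseudo-label from network A
    pseudo_b = np.argmax(p_b, axis=0)   # pseudo-label from network B
    loss_cps = cross_entropy(p_a, pseudo_b) + cross_entropy(p_b, pseudo_a)
    if true_label is None:              # unlabeled sample: cross term only
        return loss_cps
    loss_sup = cross_entropy(p_a, true_label) + cross_entropy(p_b, true_label)
    return loss_sup + loss_cps
```

An unlabeled sample thus contributes only the cross-network term, while a labeled sample contributes both the supervised and cross-network terms.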
Optionally, training the segmentation network according to the preprocessed labeled dataset and the preprocessed unlabeled dataset by adopting a five-fold cross-validation method includes:
sampling from the preprocessed labeled dataset by adopting a five-fold cross-validation method to obtain a sample set;
and training the segmentation network by taking the sample set and the preprocessed unlabeled data set as training samples.
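A minimal sketch of the five-fold split described above; the shuffling, fold assignment and seed handling are illustrative assumptions:

```python
import random

def five_fold_splits(labeled_ids, seed=0):
    """Partition labeled sample ids into five folds.

    For fold k, the k-th partition is held out for validation and the
    remaining four folds (together with the whole unlabeled set during
    semi-supervised training) form the training pool.
    """
    ids = list(labeled_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::5] for i in range(5)]
    splits = []
    for k in range(5):
        val = folds[k]
        train = [s for i, f in enumerate(folds) if i != k for s in f]
        splits.append((train, val))
    return splits
```

Each of the five resulting (train, validation) pairs trains one model; every labeled sample appears in exactly one validation fold.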
Optionally, the method further comprises testing the trained segmentation network, and the testing method comprises the following steps:
acquiring test set data and preprocessing the test set data;
dividing the image samples in the preprocessed test set data into blocks by adopting a sliding window with a preset step length, and inputting the blocks into a trained segmentation network;
And removing false positive areas in the prediction result of the segmentation network according to the non-maximum suppression algorithm to obtain a test result.
The embodiment of the invention also provides a computer readable storage medium, on which a semi-supervised medical image organ segmentation program of the adaptive data set is stored, and when the semi-supervised medical image organ segmentation program of the adaptive data set is executed by a processor, the steps of any one of the semi-supervised medical image organ segmentation methods of the adaptive data set provided by the embodiment of the invention are realized.
It should be understood that the sequence number of each step in the above embodiment does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not be construed as limiting the implementation process of the embodiment of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units described above is merely a logical function division, and may be implemented in other manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each method embodiment. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. The content of the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the invention and are intended to be included within the scope of the invention.

Claims (8)

1. A method of semi-supervised medical image organ segmentation of an adaptive dataset, the segmentation method comprising:
acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
carrying out statistical analysis on the medical image dataset to obtain statistical information;
preprocessing the medical image dataset based on the statistical information;
constructing a semi-supervised learning framework and a segmentation network according to a semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework is used for guiding nnU-Net to adaptively design the preprocessing method, the structure of the segmentation network and the hyperparameters;
training the segmentation network according to the preprocessed labeled dataset and the preprocessed unlabeled dataset by adopting a five-fold cross-validation method to obtain a trained segmentation network;
inputting the acquired medical image data set into a trained segmentation network for organ segmentation, and outputting a segmented result graph;
the statistical information comprises a pixel intensity mean value, and the preprocessing of the medical image dataset based on the statistical information comprises the following steps:
resampling each image sample in the medical image dataset;
sequentially carrying out pixel intensity standardization on the pixel intensity of each pixel in each image sample according to the pixel intensity average value;
the statistical information also comprises a first median value of voxel intervals of the image samples in an XY plane and a second median value of the voxel intervals in a Z-axis direction, and the resampling of each image sample in the medical image dataset comprises the following steps:
setting the first median to be the voxel interval of an XY plane during resampling;
if the ratio of the second median to the first median does not exceed a set threshold, setting the second median as the voxel interval in the Z direction during resampling; otherwise, setting the 10th percentile of the Z-axis voxel intervals as the voxel interval in the Z direction during resampling;
The step of sequentially normalizing the pixel intensity of each pixel in each image sample according to the pixel intensity mean value includes:
subtracting the pixel intensity mean from the pixel intensity of each pixel and dividing by the pixel intensity standard deviation.
2. The method for semi-supervised medical image organ segmentation of an adaptive dataset of claim 1, wherein two parallel said segmentation networks are provided, and wherein, when the image samples in the labeled dataset are input into the segmentation networks, the loss values during training include: the loss between the prediction result of each segmentation network and the ground-truth label, and the loss between the prediction result of one segmentation network and the prediction result of the other; when the image samples in the unlabeled dataset are input into the segmentation networks, the loss values during training include: the loss between the prediction result of one segmentation network and the prediction result of the other.
3. The method for semi-supervised medical image organ segmentation of adaptive datasets of claim 1, wherein the training of the segmentation network from the pre-processed labeled datasets, the pre-processed unlabeled datasets using a five-fold cross-validation method includes:
sampling from the preprocessed labeled dataset by adopting a five-fold cross-validation method to obtain a sample set;
and training the segmentation network by taking the sample set and the preprocessed unlabeled data set as training samples.
4. The method for semi-supervised medical image organ segmentation of an adaptive dataset of claim 1, further comprising testing the trained segmentation network, the testing method comprising:
acquiring test set data and preprocessing the test set data;
dividing the image samples in the preprocessed test set data into blocks by adopting a sliding window with a preset step length, and inputting the blocks into a trained segmentation network;
and removing false positive areas in the prediction result of the segmentation network according to the non-maximum suppression algorithm to obtain a test result.
5. A semi-supervised medical image organ segmentation system of an adaptive dataset, the segmentation system comprising:
a dataset acquisition module for acquiring a medical image dataset comprising a labeled dataset and an unlabeled dataset;
the statistics module is used for carrying out statistical analysis on the medical image data set to obtain statistical information;
a preprocessing module for preprocessing the medical image dataset based on the statistical information;
the construction module is used for constructing a semi-supervised learning framework and a segmentation network according to the semi-supervised learning method and nnU-Net, wherein the semi-supervised learning framework is used for guiding nnU-Net to adaptively design the preprocessing method, the structure of the segmentation network and the hyperparameters;
the optimization module is used for training the segmentation network according to the preprocessed labeled dataset and the preprocessed unlabeled dataset by adopting a five-fold cross-validation method to obtain a trained segmentation network;
the statistical information comprises a pixel intensity mean value, and the preprocessing of the medical image dataset based on the statistical information comprises the following steps:
resampling each image sample in the medical image dataset;
sequentially carrying out pixel intensity standardization on the pixel intensity of each pixel in each image sample according to the pixel intensity average value;
the statistical information also comprises a first median value of voxel intervals of the image samples in an XY plane and a second median value of the voxel intervals in a Z-axis direction, and the resampling of each image sample in the medical image dataset comprises the following steps:
setting the first median to be the voxel interval of an XY plane during resampling;
if the ratio of the second median to the first median does not exceed a set threshold, setting the second median as the voxel interval in the Z direction during resampling; otherwise, setting the 10th percentile of the Z-axis voxel intervals as the voxel interval in the Z direction during resampling;
The step of sequentially normalizing the pixel intensity of each pixel in each image sample according to the pixel intensity mean value includes:
subtracting the pixel intensity mean from the pixel intensity of each pixel and dividing by the pixel intensity standard deviation.
6. The adaptive dataset semi-supervised medical image organ segmentation system as set forth in claim 5, further comprising a test module for acquiring and preprocessing test set data; dividing the image samples in the preprocessed test set data into blocks by adopting a sliding window with a preset step length, and inputting the blocks into a trained segmentation network; and removing false positive areas in the prediction result of the segmentation network according to the non-maximum suppression algorithm to obtain a test result.
7. A smart terminal comprising a memory, a processor and a semi-supervised medical image organ segmentation procedure of an adaptive dataset stored on the memory and executable on the processor, which semi-supervised medical image organ segmentation procedure of the adaptive dataset, when executed by the processor, implements the steps of the semi-supervised medical image organ segmentation method of the adaptive dataset as set forth in any of claims 1-4.
8. Computer readable storage medium, characterized in that it has stored thereon a semi-supervised medical image organ segmentation procedure of an adaptive dataset, which when executed by a processor, implements the steps of the semi-supervised medical image organ segmentation method of an adaptive dataset according to any of claims 1-4.
CN202211607575.0A 2022-12-14 2022-12-14 Semi-supervised medical image organ segmentation method and system for self-adaptive data set Active CN115861250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211607575.0A CN115861250B (en) 2022-12-14 2022-12-14 Semi-supervised medical image organ segmentation method and system for self-adaptive data set


Publications (2)

Publication Number Publication Date
CN115861250A CN115861250A (en) 2023-03-28
CN115861250B true CN115861250B (en) 2023-09-22

Family

ID=85672929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211607575.0A Active CN115861250B (en) 2022-12-14 2022-12-14 Semi-supervised medical image organ segmentation method and system for self-adaptive data set

Country Status (1)

Country Link
CN (1) CN115861250B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824146B (en) * 2023-07-05 2024-06-07 深圳技术大学 Small sample CT image segmentation method, system, terminal and storage medium
CN118212490B (en) * 2024-05-15 2024-09-10 腾讯科技(深圳)有限公司 Training method, device, equipment and storage medium for image segmentation model


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6682291B2 (en) * 2016-02-12 2020-04-15 キヤノン株式会社 Image processing apparatus, image processing method and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3459461A1 (en) * 2017-09-25 2019-03-27 Koninklijke Philips N.V. X-ray imaging reference scan
CN111107787A (en) * 2017-09-25 2020-05-05 皇家飞利浦有限公司 X-ray imaging reference scan
CN113793304A (en) * 2021-08-23 2021-12-14 天津大学 Intelligent segmentation method for lung cancer target area and organs at risk
US11526994B1 (en) * 2021-09-10 2022-12-13 Neosoma, Inc. Labeling, visualization, and volumetric quantification of high-grade brain glioma from MRI images
CN114581628A (en) * 2022-03-03 2022-06-03 北京银河方圆科技有限公司 Cerebral cortex surface reconstruction method and readable storage medium
CN114612721A (en) * 2022-03-15 2022-06-10 南京大学 Image classification method based on multilevel adaptive feature fusion type increment learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"3D Cross Pseudo Supervision (3D-CPS): A semi-supervised nnU-Net architecture for abdominal organ segmentation";Yongzhi Huang 等;《arXiv》;第1-13页 *

Also Published As

Publication number Publication date
CN115861250A (en) 2023-03-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant