CN113344896A - Breast CT image focus segmentation model training method and system - Google Patents

Breast CT image focus segmentation model training method and system Download PDF

Info

Publication number
CN113344896A
CN113344896A CN202110705914.8A CN202110705914A CN113344896A CN 113344896 A CN113344896 A CN 113344896A CN 202110705914 A CN202110705914 A CN 202110705914A CN 113344896 A CN113344896 A CN 113344896A
Authority
CN
China
Prior art keywords
image
sample
images
training
segmentation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110705914.8A
Other languages
Chinese (zh)
Other versions
CN113344896B (en
Inventor
高志强
陈杰
乔鹏冲
田永鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory filed Critical Peng Cheng Laboratory
Priority to CN202110705914.8A priority Critical patent/CN113344896B/en
Publication of CN113344896A publication Critical patent/CN113344896A/en
Application granted granted Critical
Publication of CN113344896B publication Critical patent/CN113344896B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a semi-supervised chest CT image focus segmentation model training method and a semi-supervised chest CT image focus segmentation model training system, wherein the method comprises the following steps: acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image; according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus; training a deep learning network according to the augmented CT image to obtain a focus segmentation model; and optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model. The invention provides a semi-supervised chest CT image lesion segmentation model training method, which retains lesion information, integrates the advantages of a fully supervised loss function and a common semi-supervised loss function, saves a large amount of labor cost and avoids the requirement of massive marking data.

Description

Breast CT image focus segmentation model training method and system
Technical Field
The invention relates to the technical field of lung CT image focus segmentation, in particular to a training method and a system of a breast CT image focus segmentation model.
Background
The focus segmentation of chest CT has important significance for diagnosing and treating new coronary pneumonia. Although the fully supervised algorithm can achieve a good segmentation effect, massive labeled data is required. However, lesion marking for breast CT requires a professional radiologist to do so, but such physicians are typically busy and have little off-hours. In addition, because the size and shape of the focus are different and the number of CT layers is large, which results in great labeling difficulty, a method for training a segmentation model according to a small amount of CT data for labeling the focus is urgently needed.
Thus, there is a need for improvements and enhancements in the art.
Disclosure of Invention
The invention provides a method and a system for training a breast CT image focus segmentation model, and aims to solve the problem that a method for training a segmentation model according to a small amount of CT data for marking focuses is lacked in the prior art.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides a method for training a breast CT image lesion segmentation model, wherein the method includes:
acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image;
according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus;
training a deep learning network according to the augmented CT image to obtain a focus segmentation model;
and optimizing the self-learning strategy based on the teacher-student model to obtain the segmentation precision of the focus segmentation model.
In one implementation, the acquiring a sample CT image and calculating a confidence level of the sample CT image includes:
acquiring the sample CT image, and extracting semantic features of the sample CT image by using an encoder;
introducing noise features and disturbance factors into the semantic features to obtain the semantic features with noise and a corresponding probability map;
and calculating the average value of the probability map, and calculating the reliability of the CT image of each sample according to the average value of the probability map.
In one implementation, the acquiring a sample CT image and calculating the confidence level of the sample CT image further includes:
and obtaining a credibility mask according to the average value of the probability map and based on a threshold value.
In one implementation, the pairing CT images and generating an augmented CT image according to the confidence level of the sample CT image includes:
sequencing the sample CT images according to the credibility of the sample CT images;
matching the image with the highest reliability in the sample CT images with the image with the lowest reliability, matching the image with the second highest reliability in the sample CT images with the image with the second lowest reliability, and so on to complete the matching of the sample CT images;
and mixing the images containing the focuses by using the credibility mask to obtain the augmented CT image.
In one implementation, in the process of training the deep learning network according to the augmented CT image, the method for calculating the loss function includes:
Lseg=Lsup+βLsem
wherein, β is a dynamic parameter changing along with the training process, and the calculation mode is as follows:
Figure BDA0003131252420000031
wherein, b1And b2Two hyper-parameters are respectively set to be 0.1 and-5, and the value of beta is 0 to 0.1; l issupIs a conventional loss function, LsemAs a function of semi-supervised loss, enRepresenting the number of steps of the current training, EmaxRepresenting the total number of steps of the training, the value being related to the data set, EmaxValue 200 and value 300.
In one implementation, the conventional penalty function LsupThe calculation method is as follows:
Figure BDA0003131252420000032
wherein L isceRepresenting a cross-entropy function, FsegFor the encoder, I is the sample CT image, I is the batch of sample CT images, MIN is a mask image of the sample CT images and indicates the number of sample CT images in the batch.
In one implementation, the conventional penalty function LsemThe calculation method is as follows:
Figure BDA0003131252420000033
wherein K represents the logarithm of the image pair in the training batch;
Figure BDA0003131252420000034
a blended image representing a high confidence image in the kth pair of images,
Figure BDA0003131252420000035
a blended image mask representing a high confidence image in the kth pair of images,
Figure BDA0003131252420000036
a blended image representing low confidence images in the kth pair of images,
Figure BDA0003131252420000037
mixed image mask, L, representing low confidence images in the k-th pair of imagesceRepresenting a cross entropy function.
In a second aspect, an embodiment of the present invention provides a training system for a semi-supervised breast CT image lesion segmentation model, where the system includes:
the uncertainty evaluation module is used for acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image;
the image generation module is used for pairing the CT images according to the reliability of the sample CT image and generating an augmented CT image, wherein the CT image is an image only containing a focus;
the repairing network module is used for training a deep learning network according to the augmented CT image to obtain a focus segmentation model;
and the self-learning module is used for optimizing the segmentation precision of the focus segmentation model based on the self-learning strategy of the teacher-student model.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and a training program of a breast CT image lesion segmentation model stored in the memory and executable on the processor, and when the processor executes the training program of the breast CT image lesion segmentation model, the step of implementing the method for training a semi-supervised breast CT image lesion segmentation model according to any one of the above schemes is implemented.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a training program of a breast CT image lesion segmentation model is stored, where the training program of the breast CT image lesion segmentation model is executed by a processor, and the steps of the method for training a semi-supervised breast CT image lesion segmentation model according to any one of the above schemes are implemented.
Has the advantages that: compared with the prior art, the invention provides a semi-supervised breast CT image lesion segmentation model training method. Firstly, a sample CT image is obtained, and the reliability of the sample CT image is calculated, wherein the sample CT image comprises an annotated image and an annotated image. And then according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus. And then, training a deep learning network according to the augmented CT image to obtain a focus segmentation model. And finally, optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model. The invention provides a semi-supervised chest CT image lesion segmentation model training method, which retains lesion information, integrates the advantages of a fully supervised loss function and a common semi-supervised loss function, saves a large amount of labor cost and avoids the requirement of massive marking data.
Drawings
Fig. 1 is a flowchart of a method for training a lesion segmentation model of a semi-supervised chest CT image according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a method for training a lesion segmentation model of a semi-supervised chest CT image according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of obtaining reliability in a training method of a semi-supervised breast CT image lesion segmentation model according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of a training system of a lesion segmentation model of a semi-supervised chest CT image according to an embodiment of the present invention.
Fig. 5 is a schematic block diagram of an internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Research shows that the focus segmentation of chest CT has important significance for diagnosing and treating new coronary pneumonia. Although the fully supervised algorithm can achieve a good segmentation effect, massive labeled data is required. However, lesion marking for breast CT requires a professional radiologist to do so, but such physicians are typically busy and have little off-hours. In addition, because the size and shape of the focus are different and the number of CT layers is large, which results in great labeling difficulty, a method for training a segmentation model according to a small amount of CT data for labeling the focus is urgently needed.
In order to solve the above problem, the present embodiment provides a method for training a lesion segmentation model of a semi-supervised chest CT image. Firstly, a sample CT image is obtained, and the reliability of the sample CT image is calculated, wherein the sample CT image comprises an annotated image and an annotated image. And then according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus. And then, training a deep learning network according to the augmented CT image to obtain a focus segmentation model. And finally, optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model. The invention provides a semi-supervised chest CT image focus segmentation model training method, which can train a focus segmentation model under the condition that the data volume labeled by a professional doctor is less, obtain a segmentation effect equivalent to that when a large amount of labeled data is used, retain focus information, integrate the advantages of a fully supervised loss function and a common semi-supervised loss function, save a large amount of labor cost and avoid the requirement of massive labeled data.
Exemplary method
As shown in fig. 1 and fig. 2, the method for training a semi-supervised breast CT image lesion segmentation model provided in this embodiment specifically includes the following steps:
s100, obtaining a sample CT image, and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and a non-annotated image.
In this embodiment, the sample CT images, i.e. a batch of images in fig. 2, are obtained first, wherein the batch of images includes n unlabeled images and n labeled images. Semantic features of the sample CT image are then extracted using an encoder. And then introducing noise features and disturbance factors into the semantic features to obtain the semantic features with noise and a corresponding probability map. And finally, calculating the average value of the probability map, calculating the reliability of each sample CT image according to the probability map, and obtaining a reliability mask according to the average value of the probability map and based on a threshold value. Specifically, as shown in fig. 3, the present embodiment first uses the encoder FsegProviding semantic features f e R of input sample CT image IC×H×WThen introduce noise zi∈RC×H×WWhere i ∈ {1,2, …, T }, where T is a hyperparameter, indicating the number of noise perturbations added. Here, the noise is randomly collected from a uniform distribution U [ -0.3, +0.3]. After noise is introduced, noisy semantic features can be obtained
Figure BDA0003131252420000061
Figure BDA0003131252420000062
Wherein
Figure BDA0003131252420000063
Representing the product of the elements. T different perturbations [ z ]1,z2,…,zT]Will obtain T probability maps P1,P2,…,PT]In which P isi=Fseg(I;zt). Next, the average of these probability maps is calculated
Figure BDA0003131252420000064
Obtaining confidence mask M by threshold taucon
Figure BDA0003131252420000065
Wherein the threshold tau is a dynamic parameter related to the number of training steps, and is calculated as follows,
Figure BDA0003131252420000071
wherein gamma is1、γ2And gamma3For the over-parameters, here set to 0.3, -5 and 0.5, respectively. And then calculating the credibility score s of each image to obtain the credibility of each sample CT image, wherein the calculation mode is as follows:
Figure BDA0003131252420000072
wherein, x represents the pixel in the sample CT image, | I | represents the number of pixels in the sample CT image I, the H function is an entropy calculation function, and the calculation formula is as follows: h (p) ═ plog2p-(1-p)log2(1-p)。
And S200, matching the CT images according to the reliability of the sample CT image, and generating an augmented CT image, wherein the CT image is an image only containing a focus.
In this embodiment, the sample CT images are first sorted according to their reliability. And then, matching the image with the highest reliability in the sample CT images with the image with the lowest reliability, matching the image with the second highest reliability in the sample CT images with the image with the second lowest reliability, and so on to complete the matching of the sample CT images. And finally, mixing the images containing the focuses by using the credibility mask to obtain the augmented CT image. And forming an image pair by using the CT with low reliability and the CT with high reliability, and reserving the focus information of the image pair to generate a new mixed image.
In specific implementation, the reliability s of all images in each batch of CT sample images is calculated, then the sample CT images of the batch are sorted from small to large according to s, and the image with the highest reliability and the image with the lowest reliability form a pair. For example, the image with the highest confidence level s and the image with the lowest confidence level s are paired, and the image with the second highest confidence level s and the image with the second lowest confidence level s are paired. Followed by confidence mask MconObtaining images of only mixed lesions
Figure BDA0003131252420000073
Figure BDA0003131252420000074
Where n denotes the two image indices in each pair of images.
Figure BDA0003131252420000081
Indicating that an image of only the mixed lesion is obtained from image n. Alpha is alphanRepresents the blending weight of the image n, here randomly collected from a uniform distribution U (0, 1). A blended image is then generated according to the following formula:
Figure BDA0003131252420000082
where m and n represent the subscripts of the two images in a pair, respectively, and if m is 1, then n is 2, and vice versa.
Figure BDA0003131252420000083
Representing the generated blended image of image n.
Then, a mask of the mixed image is generated according to the following formula,
Figure BDA0003131252420000084
wherein the content of the first and second substances,
Figure BDA0003131252420000085
a mask representing the generated blended image for image n.
And step S300, training a deep learning network according to the augmented CT image to obtain a focus segmentation model.
In this embodiment, the deep learning network is trained according to the obtained augmented data. For this purpose, we have designed a special loss function Lseg
Lseg=Lsup+βLsem
Wherein, β is a dynamic parameter changing along with the training process, and the calculation mode is as follows:
Figure BDA0003131252420000086
wherein, b1And b2For two hyper-parameters, set separately hereIs 0.1 and-5, and the value of beta changes from 0 to 0.1 along with the increase of the training steps. L issupIs a conventional loss function, LsemAs a function of semi-supervised loss, enRepresenting the number of steps of the current training, EmaxRepresenting the total number of steps of the training, the value being related to the data set, EmaxValue 200 and value 300. L issupThe calculation method of (c) is as follows:
Figure BDA0003131252420000087
wherein L isceRepresenting a cross-entropy function, FsegFor the encoder, I is the sample CT image, I is the batch of sample CT images, MIN is a mask image of the sample CT images and indicates the number of sample CT images in the batch.
LsemThe calculation method of (c) is as follows:
Figure BDA0003131252420000091
where K represents the logarithm of the image pair in the training batch.
Figure BDA0003131252420000092
A blended image representing a high confidence image in the kth pair of images,
Figure BDA0003131252420000093
a blended image mask representing a high confidence image in the kth pair of images,
Figure BDA0003131252420000094
a blended image representing low confidence images in the kth pair of images,
Figure BDA0003131252420000095
mixed image mask, L, representing low confidence images in the k-th pair of imagesceRepresenting a cross entropy function.
And S400, optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model.
In the embodiment, in order to obtain a high-quality pseudo label during training, a Teacher-Student-based self-training process is introduced. The specific process is as follows:
inputting:
Dl: labeled CT images;
Du: a CT image without annotation;
Gl: an annotation mask for annotating the CT image;
Gu: pseudo-label mask of the label-free CT image;
Ft(·|θt): a Teacher model;
Fs(·|θs): a Student model;
s1: initializing parameters, and enabling the parameter theta of the Teacher modeltParameter θ of Student modelsStep e is made to be 0;
s2: with a training set CT image DlAnd doctor annotated mask GlTraining Teacher model
Figure BDA0003131252420000096
S3: updating parameters of the Teacher model
Figure BDA0003131252420000097
S4: if the step number e is a multiple of K, executing S5-S10;
s5: according to the parameters of the Teacher model
Figure BDA0003131252420000098
Updating Student model parameters
Figure BDA0003131252420000099
S6: from label-free CT data DuMiddle sampling n CT images
Figure BDA0003131252420000101
S7: computing pseudo-annotation masks for non-annotated images
Figure BDA0003131252420000102
S8: updating annotation masks for tagged images
Figure BDA0003131252420000103
S9: updating labeled CT data images
Figure BDA0003131252420000104
S10: from label-free CT data sets DuTo eliminate selected CT image
Figure BDA0003131252420000105
S11: S2-S10 max times are performed in sequence, unless there is no labeled CT data set DuIf there is no more data in the data, the process is terminated.
Through the self-training process of the Teacher-Student, the segmentation effect of the focus segmentation model is improved.
In summary, in the present embodiment, a sample CT image is first obtained, and the reliability of the sample CT image is calculated, where the sample CT image includes an annotated image and an annotated-free image. And then according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus. And then, training a deep learning network according to the augmented CT image to obtain a focus segmentation model. And finally, optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model. The embodiment provides a training method of a breast CT image lesion segmentation model for semi-supervision, which retains lesion information, integrates the advantages of a fully-supervised loss function and a common semi-supervised loss function, saves a large amount of labor cost, and avoids the requirement of massive labeling data.
Exemplary device
An embodiment of the present invention further provides a training system for a lesion segmentation model of a semi-supervised chest CT image, as shown in fig. 4, the system includes: uncertainty evaluation module 10, image generation module 20, repair network module 30, self-learning module 40.
In specific implementation, the uncertainty evaluation module 10 is configured to acquire a sample CT image and calculate the reliability of the sample CT image, where the sample CT image includes an annotated image and an annotated-free image. Specifically, in the uncertainty evaluation module 10, the sample CT image is first acquired, and a semantic feature of the sample CT image is extracted using an encoder. And then introducing noise features and disturbance factors into the semantic features to obtain the semantic features with noise and a corresponding probability map. Then, the average value of the probability map is calculated, and the credibility of each sample CT image is calculated according to the probability map.
The image generation module 20 is configured to pair CT images according to the reliability of the sample CT image, and generate an augmented CT image, where the CT image is an image containing only a lesion. Specifically, the sample CT images are first sorted according to their confidence level. And then, matching the image with the highest reliability in the sample CT images with the image with the lowest reliability, matching the image with the second highest reliability in the sample CT images with the image with the second lowest reliability, and so on to complete the matching of the sample CT images. And mixing the images containing the focuses by using the credibility mask to obtain the augmented CT image.
The repairing network module 30 is configured to train a deep learning network according to the augmented CT image to obtain a focus segmentation model. The self-learning module 40 is used for optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model.
Based on the above embodiments, the present invention further provides a terminal device, and a schematic block diagram thereof may be as shown in fig. 5. The terminal equipment comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. Wherein the processor of the terminal device is configured to provide computing and control capabilities. The memory of the terminal equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a method for training a breast CT image lesion segmentation model. The display screen of the terminal equipment can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal equipment is arranged in the terminal equipment in advance and used for detecting the operating temperature of the internal equipment.
It will be understood by those skilled in the art that the block diagram shown in fig. 5 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the terminal device to which the solution of the present invention is applied, and a specific terminal device may include more or less components than those shown in the figure, or combine some components, or have a different arrangement of components.
In one embodiment, a terminal device is provided, where the terminal device includes a memory, a processor, and a program of a training method for a breast CT image lesion segmentation model stored in the memory and executable on the processor, and when the processor executes the program of the training method for the breast CT image lesion segmentation model, the following operation instructions are implemented:
acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image;
according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus;
training a deep learning network according to the augmented CT image to obtain a focus segmentation model;
and optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
In summary, the invention discloses a method and a system for training a lesion segmentation model of a semi-supervised chest CT image, wherein the method comprises the following steps: acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image; according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus; training a deep learning network according to the augmented CT image; and optimizing the segmentation precision based on a self-learning strategy of a teacher-student model to obtain a focus segmentation model. The invention provides a semi-supervised chest CT image lesion segmentation model training method, which retains lesion information, integrates the advantages of a fully supervised loss function and a common semi-supervised loss function, saves a large amount of labor cost and avoids the requirement of massive marking data.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for training a semi-supervised breast CT image lesion segmentation model, which is characterized by comprising the following steps:
acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image;
according to the reliability of the sample CT image, pairing the CT images and generating an augmented CT image, wherein the CT image is an image only containing a focus;
training a deep learning network according to the augmented CT image to obtain a focus segmentation model;
and optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model.
2. The method for training a lesion segmentation model based on a chest CT image as claimed in claim 1, wherein the obtaining a sample CT image and calculating the confidence level of the sample CT image comprises:
acquiring the sample CT images, and extracting semantic features of each sample CT image by using an encoder;
introducing noise features and disturbance factors into the semantic features to obtain the semantic features with noise and a corresponding probability map;
and calculating the average value of the probability map, and calculating the reliability of the CT image of each sample according to the average value of the probability map.
3. The method for training a lesion segmentation model based on a chest CT image as claimed in claim 2, wherein the obtaining a sample CT image and calculating the confidence level of the sample CT image further comprises:
and obtaining a credibility mask according to the average value of the probability map and based on a threshold value.
4. The method for training a lesion segmentation model based on a chest CT image as claimed in claim 3, wherein the step of performing the pairing of CT images and generating the augmented CT image according to the confidence level of the sample CT image comprises:
sequencing the sample CT images according to the credibility of the sample CT images;
matching the image with the highest reliability in the sample CT images with the image with the lowest reliability, matching the image with the second highest reliability in the sample CT images with the image with the second lowest reliability, and so on to complete the matching of the sample CT images;
and mixing the images containing the focuses by using the credibility mask to obtain the augmented CT image.
5. The method for training the breast CT image lesion segmentation model according to claim 1, wherein in the process of training the deep learning network according to the augmented CT image, the method for calculating the loss function comprises the following steps:
Lseg=Lsup+βLsem
wherein, β is a dynamic parameter changing along with the training process, and the calculation mode is as follows:
Figure FDA0003131252410000021
wherein, b1 and b2 are two hyper-parameters which are respectively set to be 0.1 and-5, and the value of beta is 0 to 0.1; l issupIs a conventional loss function, LsemAs a function of semi-supervised loss, enRepresenting the number of steps of the current training, EmaxRepresenting the total number of steps of the training, the value being related to the data set, EmaxValue 200 and value 300.
6. The method for training a breast CT image lesion segmentation model as claimed in claim 5, wherein the conventional loss function L issupThe calculation method is as follows:
Figure FDA0003131252410000022
wherein L isceRepresenting a cross-entropy function, FsegFor the encoder, I is the sample CT image, I is the batch of sample CT images, MIN is a mask image of the sample CT images and indicates the number of sample CT images in the batch.
7. The method for training a breast CT image lesion segmentation model as claimed in claim 5, wherein the conventional loss function L issemThe calculation method is as follows:
Figure FDA0003131252410000023
wherein K represents the logarithm of the image pair in the training batch;
Figure FDA0003131252410000024
a blended image representing a high confidence image in the kth pair of images,
Figure FDA0003131252410000025
a blended image mask representing a high confidence image in the kth pair of images,
Figure FDA0003131252410000026
a blended image representing low confidence images in the kth pair of images,
Figure FDA0003131252410000027
mixed image mask, L, representing low confidence images in the k-th pair of imagesceRepresenting a cross entropy function.
8. A system for training a lesion segmentation model of a semi-supervised chest CT image, the system comprising:
the uncertainty evaluation module is used for acquiring a sample CT image and calculating the reliability of the sample CT image, wherein the sample CT image comprises an annotated image and an annotated-free image;
the image generation module is used for pairing the CT images according to the reliability of the sample CT image and generating an augmented CT image, wherein the CT image is an image only containing a focus;
the repairing network module is used for training a deep learning network according to the augmented CT image to obtain a focus segmentation model;
and the self-learning module is used for optimizing the segmentation precision of the focus segmentation model based on a self-learning strategy of a teacher-student model.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor and a training program of a breast CT image lesion segmentation model stored in the memory and operable on the processor, and when the processor executes the training program of the breast CT image lesion segmentation model, the steps of the training method of the breast CT image lesion segmentation model according to any one of claims 1 to 7 are implemented.
10. A computer-readable storage medium, on which a training program of a breast CT image lesion segmentation model is stored, which, when executed by a processor, implements the steps of the training method of the breast CT image lesion segmentation model according to any one of claims 1 to 7.
CN202110705914.8A 2021-06-24 2021-06-24 Breast CT image focus segmentation model training method and system Active CN113344896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110705914.8A CN113344896B (en) 2021-06-24 2021-06-24 Breast CT image focus segmentation model training method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110705914.8A CN113344896B (en) 2021-06-24 2021-06-24 Breast CT image focus segmentation model training method and system

Publications (2)

Publication Number Publication Date
CN113344896A true CN113344896A (en) 2021-09-03
CN113344896B CN113344896B (en) 2023-01-17

Family

ID=77478477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110705914.8A Active CN113344896B (en) 2021-06-24 2021-06-24 Breast CT image focus segmentation model training method and system

Country Status (1)

Country Link
CN (1) CN113344896B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763413A (en) * 2021-09-30 2021-12-07 深圳大学 Training method of image segmentation model, image segmentation method and storage medium
CN114066871A (en) * 2021-11-19 2022-02-18 江苏科技大学 Method for training new coronary pneumonia focus region segmentation model
CN114140639A (en) * 2021-11-04 2022-03-04 杭州医派智能科技有限公司 Deep learning-based renal blood vessel extreme urine pole classification method in image, computer equipment and computer readable storage medium
CN114926471A (en) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN115147426A (en) * 2022-09-06 2022-10-04 北京大学 Model training and image segmentation method and system based on semi-supervised learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
WO2020143309A1 (en) * 2019-01-09 2020-07-16 平安科技(深圳)有限公司 Segmentation model training method, oct image segmentation method and apparatus, device and medium
CN112036335A (en) * 2020-09-03 2020-12-04 南京农业大学 Deconvolution-guided semi-supervised plant leaf disease identification and segmentation method
CN112150478A (en) * 2020-08-31 2020-12-29 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
CN112465111A (en) * 2020-11-17 2021-03-09 大连理工大学 Three-dimensional voxel image segmentation method based on knowledge distillation and countertraining
CN112560964A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 Method and system for training Chinese herbal medicine pest and disease identification model based on semi-supervised learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020143309A1 (en) * 2019-01-09 2020-07-16 平安科技(深圳)有限公司 Segmentation model training method, oct image segmentation method and apparatus, device and medium
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN112150478A (en) * 2020-08-31 2020-12-29 温州医科大学 Method and system for constructing semi-supervised image segmentation framework
CN112036335A (en) * 2020-09-03 2020-12-04 南京农业大学 Deconvolution-guided semi-supervised plant leaf disease identification and segmentation method
CN112465111A (en) * 2020-11-17 2021-03-09 大连理工大学 Three-dimensional voxel image segmentation method based on knowledge distillation and countertraining
CN112560964A (en) * 2020-12-18 2021-03-26 深圳赛安特技术服务有限公司 Method and system for training Chinese herbal medicine pest and disease identification model based on semi-supervised learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUMAN SEDAI ET AL: "Uncertainty guided semi-supervised segmentation of retinal layers in OCT images", 《ARXIV:2103.02083V1》 *
金兰依等: "基于半监督阶梯网络的肝脏CT影像分割", 《吉林大学学报(信息科学版)》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763413A (en) * 2021-09-30 2021-12-07 深圳大学 Training method of image segmentation model, image segmentation method and storage medium
CN113763413B (en) * 2021-09-30 2023-11-24 深圳大学 Training method of image segmentation model, image segmentation method and storage medium
CN114140639A (en) * 2021-11-04 2022-03-04 杭州医派智能科技有限公司 Deep learning-based renal blood vessel extreme urine pole classification method in image, computer equipment and computer readable storage medium
CN114066871A (en) * 2021-11-19 2022-02-18 江苏科技大学 Method for training new coronary pneumonia focus region segmentation model
CN114926471A (en) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN114926471B (en) * 2022-05-24 2023-03-28 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN115147426A (en) * 2022-09-06 2022-10-04 北京大学 Model training and image segmentation method and system based on semi-supervised learning

Also Published As

Publication number Publication date
CN113344896B (en) 2023-01-17

Similar Documents

Publication Publication Date Title
CN113344896B (en) Breast CT image focus segmentation model training method and system
WO2020168934A1 (en) Medical image segmentation method, apparatus, computer device, and storage medium
CN111932561A (en) Real-time enteroscopy image segmentation method and device based on integrated knowledge distillation
Zhang et al. Learn from all: Erasing attention consistency for noisy label facial expression recognition
CN109741346B (en) Region-of-interest extraction method, device, equipment and storage medium
US11922654B2 (en) Mammographic image processing method and apparatus, system and medium
CN113129309B (en) Medical image semi-supervised segmentation system based on object context consistency constraint
CN110189323B (en) Breast ultrasound image lesion segmentation method based on semi-supervised learning
CN110797101B (en) Medical data processing method, medical data processing device, readable storage medium and computer equipment
Zhang et al. Robust medical image segmentation from non-expert annotations with tri-network
CN110660055B (en) Disease data prediction method and device, readable storage medium and electronic equipment
CN111275118B (en) Chest film multi-label classification method based on self-correction type label generation network
CN116402838B (en) Semi-supervised image segmentation method and system for intracranial hemorrhage
CN112232407A (en) Neural network model training method and device for pathological image sample
CN114549470B (en) Hand bone critical area acquisition method based on convolutional neural network and multi-granularity attention
CN113658721A (en) Alzheimer disease process prediction method
CN112768065A (en) Facial paralysis grading diagnosis method and device based on artificial intelligence
CN113724359A (en) CT report generation method based on Transformer
CN115294086A (en) Medical image segmentation method, segmentation model training method, medium, and electronic device
Hu et al. Unsupervised computed tomography and cone-beam computed tomography image registration using a dual attention network
CN114067233B (en) Cross-mode matching method and system
CN109785940B (en) Method for typesetting medical image film
Song et al. Learning 3d features with 2d cnns via surface projection for ct volume segmentation
Guo Statistical inference for maximin effects: Identifying stable associations across multiple studies
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant