CN117036384A - Medical image segmentation method, device, equipment and storage medium - Google Patents

Medical image segmentation method, device, equipment and storage medium

Info

Publication number
CN117036384A
CN117036384A (application CN202311059057.4A)
Authority
CN
China
Prior art keywords
image segmentation
medical image
medical
model
segmentation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311059057.4A
Other languages
Chinese (zh)
Inventor
吴少智
李寒
彭伟航
林锦峰
刘欣刚
苏涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Research Institute of UESTC Huzhou
Original Assignee
Yangtze River Delta Research Institute of UESTC Huzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute of UESTC Huzhou filed Critical Yangtze River Delta Research Institute of UESTC Huzhou
Priority to CN202311059057.4A
Publication of CN117036384A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/0895 Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a medical image segmentation method, device, equipment and storage medium, wherein the method comprises the following steps: acquiring a medical image to be segmented; and inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented, wherein the target medical image segmentation model is a model obtained through training in both a self-supervised and a supervised manner. The technical scheme of the embodiment of the invention addresses the problems that existing medical image segmentation models are mostly obtained by training on labeled sample images, so that model training efficiency is low, the robustness of the medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient. Because the medical image segmentation model can be obtained through self-supervised and supervised training, the model training efficiency is improved, the robustness of the medical image segmentation model is improved, and the accuracy of image segmentation is further improved.

Description

Medical image segmentation method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a medical image segmentation method, a device, equipment and a storage medium.
Background
Most existing medical image segmentation models are obtained through supervised training on labeled sample images. However, manually annotating medical images takes a great deal of time, so labeled sample images are scarce; when model training is carried out based on only a few labeled sample images, the training efficiency of the medical image segmentation model is low, the robustness of the trained medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient.
Disclosure of Invention
The embodiment of the invention provides a medical image segmentation method, a device, equipment and a storage medium, which can obtain a medical image segmentation model through self-supervision and supervised training, and improve the robustness of the medical image segmentation model and the accuracy of image segmentation while improving the model training efficiency.
In a first aspect, an embodiment of the present invention provides a medical image segmentation method, including:
acquiring a medical image to be segmented;
inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented;
the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
In a second aspect, an embodiment of the present invention provides a medical image segmentation apparatus, including:
the medical image acquisition module is used for acquiring medical images to be segmented;
the medical image segmentation module is used for inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented;
the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
In a third aspect, an embodiment of the present invention provides a computer apparatus, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the medical image segmentation method as set forth in any one of the embodiments.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the medical image segmentation method according to any of the embodiments.
According to the technical scheme provided by the embodiment of the invention, a medical image to be segmented is acquired and input into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented, wherein the target medical image segmentation model is a model obtained through training in both a self-supervised and a supervised manner. This addresses the problems that existing medical image segmentation models are mostly obtained by training on labeled sample images, so that model training efficiency is low, the robustness of the medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient. Because the medical image segmentation model can be obtained through self-supervised and supervised training, the model training efficiency is improved, the robustness of the medical image segmentation model is improved, and the accuracy of image segmentation is further improved.
Drawings
FIG. 1 is a flow chart of a medical image segmentation method provided by an embodiment of the present invention;
FIG. 2 is a flow chart of yet another medical image segmentation method provided by an embodiment of the present invention;
FIG. 3 is a flowchart of another medical image segmentation method provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a medical image segmentation apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a flowchart of a medical image segmentation method according to an embodiment of the present invention, where the embodiment of the present invention is applicable to a scene of image segmentation of a medical image, the method may be performed by a medical image segmentation apparatus, and the apparatus may be implemented in software and/or hardware.
As shown in fig. 1, the medical image segmentation method comprises the steps of:
s110, acquiring a medical image to be segmented.
The medical image to be segmented may be a medical image that needs to be segmented, for example, the medical image to be segmented may be an X-ray image of a certain part of a patient.
S120, inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented.
The target medical image segmentation model may be a medical image segmentation model finally obtained through model training. The target medical image segmentation model is a model trained in a self-supervised and supervised manner. After the medical image to be segmented is acquired, the medical image to be segmented can be input into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented, and image segmentation of the medical image to be segmented is realized.
When the medical image segmentation model is trained, model training based purely on a supervised training mode is inefficient. On the one hand, because of patient privacy and because labeling medical images relies on specialist medical knowledge, labeled image data are difficult to collect; on the other hand, research statistics indicate that annotating a medical image for segmentation takes about twenty times longer than labeling an image for classification, and this time-consuming pixel-level labeling further exacerbates the scarcity of labeled medical images. Therefore, performing self-supervised training on the initial medical image segmentation model takes advantage of the fact that self-supervision does not require annotated medical images, so that the model training efficiency is improved.
Further, since the preliminary medical image segmentation model is obtained only through self-supervised training, it may suffer from insufficient robustness. In view of this, supervised model training can be performed on the preliminary medical image segmentation model on this basis. This improves the robustness of the model and further improves the segmentation accuracy when medical images are segmented based on the medical image segmentation model.
Considering that the preliminary medical image segmentation model has already undergone self-supervised training, the model has a certain image segmentation capability. Therefore, during supervised training of the preliminary medical image segmentation model, only a small number of annotated medical images need to be selected for training. By first performing self-supervised training on the medical image segmentation model and then performing supervised training, the robustness of the medical image segmentation model can be improved while the model training efficiency is improved, and the accuracy of image segmentation is further improved.
According to the technical scheme provided by the embodiment of the invention, a medical image to be segmented is acquired and input into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented, wherein the target medical image segmentation model is a model obtained through training in both a self-supervised and a supervised manner. This addresses the problems that existing medical image segmentation models are mostly obtained by training on labeled sample images, so that model training efficiency is low, the robustness of the medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient. Because the medical image segmentation model can be obtained through self-supervised and supervised training, the model training efficiency is improved, the robustness of the medical image segmentation model is improved, and the accuracy of image segmentation is further improved.
Fig. 2 is a flowchart of another medical image segmentation method according to an embodiment of the present invention. The embodiment of the present invention is applicable to a scenario in which a medical image is segmented, and further illustrates how, based on the above embodiment, a target medical image segmentation model is obtained through training in a self-supervised and supervised manner.
As shown in fig. 2, the medical image segmentation method comprises the steps of:
s210, performing self-supervision training on the initial medical image segmentation model based on the unlabeled medical sample image to obtain the initial medical image segmentation model.
The unlabeled medical sample image may be a preset medical image without an image segmentation label. The unlabeled medical sample image can be used as a training sample for the medical image segmentation model to perform self-supervised training on the medical image segmentation model. The preliminary medical image segmentation model may be a model obtained by training the initial medical image segmentation model in a self-supervised manner.
Specifically, in the process of performing self-supervised training on the initial medical image segmentation model based on an unlabeled medical sample image, the unlabeled medical sample image can be enhanced in two different image enhancement modes to obtain two images; the two images are then respectively input into the initial medical image segmentation model for image segmentation; the similarity between the image segmentation results of the two images is compared, and a model loss function is determined according to the similarity; finally, parameters of the initial medical image segmentation model are adjusted based on the model loss function to obtain the preliminary medical image segmentation model.
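The training step described above can be illustrated with the following minimal PyTorch-style sketch; the model wrapper, the two augmentation callables and the use of cosine similarity as the loss are illustrative assumptions rather than the exact implementation of this embodiment:

import torch
import torch.nn.functional as F

def self_supervised_step(model, optimizer, unlabeled_image, augment_a, augment_b):
    # Two differently enhanced views of the same unlabeled medical sample image
    view_a = augment_a(unlabeled_image)
    view_b = augment_b(unlabeled_image)
    # Feature/segmentation outputs of the initial medical image segmentation model
    feat_a = model(view_a)
    feat_b = model(view_b)
    # Model loss determined from the similarity between the two results
    similarity = F.cosine_similarity(feat_a.flatten(1), feat_b.flatten(1), dim=1)
    loss = (1.0 - similarity).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()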
When the medical image segmentation model is trained, model training based purely on a supervised training mode is inefficient. On the one hand, because of patient privacy and because labeling medical images relies on specialist medical knowledge, labeled image data are difficult to collect; on the other hand, research statistics indicate that annotating a medical image for segmentation takes about twenty times longer than labeling an image for classification, and this time-consuming pixel-level labeling further exacerbates the scarcity of labeled medical images. Therefore, performing self-supervised training on the initial medical image segmentation model takes advantage of the fact that self-supervision does not require annotated medical images, so that the model training efficiency is improved.
Further, since the preliminary medical image segmentation model is obtained only through self-supervised training, it may suffer from insufficient robustness. In view of this, supervised model training can be performed on the preliminary medical image segmentation model on this basis. This improves the robustness of the model and further improves the segmentation accuracy when medical images are segmented based on the medical image segmentation model.
Considering that the preliminary medical image segmentation model has already undergone self-supervised training, the model has a certain image segmentation capability. Therefore, during supervised training of the preliminary medical image segmentation model, only a small number of annotated medical images need to be selected for training. By first performing self-supervised training on the medical image segmentation model and then performing supervised training, the robustness of the medical image segmentation model can be improved while the model training efficiency is improved, and the accuracy of image segmentation is further improved.
S220, performing supervised training on the preliminary medical image segmentation model based on the labeled medical sample image to obtain a target medical image segmentation model.
The labeled medical sample image may be a preset medical image with an image segmentation label. The labeled medical sample image can be used as a training sample for supervised training of the medical image segmentation model. The target medical image segmentation model may be a model obtained by further training the preliminary medical image segmentation model. Specifically, supervised training is performed on the preliminary medical image segmentation model based on the labeled medical sample image to obtain the target medical image segmentation model.
Specifically, in the process of performing supervised training on the preliminary medical image segmentation model based on a labeled medical sample image, the labeled medical sample image can be input into the preliminary medical image segmentation model to obtain a corresponding image segmentation result; the image segmentation result is then compared with the image segmentation label of the image, and a model loss function is determined according to the comparison result; finally, parameters of the preliminary medical image segmentation model are adjusted according to the model loss function to obtain the target medical image segmentation model.
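A corresponding sketch of one supervised update is given below; the use of cross-entropy as the segmentation loss and the variable names are assumptions used only for illustration:

import torch.nn.functional as F

def supervised_step(model, optimizer, labeled_image, segmentation_label):
    # Segmentation result of the preliminary medical image segmentation model
    logits = model(labeled_image)
    # Compare the result with the image segmentation label to determine the loss
    loss = F.cross_entropy(logits, segmentation_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()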
S230, acquiring a medical image to be segmented.
The medical image to be segmented may be a medical image that needs to be segmented, for example, the medical image to be segmented may be an X-ray image of a certain part of a patient.
S240, inputting the medical image to be segmented into the target medical image segmentation model to obtain a segmentation result of the medical image to be segmented.
The target medical image segmentation model is obtained through a supervised and self-supervised training mode, so that the target medical image segmentation model can keep high robustness, and high image segmentation accuracy can be kept when the medical image is segmented based on the model.
After the medical image to be segmented is acquired, the medical image to be segmented can be input into a target medical image segmentation model, and a segmentation result of the medical image to be segmented is obtained. Image segmentation is carried out on the medical image to be segmented based on the target medical image segmentation model, so that the accuracy of image segmentation can be improved.
In an alternative embodiment, the target model training rounds may be determined in response to a model training round setting operation on a preset model training interactive interface; and the initial medical image segmentation model may be trained for the target model training rounds to obtain the target medical image segmentation model.
The preset model training interactive interface may be a preset interactive interface related to training of the medical image segmentation model. The target model training rounds may be the finally determined number of rounds for which the initial medical image segmentation model is trained. Specifically, the user can set the training rounds of the medical image segmentation model on the preset model training interactive interface, and after the model training round setting operation is received, the device can determine the target model training rounds in response to that operation. Since the learning efficiency of the model decreases continuously as the number of training iterations increases, setting corresponding target model training rounds and training the medical image segmentation model for those rounds improves the training efficiency of the model. Meanwhile, limiting the training rounds of the medical image segmentation model reduces the possibility of overfitting.
For example, the medical image segmentation model may be trained under the PyTorch framework, using an Adam (adaptive moment estimation) optimizer with a learning rate of 0.005; the learning rate is adjusted on a StepLR schedule, decreasing by a factor of 10 every 4000 iterations; the weight decay is 0.00001, the batch size is 8, and training stops after 120 rounds.
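Expressed under PyTorch, these settings could look roughly as follows; the model variable is a placeholder, and calling scheduler.step() once per iteration is an assumption consistent with the 4000-iteration interval described above:

import torch

# model = ...  # the medical image segmentation model being trained
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=1e-5)
# decrease the learning rate 10-fold every 4000 iterations (scheduler.step() called per iteration)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4000, gamma=0.1)
num_rounds = 120   # training stops after 120 rounds
batch_size = 8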
In an alternative embodiment, after obtaining the target medical image segmentation model, the method further comprises: inputting a preset verification medical image into the target medical image segmentation model; and calculating the loss of the image segmentation of the preset verification medical image by the target medical image segmentation model based on a preset similarity loss function to obtain a similarity loss result.
The preset verification medical image may be a medical image preset for verifying the image segmentation capability of the target medical image segmentation model. Specifically, the preset verification medical image may be a medical image that is different from both the unlabeled medical sample image and the labeled medical sample image. Because the target medical image segmentation model is a model trained based on the unlabeled medical sample image and the labeled medical sample image, in order to verify the image segmentation capability of the model more effectively, images different from both of these sample images can be selected for verification.
For example, a Dice-score similarity loss function may be used to calculate the loss of the image segmentation of the preset verification medical image by the target medical image segmentation model, so as to obtain the similarity loss result.
The preset similarity loss function may be a preset function for calculating the error generated by the model when performing image segmentation. The similarity loss result may be the error between the result of segmenting the preset verification medical image with the target medical image segmentation model and the correct segmentation result of that image. Specifically, the loss of the image segmentation of the preset verification medical image by the target medical image segmentation model can be calculated based on the preset similarity loss function to obtain the similarity loss result.
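A minimal sketch of a Dice-style similarity loss that could serve as the preset similarity loss function is shown below; the exact formulation used in this embodiment is not specified, and the smoothing term eps is an assumption:

import torch

def dice_loss(pred_mask, target_mask, eps=1e-6):
    # 1 - Dice coefficient between the predicted segmentation and the ground truth
    pred = pred_mask.flatten(1)
    target = target_mask.flatten(1)
    intersection = (pred * target).sum(dim=1)
    dice = (2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return (1 - dice).mean()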
After the similarity loss result is obtained, whether the image segmentation capability of the target medical image segmentation model meets the preset requirement can be judged based on the similarity loss result. If so, the training process of the model may be stopped. If the image segmentation capability is not met, the image segmentation capability of the medical image segmentation model can be improved through retraining and the like, so that the image segmentation capability meets the preset requirement.
According to the technical scheme provided by the embodiment of the invention, self-supervised training is performed on the initial medical image segmentation model based on the unlabeled medical sample image to obtain a preliminary medical image segmentation model; supervised training is then performed on the preliminary medical image segmentation model based on the labeled medical sample image to obtain a target medical image segmentation model; a medical image to be segmented is acquired; and the medical image to be segmented is input into the target medical image segmentation model to obtain a segmentation result of the medical image to be segmented. This addresses the problems that existing medical image segmentation models are mostly obtained by training on labeled sample images, so that model training efficiency is low, the robustness of the medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient. Because the medical image segmentation model can be obtained through self-supervised and supervised training, the model training efficiency is improved, the robustness of the medical image segmentation model is improved, and the accuracy of image segmentation is further improved.
FIG. 3 is a flowchart of another medical image segmentation method according to an embodiment of the present invention. The embodiment of the present invention is applicable to a scene in which a medical image is segmented, and, on the basis of the above embodiment, further describes how to perform self-supervised training on the initial medical image segmentation model based on the unlabeled medical sample image to obtain a preliminary medical image segmentation model, and how to perform supervised training on the preliminary medical image segmentation model based on the labeled medical sample image to obtain a target medical image segmentation model. The apparatus may be implemented in software and/or hardware and integrated into a computer device having application development functionality.
As shown in fig. 3, the medical image segmentation method comprises the steps of:
s310, respectively carrying out image enhancement on the non-label medical sample image based on two different image enhancement modes to obtain a first non-label medical sample image and a second non-label medical sample image.
The image enhancement mode may be a preset processing mode for performing image enhancement on the medical image. By way of example, the image enhancement modes may include the following: (1) cropping and filling: specifically center cropping and random cropping, followed by interpolation filling of the cropped image so that the image size remains unchanged; (2) rotation and scaling: rotating the image horizontally and vertically, and enlarging and reducing it; (3) contrast enhancement based on histogram equalization; (4) gamma (γ) transformation, which makes the gray scale of the output image an exponential function of that of the input image; (5) adding Gaussian noise to the image with 15% probability, the noise variance lying in [0, 0.1]; (6) applying Gaussian blur with 20% probability, with the Gaussian kernel parameter lying in [b, 1].
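For illustration only, the six enhancement modes might be assembled along the following lines with torchvision-style transforms; the output size, gamma value and blur kernel size are assumptions, RandomEqualize expects uint8 images, and the blur parameter range [b, 1] is left unspecified as in the description above:

import torch
from torchvision import transforms
import torchvision.transforms.functional as TF

def add_gaussian_noise(img, max_var=0.1):
    # (5) Gaussian noise with variance drawn from [0, 0.1]
    var = torch.rand(1).item() * max_var
    return img + torch.randn_like(img) * (var ** 0.5)

enhancement_modes = [
    transforms.RandomResizedCrop(size=224),                                   # (1) cropping, filled back to a fixed size
    transforms.Compose([transforms.RandomHorizontalFlip(p=1.0),
                        transforms.RandomVerticalFlip(p=1.0)]),               # (2) horizontal/vertical rotation (scaling omitted)
    transforms.RandomEqualize(p=1.0),                                         # (3) histogram-equalization contrast enhancement
    transforms.Lambda(lambda img: TF.adjust_gamma(img, gamma=1.5)),           # (4) gamma transformation
    transforms.RandomApply([transforms.Lambda(add_gaussian_noise)], p=0.15),  # (5) Gaussian noise with 15% probability
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5)], p=0.20), # (6) Gaussian blur with 20% probability
]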
The unlabeled medical sample image may be a preset medical image without an image segmentation label. The unlabeled medical sample image can be used as a training sample for self-supervised training of the medical image segmentation model. The first unlabeled medical sample image and the second unlabeled medical sample image may be medical sample images obtained by enhancing the unlabeled medical sample image with the two different image enhancement modes, respectively. The first unlabeled medical sample image and the second unlabeled medical sample image can be used as a pair of contrasting samples for training the medical image segmentation model.
For example, given an unlabeled medical sample image u, two image blocks containing semantically related overlapping areas are cut out of u; two combinations are randomly selected from the given six image enhancement modes, and the two image blocks extracted from u are each enhanced differently with them; the enhanced image blocks, u1 and u2, form a group of positive sample pairs.
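A hedged sketch of constructing such a positive pair is given below; the crop size, overlap, and the choice of a single mode per block (rather than a combination of modes) are simplifying assumptions:

import random
import torchvision.transforms.functional as TF

def make_positive_pair(u, enhancement_modes, crop_size=160, overlap=80):
    # Assumes u is a (C, H, W) tensor larger than crop_size + overlap // 2 in both dimensions
    _, h, w = u.shape
    top = random.randint(0, h - crop_size - overlap // 2)
    left = random.randint(0, w - crop_size - overlap // 2)
    block1 = TF.crop(u, top, left, crop_size, crop_size)
    block2 = TF.crop(u, top + overlap // 2, left + overlap // 2, crop_size, crop_size)  # overlapping, semantically related area
    aug1, aug2 = random.sample(enhancement_modes, 2)   # two different enhancements from the six modes
    u1, u2 = aug1(block1), aug2(block2)                # a group of positive sample pairs
    return u1, u2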
S320, respectively inputting the first untagged medical sample image and the second untagged medical sample image into the initial medical image segmentation model to obtain an untagged image segmentation result.
The initial medical image segmentation model may be an original, untrained medical image segmentation model. Specifically, the initial medical image segmentation model comprises two twin network models, namely an online learning network and a target network. The online learning network and the target network each comprise a feature encoder f, a mapping head g and a predictor head q.
The unlabeled image segmentation result may be a result of an image segmentation of the unlabeled medical sample image by the initial medical image segmentation model. Specifically, the first unlabeled medical sample image and the second unlabeled medical sample image may be respectively input to the initial medical image segmentation model, so as to obtain an unlabeled image segmentation result.
Specifically, when determining the label-free image segmentation result, a first label-free medical sample image and a second label-free medical sample image can be respectively input into an initial medical image segmentation model to obtain a first feature vector and a second feature vector; and calculating the similarity of the first feature vector and the second feature vector to obtain feature vector similarity, and finally taking the feature vector similarity as a label-free image segmentation result.
The first feature vector may be a result of image segmentation of the first unlabeled medical sample image. Specifically, a first unlabeled medical sample image may be input into an initial medical image segmentation model to obtain a first feature vector. Accordingly, the second feature vector may be a result of image segmentation of the second unlabeled medical sample image. Specifically, a second unlabeled medical sample image may be input into the initial medical image segmentation model to obtain a second feature vector.
Illustratively, the feature encoder f employs a ResNet-50 (deep residual network) to extract features, which are output as the result of the last average pooling layer. The obtained features are mapped to a feature space by the mapping head g, which adopts a multi-layer perceptron with one hidden layer, specifically a convolution layer, a feature normalization layer, a ReLU activation layer and a convolution layer. A 128-dimensional feature vector z is finally obtained.
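The encoder and mapping head described here could be sketched as follows; the hidden width of the mapping head is an assumption, while the 2048-channel ResNet-50 output and the 128-dimensional output vector follow the description above:

import torch
import torch.nn as nn
from torchvision.models import resnet50

class EncoderWithMappingHead(nn.Module):
    def __init__(self, hidden_dim=512, out_dim=128):
        super().__init__()
        backbone = resnet50()
        # Feature encoder f: ResNet-50 up to and including the last average pooling layer
        self.f = nn.Sequential(*list(backbone.children())[:-1])
        # Mapping head g: convolution - feature normalization - ReLU - convolution
        self.g = nn.Sequential(
            nn.Conv2d(2048, hidden_dim, kernel_size=1),
            nn.BatchNorm2d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_dim, out_dim, kernel_size=1),
        )

    def forward(self, x):
        feats = self.f(x)                 # (N, 2048, 1, 1) after average pooling
        z = self.g(feats).flatten(1)      # 128-dimensional feature vector z
        return z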
Further, the feature vector similarity may be a value reflecting the degree of similarity between the first feature vector and the second feature vector, and can be obtained by calculating the similarity between them. Specifically, cosine similarity may be used as the measure for comparing the first feature vector and the second feature vector: the cosine similarity between the two feature vectors is calculated and taken as the feature vector similarity. By calculating the feature vector similarity, the discrepancy between the features that the initial medical image segmentation model produces for the two differently enhanced medical images can be measured, the initial medical image segmentation model can be adjusted accordingly based on this discrepancy, and self-supervised training of the model is thereby realized.
S330, adjusting parameters of the initial medical image segmentation model based on the label-free image segmentation result to obtain a preliminary medical image segmentation model.
The preliminary medical image segmentation model may be a model obtained by training the initial medical image segmentation model in a self-supervised manner. Specifically, parameters of the initial medical image segmentation model can be adjusted based on the label-free image segmentation result to obtain the preliminary medical image segmentation model.
Optionally, when parameters of the initial medical image segmentation model are adjusted based on the unlabeled image segmentation result, an unlabeled segmentation loss function may be determined according to the feature vector similarity, and parameters of the initial medical image segmentation model may then be adjusted according to the unlabeled segmentation loss function to obtain the preliminary medical image segmentation model.
The label-free segmentation loss function may be the loss function used when the initial medical image segmentation model performs image segmentation on the preset unlabeled medical sample image. By determining the unlabeled segmentation loss function, parameters of the initial medical image segmentation model can conveniently be adjusted according to it to obtain the preliminary medical image segmentation model. Specifically, the unlabeled segmentation loss function can be determined from the above feature vector similarity, and the formula for determining the unlabeled segmentation loss l(i,j) according to the feature vector similarity is as follows:
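(The equation image is not reproduced in this text. A standard contrastive form consistent with the variable definitions that follow is sketched below as an assumption, not as the patent's exact expression.)

l_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbf{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)}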
where N is the total number of samples, zi represents the feature vector from the target network, zj represents the feature vector from the online learning network, z is the feature vector obtained by the mapping head, sim denotes the similarity, and τ is the temperature weight. The above loss should satisfy symmetry, i.e., u1 and u2 input respectively to the target network and the online learning network should also satisfy the consistency loss, so the overall loss is the sum of the two.
Since the preliminary medical image segmentation model is obtained only through self-supervised training, it may suffer from insufficient robustness. In view of this, supervised model training can be performed on the preliminary medical image segmentation model on this basis. This improves the robustness of the model and further improves the segmentation accuracy when medical images are segmented based on the medical image segmentation model.
S340, inputting the labeled medical sample image into the preliminary medical image segmentation model to obtain a labeled image segmentation result.
The labeled medical sample image may be a preset medical image with an image segmentation label. The labeled medical sample image can be used as a training sample for supervised training of the medical image segmentation model. The labeled image segmentation result may be the result of segmenting the labeled medical sample image with the preliminary medical image segmentation model. Specifically, the labeled medical sample image may be input into the preliminary medical image segmentation model to obtain the labeled image segmentation result.
S350, comparing the labeled image segmentation result with the image segmentation labels of the labeled medical sample image, and determining label segmentation loss according to the comparison result.
The image segmentation label may be a preset label of the image segmentation result annotated in advance for the labeled medical sample image. The labeled segmentation loss may be the error of the preliminary medical image segmentation model in segmenting the labeled medical sample image. Specifically, the labeled image segmentation result and the image segmentation label of the labeled medical sample image can be compared, and the labeled segmentation loss determined according to the comparison result. The formula for calculating the labeled segmentation loss for an input image ai is as follows:
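(This equation image is likewise not reproduced. A pixel-level supervised contrastive form consistent with the variable definitions that follow is sketched below as an assumption; P_p and N_p denote the positive and negative sample sets of pixel p, and f_p its feature — symbols introduced here only for illustration.)

l(a_i) = \frac{1}{|\Omega|} \sum_{p \in \Omega} \frac{-1}{|P_p|} \sum_{(u_p, v_p) \in P_p} \log \frac{\exp(\mathrm{sim}(f_p, f_{(u_p, v_p)})/\tau)}{\exp(\mathrm{sim}(f_p, f_{(u_p, v_p)})/\tau) + \sum_{(u', v') \in N_p} \exp(\mathrm{sim}(f_p, f_{(u', v')})/\tau)}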
where ai represents the input image, Ω represents all pixels of the image, (up, vp) represents a positive sample and (u′, v′) a negative sample. When label data are introduced, the positive sample set consists of all samples whose semantics are consistent with those of the anchor; if no labeled sample is introduced, the positive sample is only the feature at the corresponding coordinate of the other enhanced sample. The negative samples are the remaining samples after the positive sample set is removed.
It should be noted that background elements occupy the vast majority of the positive sample set of a feature map, and these background elements provide very little information for downstream segmentation. Therefore, Ω is restricted to pixel points of non-background elements. The average loss over the final batch is:
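(Assuming a simple average over the batch, the missing expression would read:)

L = \frac{1}{A} \sum_{i=1}^{A} l(a_i)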
where A represents the number of samples in a batch.
Optionally, when comparing the image segmentation result with the image segmentation label of the labeled medical sample image and determining the labeled segmentation loss according to the comparison result, the labeled medical sample image may be downsampled based on a preset step length to obtain a plurality of labeled medical sample image regions; the image region segmentation result of each labeled medical sample image region is then compared with the image segmentation label, and the labeled segmentation loss is determined according to the comparison results.
A labeled medical sample image region may be a partial image region of the labeled medical sample image. Specifically, the labeled medical sample image may be downsampled based on a preset step length to obtain a plurality of labeled medical sample image regions. After the labeled medical sample image regions are determined, the image region segmentation result of each region can be compared with the image segmentation label, and the labeled segmentation loss determined based on the comparison results.
Directly comparing the labeled image segmentation result with the image segmentation label of the labeled medical sample image over all pixels generates a large amount of computation. Therefore, the labeled medical sample image can be downsampled based on the preset step length to obtain a plurality of labeled medical sample image regions; the image region segmentation result of each labeled medical sample image region is compared with the image segmentation label, and the labeled segmentation loss is determined according to the comparison results. This reduces the amount of computation for the labeled segmentation loss and improves the training efficiency of the medical image segmentation model.
Illustratively, under the guidance of the labels, the supervised contrastive loss provides additional information about the similarity of features of the same class and the dissimilarity of features between classes. However, introducing the label data increases the complexity: computing the loss of each pixel point is O(n²), so the loss of one picture is O(n⁴).
In order to solve the problem of the large amount of computation, the data are downsampled with a fixed step size; to avoid losing the original information as much as possible, the step size is set to 4. Each feature map is divided into small blocks of n′×n′, the local contrastive loss is computed only within each block, and then all blocks are averaged.
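A rough sketch of this block-wise scheme is given below; the block size, tensor shapes and the per-block contrastive loss callable are assumptions:

import torch

def blockwise_contrastive_loss(feature_map, labels, local_loss_fn, stride=4, block=32):
    # Fixed-step downsampling (step size 4) of features and labels
    feats = feature_map[:, :, ::stride, ::stride]
    labs = labels[:, ::stride, ::stride]
    losses = []
    _, _, h, w = feats.shape
    # Divide each feature map into blocks and compute the local contrastive loss within each block
    for top in range(0, h, block):
        for left in range(0, w, block):
            f_blk = feats[:, :, top:top + block, left:left + block]
            l_blk = labs[:, top:top + block, left:left + block]
            losses.append(local_loss_fn(f_blk, l_blk))
    # Average over all blocks
    return torch.stack(losses).mean()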
S360, adjusting parameters of the preliminary medical image segmentation model according to the labeled segmentation loss to obtain the target medical image segmentation model.
The target medical image segmentation model may be a model obtained by training the preliminary medical image segmentation model. Specifically, parameters of the preliminary medical image segmentation model can be adjusted according to the labeled segmentation loss, and the target medical image segmentation model is obtained.
S370, acquiring a medical image to be segmented.
The medical image to be segmented may be a medical image that needs to be segmented, for example, the medical image to be segmented may be an X-ray image of a certain part of a patient.
S380, inputting the medical image to be segmented into the target medical image segmentation model to obtain a segmentation result of the medical image to be segmented.
The target medical image segmentation model is obtained through a supervised and self-supervised training mode, so that the target medical image segmentation model can keep high robustness, and high image segmentation accuracy can be kept when the medical image is segmented based on the model.
After the medical image to be segmented is acquired, the medical image to be segmented can be input into a target medical image segmentation model, and a segmentation result of the medical image to be segmented is obtained. Image segmentation is carried out on the medical image to be segmented based on the target medical image segmentation model, so that the accuracy of image segmentation can be improved.
According to the technical scheme provided by the embodiment of the invention, image enhancement is performed on the unlabeled medical sample image based on two different image enhancement modes to obtain a first unlabeled medical sample image and a second unlabeled medical sample image; the first and second unlabeled medical sample images are respectively input into an initial medical image segmentation model to obtain an unlabeled image segmentation result; parameters of the initial medical image segmentation model are adjusted based on the unlabeled image segmentation result to obtain a preliminary medical image segmentation model; the labeled medical sample image is input into the preliminary medical image segmentation model to obtain a labeled image segmentation result; the labeled image segmentation result is compared with the image segmentation label of the labeled medical sample image, and the labeled segmentation loss is determined according to the comparison result; parameters of the preliminary medical image segmentation model are adjusted according to the labeled segmentation loss to obtain a target medical image segmentation model; a medical image to be segmented is acquired; and the medical image to be segmented is input into the target medical image segmentation model to obtain a segmentation result of the medical image to be segmented.
The technical scheme of the embodiment of the invention addresses the problems that existing medical image segmentation models are mostly obtained by training on labeled sample images, so that model training efficiency is low, the robustness of the medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient. Because the medical image segmentation model can be obtained through self-supervised and supervised training, the model training efficiency is improved, the robustness of the medical image segmentation model is improved, and the accuracy of image segmentation is further improved.
Fig. 4 is a schematic structural diagram of a medical image segmentation apparatus according to an embodiment of the present invention, where the embodiment of the present invention is applicable to a scenario in which a medical image is segmented, and the apparatus may be implemented in software and/or hardware, and integrated into a computer device with an application development function.
As shown in fig. 4, the medical image segmentation apparatus includes: a medical image acquisition module 310 and a medical image segmentation module 320.
The medical image acquisition module is used for acquiring a medical image to be segmented; the medical image segmentation module is used for inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented; the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
According to the technical scheme provided by the embodiment of the invention, a medical image to be segmented is acquired and input into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented, wherein the target medical image segmentation model is a model obtained through training in both a self-supervised and a supervised manner. This addresses the problems that existing medical image segmentation models are mostly obtained by training on labeled sample images, so that model training efficiency is low, the robustness of the medical image segmentation model is insufficient, and the accuracy of image segmentation is insufficient. Because the medical image segmentation model can be obtained through self-supervised and supervised training, the model training efficiency is improved, the robustness of the medical image segmentation model is improved, and the accuracy of image segmentation is further improved.
In an alternative embodiment, the medical image segmentation apparatus further comprises: a medical image segmentation model training module, used for: performing self-supervised training on the initial medical image segmentation model based on the unlabeled medical sample image to obtain a preliminary medical image segmentation model; and performing supervised training on the preliminary medical image segmentation model based on the labeled medical sample image to obtain a target medical image segmentation model.
In an alternative embodiment, the medical image segmentation model training module includes: a self-supervised training unit, used for: respectively carrying out image enhancement on the unlabeled medical sample image based on two different image enhancement modes to obtain a first unlabeled medical sample image and a second unlabeled medical sample image; respectively inputting the first unlabeled medical sample image and the second unlabeled medical sample image into the initial medical image segmentation model to obtain an unlabeled image segmentation result; and adjusting parameters of the initial medical image segmentation model based on the unlabeled image segmentation result to obtain the preliminary medical image segmentation model.
In an alternative embodiment, the self-supervising training unit comprises: a self-supervising medical image segmentation subunit for: respectively inputting the first untagged medical sample image and the second untagged medical sample image into the initial medical image segmentation model to obtain a first feature vector and a second feature vector; and calculating the similarity of the first feature vector and the second feature vector to obtain feature vector similarity.
In an alternative embodiment, the self-supervised training unit further comprises: a self-supervised medical image model adjustment subunit, used for: determining an unlabeled segmentation loss function according to the feature vector similarity; and adjusting parameters of the initial medical image segmentation model according to the unlabeled segmentation loss function to obtain the preliminary medical image segmentation model.
In an alternative embodiment, the medical image segmentation model training module includes: a supervised training unit for: inputting the labeled medical sample image into the preliminary medical image segmentation model to obtain a labeled image segmentation result; comparing the labeled image segmentation result with the image segmentation label of the labeled medical sample image, and determining label segmentation loss according to the comparison result; and adjusting parameters of the preliminary medical image segmentation model according to the labeled segmentation loss to obtain the target medical image segmentation model.
In an alternative embodiment, the supervised training unit is specifically configured to: downsampling the labeled medical sample image based on a preset step length to obtain a plurality of labeled medical sample image areas; comparing the image region segmentation result of each labeled medical sample image region with the image segmentation labels, and determining the labeled segmentation loss according to the comparison result.
In an alternative embodiment, the medical image segmentation apparatus further comprises: a medical image segmentation model verification module, used for: after the target medical image segmentation model is obtained, inputting a preset verification medical image into the target medical image segmentation model; and calculating the loss of the image segmentation of the preset verification medical image by the target medical image segmentation model based on a preset similarity loss function to obtain a similarity loss result.
In an alternative embodiment, the medical image segmentation apparatus further comprises: the model training round setting module is used for: on a preset model training interactive interface, determining a target model training round in response to model training round setting operation; and training the initial medical image segmentation model based on the target model training turns to obtain the target medical image segmentation model.
The medical image segmentation device provided by the embodiment of the invention can execute the medical image segmentation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention. Fig. 5 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in fig. 5 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention. The computer device 12 may be any terminal device with computing capabilities and may be configured in a medical image segmentation device.
As shown in FIG. 5, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 20. As shown in fig. 5, the network adapter 20 communicates with other modules of the computer device 12 via the bus 18. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and performs data processing by running programs stored in the system memory 28, for example implementing the medical image segmentation method provided by this embodiment, which includes:
acquiring a medical image to be segmented;
inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented;
the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
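A minimal inference sketch of these steps, assuming a PyTorch model and a single-channel binary segmentation task, is shown below; the 0.5 threshold and the helper name are illustrative assumptions, not requirements of the embodiment.

```python
import torch

@torch.no_grad()
def segment_medical_image(model, image_tensor):
    """Feed a medical image (a C x H x W tensor) to the trained target
    segmentation model and threshold the output into a binary mask."""
    model.eval()
    logits = model(image_tensor.unsqueeze(0))        # add a batch dimension
    return (torch.sigmoid(logits) > 0.5).squeeze(0)  # 0.5 threshold is assumed
```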
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the medical image segmentation method as provided by any embodiment of the present invention, comprising:
acquiring a medical image to be segmented;
inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented;
the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or a combination thereof, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It will be appreciated by those of ordinary skill in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices; they may alternatively be implemented as program code executable by a computing device, so that they are stored in a storage device and executed by that device; or they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to those embodiments and may be embodied in many other equivalent forms without departing from the spirit of the invention, the scope of which is defined by the appended claims.

Claims (10)

1. A medical image segmentation method, comprising:
acquiring a medical image to be segmented;
inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented;
the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
2. The method according to claim 1, wherein the training in a self-supervised and supervised manner to obtain the target medical image segmentation model comprises:
performing self-supervised training on the initial medical image segmentation model based on the unlabeled medical sample image to obtain a preliminary medical image segmentation model;
and performing supervised training on the preliminary medical image segmentation model based on the labeled medical sample image to obtain the target medical image segmentation model.
3. The method of claim 2, wherein the performing self-supervised training on the initial medical image segmentation model based on the unlabeled medical sample image to obtain a preliminary medical image segmentation model comprises:
respectively performing image enhancement on the unlabeled medical sample image in two different image enhancement modes to obtain a first unlabeled medical sample image and a second unlabeled medical sample image;
respectively inputting the first unlabeled medical sample image and the second unlabeled medical sample image into the initial medical image segmentation model to obtain an unlabeled image segmentation result;
and adjusting parameters of the initial medical image segmentation model based on the unlabeled image segmentation result to obtain the preliminary medical image segmentation model.
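As a non-authoritative sketch of claim 3, the fragment below builds two differently enhanced views of one unlabeled image and passes both through the initial model; the particular torchvision transforms and the 224-pixel crop are assumptions of this sketch rather than features of the claim.

```python
import torchvision.transforms as T

# Two different image enhancement (augmentation) pipelines; both transforms
# here are illustrative choices only.
augment_a = T.Compose([T.RandomResizedCrop(224), T.RandomHorizontalFlip(p=1.0)])
augment_b = T.Compose([T.RandomResizedCrop(224), T.GaussianBlur(kernel_size=5)])

def two_view_forward(model, unlabeled_image):
    """Return the initial model's outputs for the first and second
    enhanced views of the same unlabeled medical sample image."""
    view_a = augment_a(unlabeled_image).unsqueeze(0)  # first unlabeled view
    view_b = augment_b(unlabeled_image).unsqueeze(0)  # second unlabeled view
    return model(view_a), model(view_b)
```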
4. The method according to claim 3, wherein the respectively inputting the first unlabeled medical sample image and the second unlabeled medical sample image into the initial medical image segmentation model to obtain an unlabeled image segmentation result comprises:
respectively inputting the first unlabeled medical sample image and the second unlabeled medical sample image into the initial medical image segmentation model to obtain a first feature vector and a second feature vector;
and calculating the similarity of the first feature vector and the second feature vector to obtain feature vector similarity.
5. The method of claim 4, wherein the adjusting parameters of the initial medical image segmentation model based on the unlabeled image segmentation result to obtain the preliminary medical image segmentation model comprises:
determining an unlabeled segmentation loss function according to the feature vector similarity;
and adjusting parameters of the initial medical image segmentation model according to the unlabeled segmentation loss function to obtain the preliminary medical image segmentation model.
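One common way to turn the feature vector similarity of claims 4 and 5 into an unlabeled segmentation loss is a negative cosine similarity, as used in SimSiam/BYOL-style self-supervision; the sketch below assumes that choice and assumes the model outputs can be flattened into feature vectors.

```python
import torch.nn.functional as F

def unlabeled_segmentation_loss(feat_a, feat_b):
    # Negative cosine similarity between the two feature vectors; the claim
    # only requires a loss derived from their similarity, so this is one choice.
    feat_a = F.normalize(feat_a.flatten(1), dim=1)
    feat_b = F.normalize(feat_b.flatten(1), dim=1)
    return 1.0 - (feat_a * feat_b).sum(dim=1).mean()

def self_supervised_step(model, optimizer, view_a, view_b):
    """One parameter update of the initial model from two augmented views
    of the same unlabeled medical sample image (batched tensors)."""
    loss = unlabeled_segmentation_loss(model(view_a), model(view_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```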
6. The method of claim 2, wherein the performing supervised training on the preliminary medical image segmentation model based on the labeled medical sample image to obtain the target medical image segmentation model comprises:
inputting the labeled medical sample image into the preliminary medical image segmentation model to obtain a labeled image segmentation result;
comparing the labeled image segmentation result with the image segmentation label of the labeled medical sample image, and determining the labeled segmentation loss according to the comparison result;
and adjusting parameters of the preliminary medical image segmentation model according to the labeled segmentation loss to obtain the target medical image segmentation model.
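For claim 6, a single supervised update of the preliminary model could look as follows; cross-entropy is only an assumed instantiation of the labeled segmentation loss, and the tensor shapes in the comments are illustrative.

```python
import torch.nn.functional as F

def supervised_step(model, optimizer, labeled_image, segmentation_label):
    """One supervised update of the preliminary model on a labeled medical
    sample image; logits are [N, C, H, W], labels are [N, H, W] class indices."""
    model.train()
    logits = model(labeled_image)
    loss = F.cross_entropy(logits, segmentation_label)  # assumed labeled loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```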
7. The method of claim 6, wherein the comparing the labeled image segmentation result with the image segmentation label of the labeled medical sample image and determining the labeled segmentation loss according to the comparison result comprises:
downsampling the labeled medical sample image based on a preset step length to obtain a plurality of labeled medical sample image regions;
comparing the image region segmentation result of each labeled medical sample image region with the image segmentation labels, and determining the labeled segmentation loss according to the comparison result.
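One reading of claim 7 is that the prediction and its label are divided into regions by the preset step length and the loss is averaged over those regions; the sketch below follows that reading, with a step of 64 pixels and a per-region cross-entropy chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def region_wise_loss(logits, label, step=64):
    """Split the prediction and its label into regions using a preset step
    length, compute a per-region cross-entropy, and average the results."""
    _, _, height, width = logits.shape
    region_losses = []
    for top in range(0, height, step):
        for left in range(0, width, step):
            region_logits = logits[:, :, top:top + step, left:left + step]
            region_label = label[:, top:top + step, left:left + step]
            region_losses.append(F.cross_entropy(region_logits, region_label))
    return torch.stack(region_losses).mean()
```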
8. The method according to claim 2, further comprising, after said deriving a target medical image segmentation model:
inputting a preset verification medical image into the target medical image segmentation model;
and calculating, based on a preset similarity loss function, the loss of the image segmentation of the preset verification medical image by the target medical image segmentation model, to obtain a similarity loss result.
9. The method according to claim 2, wherein the method further comprises:
on a preset model training interactive interface, determining a target number of model training rounds in response to a model training round setting operation;
and training the initial medical image segmentation model based on the target number of model training rounds to obtain the target medical image segmentation model.
10. A medical image segmentation apparatus, the apparatus comprising:
The medical image acquisition module is used for acquiring medical images to be segmented;
the medical image segmentation module is used for inputting the medical image to be segmented into a target medical image segmentation model to obtain a segmentation result of the medical image to be segmented;
the target medical image segmentation model is a model obtained through training in a self-supervised and supervised manner.
CN202311059057.4A 2023-08-21 2023-08-21 Medical image segmentation method, device, equipment and storage medium Pending CN117036384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311059057.4A CN117036384A (en) 2023-08-21 2023-08-21 Medical image segmentation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311059057.4A CN117036384A (en) 2023-08-21 2023-08-21 Medical image segmentation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117036384A true CN117036384A (en) 2023-11-10

Family

ID=88627922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311059057.4A Pending CN117036384A (en) 2023-08-21 2023-08-21 Medical image segmentation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117036384A (en)

Similar Documents

Publication Publication Date Title
CN109117831B (en) Training method and device of object detection network
CN111754596B (en) Editing model generation method, device, equipment and medium for editing face image
WO2018108129A1 (en) Method and apparatus for use in identifying object type, and electronic device
WO2023035531A1 (en) Super-resolution reconstruction method for text image and related device thereof
CN111368878B (en) Optimization method based on SSD target detection, computer equipment and medium
CN110188766B (en) Image main target detection method and device based on convolutional neural network
WO2019128979A1 (en) Keyframe scheduling method and apparatus, electronic device, program and medium
US11967125B2 (en) Image processing method and system
CN111311613A (en) Image segmentation model training method, image segmentation method and device
CN111639766A (en) Sample data generation method and device
CN113888541A (en) Image identification method, device and storage medium for laparoscopic surgery stage
CN110796108B (en) Method, device and equipment for detecting face quality and storage medium
CN115861462A (en) Training method and device for image generation model, electronic equipment and storage medium
WO2020103462A1 (en) Video search method and apparatus, computer device, and storage medium
CN110781849A (en) Image processing method, device, equipment and storage medium
CN114299366A (en) Image detection method and device, electronic equipment and storage medium
CN113762303B (en) Image classification method, device, electronic equipment and storage medium
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement
CN110768864B (en) Method and device for generating images in batches through network traffic
US20230021551A1 (en) Using training images and scaled training images to train an image segmentation model
WO2023109086A1 (en) Character recognition method, apparatus and device, and storage medium
CN114241411B (en) Counting model processing method and device based on target detection and computer equipment
CN115049546A (en) Sample data processing method and device, electronic equipment and storage medium
CN113807354B (en) Image semantic segmentation method, device, equipment and storage medium
CN117036384A (en) Medical image segmentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination