WO2023092959A1 - Image segmentation method, training method for the model thereof, related apparatus, and electronic device - Google Patents


Info

Publication number
WO2023092959A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmentation model
image segmentation
loss function
segmentation
Prior art date
Application number
PCT/CN2022/093353
Other languages
English (en)
French (fr)
Inventor
叶宇翔
陈翼男
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023092959A1 publication Critical patent/WO2023092959A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30084Kidney; Renal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Definitions

  • the present application relates to the technical field of artificial intelligence, in particular to an image segmentation method and its model training method, related devices, electronic equipment, and computer program products.
  • CT: Computed Tomography
  • MRI: Magnetic Resonance Imaging (nuclear magnetic resonance scanning)
  • high-precision, highly robust multi-organ segmentation in the abdomen is beneficial to computer-aided diagnosis and computer-aided surgical planning.
  • the present application provides an image segmentation method and its model training method, related devices, and electronic equipment.
  • the first aspect of the present application provides a training method for an image segmentation model, which is applied to a first image segmentation model and a second image segmentation model, where the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model;
  • the training method includes: obtaining medical image samples, where the medical image samples include a first medical image set annotated with at least one target object and an unannotated second medical image set; using the first image segmentation model to perform segmentation processing on the first medical image set to obtain a first segmentation result corresponding to the first medical image set; determining a first loss function of the first image segmentation model based on the first segmentation result; using the first image segmentation model and the second image segmentation model to separately segment the second medical image set, to obtain a second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and a third segmentation result corresponding to the second medical image set segmented by the second image segmentation model; determining a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result; and adjusting the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjusting the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
  • because the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model, the second loss function of the first image segmentation model can be determined based on the second segmentation result and the third segmentation result; the annotated first medical image set and the unannotated second medical image set are thus combined and jointly trained in a semi-supervised manner, which yields highly robust target object segmentation results.
  • in some embodiments, obtaining the medical image samples includes: performing balancing processing on each collected medical image set, so that the proportions of the medical images of each phase in the first medical image set are the same, and/or the proportions of the medical images of each phase in the second medical image set are the same.
  • because medical images from different sources differ in the scanning machines used, their parameters, and image quality, each collected medical image set is balanced so that the proportions of the medical images of each phase in the first medical image set are the same, and/or the proportions of the medical images of each phase in the second medical image set are the same; training with the balanced medical image samples ensures that the trained image segmentation model can be applied to multi-center medical image datasets with non-uniform annotations, and thus has good versatility.
  • in some embodiments, before determining the first loss function of the first image segmentation model based on the first segmentation result, the method includes: generating a loss-function weight for each target object based on the annotation information of the first medical image set and the volume prior information of each target object, where the volume prior information of each target object includes the mean volume of all annotations corresponding to that target object; and determining the first loss function of the first image segmentation model based on the first segmentation result includes: calculating the first loss function based on the first segmentation result and the loss-function weight of each target object.
  • in this way, the weight of each target object in the loss function can be balanced and controlled, so that multi-center medical image sets with partial annotations can be trained simultaneously, which makes the image segmentation model easy to converge.
  • in some embodiments, the first loss function includes a three-dimensional segmentation loss function;
  • calculating the first loss function based on the first segmentation result and the loss-function weight of each target object includes: analyzing the difference information between the first segmentation result and the annotation information in the first medical image set, and calculating the three-dimensional segmentation loss function of the first image segmentation model according to the loss-function weight of each target object.
  • in this way, the three-dimensional segmentation loss function of the first image segmentation model can be calculated, so the segmentation loss is computed in three-dimensional space, which gives the image segmentation model high segmentation performance.
  • in some embodiments, the first loss function further includes a two-dimensional projection boundary loss function; calculating the first loss function based on the first segmentation result and the loss-function weight of each target object further includes: projecting the first segmentation result onto a two-dimensional plane, obtaining the segmentation boundary information of each target object according to the projection result, and calculating the boundary loss of each target object on the two-dimensional projection through the segmentation boundary information, to obtain the two-dimensional projection boundary loss function of the first image segmentation model.
  • in this way, in addition to computing the segmentation loss function in three-dimensional space, the boundary loss of each target object is simultaneously computed on the two-dimensional projection without introducing much extra computation; the spatial position of the segmentation results can be better constrained, thereby improving the accuracy of the boundary segmentation of each target object, greatly improving the overall robustness of image segmentation, and accelerating the optimization of the image segmentation model while improving its segmentation performance and generalization ability.
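The projection-and-boundary computation described above can be sketched as follows. This is a minimal illustration, not the patent's exact formulation: it projects a binary 3D mask to 2D by max-projection along one axis, extracts the boundary as the mask minus its 4-neighbourhood erosion, and scores the mismatch between predicted and annotated boundaries. The function names and the choice of projection axis are assumptions.

```python
import numpy as np

def project_to_2d(mask_3d, axis=0):
    """Project a binary 3D mask onto a 2D plane via max-projection."""
    return mask_3d.max(axis=axis)

def boundary_2d(mask_2d):
    """Boundary pixels: the mask minus its 4-neighbourhood erosion."""
    padded = np.pad(mask_2d, 1, mode="constant")
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask_2d & ~eroded

def boundary_loss(pred_3d, target_3d, axis=0):
    """Mean mismatch between projected boundaries of prediction and target."""
    bp = boundary_2d(project_to_2d(pred_3d, axis))
    bt = boundary_2d(project_to_2d(target_3d, axis))
    return float(np.abs(bp.astype(float) - bt.astype(float)).mean())
```

Projecting all three axes and summing the per-axis boundary losses would be a natural extension of the same idea.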
  • in some embodiments, using the first image segmentation model and the second image segmentation model to separately segment the second medical image set, to obtain the second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and the third segmentation result corresponding to the second medical image set segmented by the second image segmentation model, includes: adding first random noise to the second medical image set and inputting it into the first image segmentation model to obtain the second segmentation result, and adding second random noise to the second medical image set and inputting it into the second image segmentation model to obtain the third segmentation result;
  • the second loss function includes a segmentation consistency loss function;
  • determining the second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result includes: analyzing the difference information between the second segmentation result and the third segmentation result to obtain the segmentation consistency loss function of the first image segmentation model.
  • in this way, the unannotated second medical image set is separately input into the first image segmentation model acting as the "student" model and the second image segmentation model acting as the "teacher" model to obtain the corresponding second and third segmentation results; by analyzing the difference information between the second segmentation result and the third segmentation result, the segmentation consistency loss function of the first and second image segmentation models can be obtained. The segmentation consistency loss function can then be used to optimize the image segmentation model so that it generates more stable segmentation results; that is, a large amount of unlabeled medical image data is used to improve the robustness of the image segmentation model.
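The consistency term described above can be sketched as follows. The Gaussian noise model and the mean-squared-error distance are common choices in this style of student/teacher training; they are assumptions here, since the text only speaks of "random noise" and "difference information".

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, scale=0.1, rng=rng):
    """Perturb an input volume with Gaussian noise before segmentation."""
    return x + rng.normal(0.0, scale, size=x.shape)

def consistency_loss(student_probs, teacher_probs):
    """Mean squared difference between student and teacher predictions."""
    return float(np.mean((student_probs - teacher_probs) ** 2))
```

In use, the same unlabeled batch would be perturbed twice independently, fed once to each model, and the two probability maps compared with `consistency_loss`.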
  • in some embodiments, adjusting the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjusting the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model, includes: propagating the first loss function and the second loss function to the first image segmentation model through a back-propagation algorithm, so as to optimally adjust the network parameters of the first image segmentation model; and taking a moving average of the adjusted network parameters of the first image segmentation model to obtain the adjusted network parameters of the second image segmentation model.
  • in this way, the network parameters of the first image segmentation model can be adjusted to optimize the first image segmentation model, and a moving average of the adjusted network parameters of the first image segmentation model then yields the adjusted network parameters of the second image segmentation model, i.e., the second image segmentation model is also optimized. By fully exploiting the first medical image set annotated with at least one target object and a large number of unannotated second medical image sets, the first image segmentation model as the "student" model and the second image segmentation model as the "teacher" model can be jointly optimized, improving the robustness and segmentation performance of the image segmentation models.
  • the second aspect of the present application provides an image segmentation method;
  • the image segmentation method includes: acquiring a medical image to be segmented; and using the first image segmentation model and/or the second image segmentation model to perform segmentation processing on the medical image to be segmented, to obtain the segmentation result corresponding to the medical image to be segmented, where the first image segmentation model and the second image segmentation model are obtained by the training method for an image segmentation model in the first aspect above.
  • the third aspect of the present application provides a training device for an image segmentation model. The training device includes: a sample acquisition module, configured to acquire medical image samples, where the medical image samples include a first medical image set annotated with at least one target object and an unannotated second medical image set; an image segmentation module, configured to segment the first medical image set using a first image segmentation model to obtain a first segmentation result corresponding to the first medical image set, and to segment the second medical image set using the first image segmentation model and a second image segmentation model respectively, to obtain a second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and a third segmentation result corresponding to the second medical image set segmented by the second image segmentation model, where the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model; and a loss function determination module, configured to determine a first loss function of the first image segmentation model based on the first segmentation result, and to determine a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result.
  • the fourth aspect of the present application provides an image segmentation device. The image segmentation device includes: an image acquisition module, configured to acquire a medical image to be segmented; and an image segmentation module, configured to use the first image segmentation model and/or the second image segmentation model to perform segmentation processing on the medical image to be segmented, to obtain the segmentation result corresponding to the medical image to be segmented, where the first image segmentation model and the second image segmentation model are obtained by the training method for an image segmentation model in the first aspect above.
  • the fifth aspect of the present application provides an electronic device, including a memory and a processor coupled to each other, where the processor is configured to execute the program instructions stored in the memory, so as to implement the training method for an image segmentation model in the first aspect above.
  • the sixth aspect of the present application provides a computer-readable storage medium, on which program instructions are stored; when the program instructions are executed by a processor, the training method for an image segmentation model in the first aspect above, or the image segmentation method in the second aspect above, is implemented.
  • the seventh aspect of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device implements the above methods.
  • in this way, using the first image segmentation model and the second image segmentation model to segment the second medical image set yields the second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and the third segmentation result corresponding to the second medical image set segmented by the second image segmentation model. Because the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model, the second loss function of the first image segmentation model can be determined based on the second segmentation result and the third segmentation result; the annotated first medical image set and the unannotated second medical image set are thus combined and jointly trained in a semi-supervised manner, yielding highly robust target object segmentation results.
  • Fig. 1 is the schematic flow chart of an embodiment of the training method of the image segmentation model of the present application
  • Fig. 2 is a schematic flow chart of another embodiment of the training method of the image segmentation model of the present application
  • Fig. 3 a is a schematic flow chart of an embodiment of step S24 in Fig. 2;
  • Fig. 3b is a schematic diagram of a segmentation model in an application scenario of the training method for an image segmentation model of the present application;
  • Fig. 4 is a schematic flow chart of an embodiment of the image segmentation method of the present application.
  • Fig. 5 is the frame schematic diagram of an embodiment of the training device of the image segmentation model of the present application.
  • Fig. 6 is a schematic frame diagram of an embodiment of an image segmentation device of the present application.
  • FIG. 7 is a schematic frame diagram of an embodiment of the electronic device of the present application.
  • FIG. 8 is a schematic diagram of an embodiment of a computer-readable storage medium of the present application.
  • the terms "system" and "network" are often used interchangeably herein.
  • the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/" herein generally indicates that the associated objects are in an "or" relationship.
  • "multiple" herein means two or more.
  • the method steps of the present application may be executed by hardware, or by a processor running computer-executable code.
  • FIG. 1 is a schematic flowchart of an embodiment of a training method for an image segmentation model in the present application.
  • the training method of the image segmentation model is applied to the first image segmentation model and the second image segmentation model
  • the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model
  • the first image segmentation model and the second image segmentation model are used to segment the target object in the medical image
  • the medical image may be a three-dimensional image of the abdomen
  • the corresponding target object includes, but is not limited to, the liver, left kidney, right kidney, spleen, and inferior vena cava
  • the training method of image segmentation model can comprise the following steps:
  • Step S11 Obtain medical image samples.
  • the medical image samples include a first set of medical images labeled with at least one target object and a second set of medical images without labels.
  • the first image segmentation model and the second image segmentation model are pre-built, and the medical image samples are used to train them; for example, the first medical image set includes both first abdominal images and second abdominal images, while the second medical image set includes only second abdominal images, where the first abdominal images all carry annotations and the second abdominal images carry none.
  • in some embodiments, the above step S11 includes: performing balancing processing on each collected medical image set, so that the proportions of the medical images of each phase in the first medical image set are the same, and/or the proportions of the medical images of each phase in the second medical image set are the same.
  • in some embodiments, the upper and lower boundaries of multiple organs can be located by a boundary regression model; for example, the region of interest covering multiple abdominal organs can be cropped out and invalid regions eliminated, reducing the probability that the image segmentation model is disturbed by irrelevant content in the medical images.
  • balancing each medical image set makes the proportions of the medical images of each phase in the first medical image set the same, and the proportions of the medical images of each phase in the second medical image set the same; for example, the first medical image set and the second medical image set each contain equal proportions of image data from the plain scan phase, arterial phase, portal venous phase, and delayed phase. Training with the balanced medical image samples ensures that the trained image segmentation model can be applied to multi-center medical image datasets with non-uniform annotations, and thus has good versatility.
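One simple way to realize the phase balancing described above is to downsample every phase to the size of the smallest one. The patent does not specify the resampling strategy, so the sketch below is an assumption, and the function and phase names are illustrative.

```python
import random
from collections import defaultdict

PHASES = ["plain", "arterial", "portal_venous", "delayed"]

def balance_by_phase(samples, rng=random.Random(0)):
    """Downsample so every contrast phase contributes equally.

    `samples` is a list of (image_id, phase) pairs; we keep the minimum
    per-phase count of samples from each phase, chosen at random.
    """
    by_phase = defaultdict(list)
    for item in samples:
        by_phase[item[1]].append(item)
    n = min(len(v) for v in by_phase.values())
    balanced = []
    for phase in by_phase:
        balanced.extend(rng.sample(by_phase[phase], n))
    return balanced
```

Upsampling the under-represented phases (sampling with replacement) would be an equally valid reading when discarding data is undesirable.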
  • Step S12 Using the first image segmentation model to perform segmentation processing on the first medical image set to obtain a first segmentation result corresponding to the first medical image set.
  • Step S13 Determine a first loss function of the first image segmentation model based on the first segmentation result.
  • through the above steps, the first segmentation result corresponding to the first medical image set can be obtained; since the first medical image set contains annotated first abdominal images, the first loss function of the first image segmentation model can be determined by comparing the first segmentation result with the annotations carried by the first abdominal images in the first medical image set.
  • Step S14 Segment the second medical image set using the first image segmentation model and the second image segmentation model respectively, to obtain the second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and the third segmentation result corresponding to the second medical image set segmented by the second image segmentation model.
  • Step S15 Determine a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result.
  • through the above steps, the second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and the third segmentation result corresponding to the second medical image set segmented by the second image segmentation model can be obtained. Since the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model, the second segmentation result and the third segmentation result are expected to be highly consistent, so the second loss function of the first image segmentation model can be determined by comparing them.
  • Step S16 Using the first loss function and the second loss function, adjust the network parameters of the first image segmentation model, and adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
  • through the above steps, the network parameters of the first image segmentation model can be adjusted according to the first loss function and the second loss function, thereby optimizing and updating the first image segmentation model; the updated second image segmentation model is then obtained by taking a moving average of the network parameters of the updated first image segmentation model.
  • in some embodiments, whether the first loss function and the second loss function have converged can be checked: when they converge, updating of the network parameters of the image segmentation models can be stopped; when they do not converge, the number of parameter adjustments can be tracked, and once it reaches a preset count, the final image segmentation model is determined from the network parameters at that point, reducing the probability that a non-converging loss function harms training efficiency.
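The stopping logic described above (stop on convergence, or fall back to a preset adjustment count) can be sketched as follows, with the actual parameter update and the convergence test left as caller-supplied placeholders:

```python
def train(step_fn, converged_fn, max_steps=10000):
    """Run updates until the losses converge or a step budget is exhausted.

    step_fn() performs one parameter update and returns the current
    (loss1, loss2); converged_fn(losses) decides convergence. Both are
    placeholders for the model-specific logic.
    """
    for step in range(1, max_steps + 1):
        losses = step_fn()
        if converged_fn(losses):
            return step, losses  # converged early
    return max_steps, losses     # budget exhausted; keep current parameters
```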
  • in some embodiments, the above step S16 specifically includes: propagating the first loss function and the second loss function to the first image segmentation model through a back-propagation algorithm, so as to optimally adjust the network parameters of the first image segmentation model; and taking a moving average of the adjusted network parameters of the first image segmentation model to obtain the adjusted network parameters of the second image segmentation model.
  • in this way, the network parameters of the first image segmentation model can be adjusted to optimize the first image segmentation model, and a moving average of those adjusted network parameters yields the adjusted network parameters of the second image segmentation model, i.e., the second image segmentation model is also optimized. By fully exploiting the first medical image set annotated with at least one target object and a large number of unannotated second medical image sets, the first image segmentation model as the "student" model and the second image segmentation model as the "teacher" model can be jointly optimized, improving the robustness and segmentation performance of the image segmentation models.
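In student/teacher training of this kind, the moving average over network parameters is typically an exponential moving average; a minimal sketch, with the decay rate as an assumed hyperparameter:

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Exponential moving average of student parameters into the teacher.

    Per parameter: teacher <- decay * teacher + (1 - decay) * student.
    Parameters are represented as a name -> value mapping for simplicity.
    """
    return {name: decay * teacher_params[name] + (1 - decay) * student_params[name]
            for name in teacher_params}
```

Called once per training step, this keeps the teacher a smoothed, more stable copy of the student.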
  • in this way, using the first image segmentation model and the second image segmentation model to segment the second medical image set yields the second segmentation result corresponding to the second medical image set segmented by the first image segmentation model and the third segmentation result corresponding to the second medical image set segmented by the second image segmentation model. Because the second image segmentation model is obtained by taking a moving average of the network parameters of the first image segmentation model, the second loss function of the first image segmentation model can be determined based on the second segmentation result and the third segmentation result; the annotated first medical image set and the unannotated second medical image set are thus combined and jointly trained in a semi-supervised manner, yielding highly robust target object segmentation results.
  • FIG. 2 is a schematic flowchart of another embodiment of the training method of the image segmentation model of the present application.
  • the training method of the image segmentation model may include the following steps:
  • Step S21 Obtain medical image samples.
  • the medical image samples include a first set of medical images labeled with at least one target object and a second set of medical images without labels.
  • Step S22 Based on the annotation information of the first medical image set and the volume prior information of each target object, generate the loss-function weight of each target object; the volume prior information of each target object includes the mean volume of all annotations corresponding to that target object.
  • in some embodiments, the first medical image set contains both annotated first abdominal images and unannotated second abdominal images;
  • the annotation information of the first medical image set contains a label value Label_i for each target object, where the label value of an annotated target object is 1, the label value of an unannotated target object is 0, and i denotes the index of the target object to be segmented.
  • for each target object, the average of the volume data over all annotations of that target object in the first medical image set is taken as its volume prior information V_i. According to the label value Label_i and the volume prior information V_i of each target object, a loss-function weight W_i unique to each target object is generated for use in the loss function calculation.
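The patent gives the formula for W_i as an equation that does not survive in this text. Purely as an illustration of how Label_i and V_i could combine, and explicitly not the patent's actual formula, one plausible weighting zeroes out unannotated objects and weights annotated ones inversely to their mean volume, so that small structures are not dominated by large ones:

```python
def loss_weight(label, volume_prior, eps=1e-8):
    """Illustrative weight: 0 for unannotated objects, inverse mean volume
    for annotated ones (a hypothetical reading of W_i, not the patent's formula).
    """
    return label / (volume_prior + eps)

# Hypothetical example: a large labeled organ, a small labeled vessel,
# and an organ with no annotation in this dataset.
w_liver = loss_weight(1, 1500.0)
w_vessel = loss_weight(1, 15.0)
w_missing = loss_weight(0, 400.0)
```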
  • Step S23 Segment the first medical image set by using the first image segmentation model to obtain a first segmentation result corresponding to the first medical image set.
  • Steps S21 and S23 are essentially the same as steps S11 and S12 in the embodiment above and are not repeated here.
  • Step S24: Calculate the first loss function based on the first segmentation result and the loss-function weight of each target object.
  • In this way, the weight of each target object in the loss function can be balanced and controlled, so that multi-center, partially annotated medical image sets can be trained simultaneously, making the image segmentation model easy to converge.
  • In this embodiment, the first loss function includes a three-dimensional segmentation loss function and a two-dimensional projection boundary loss function, and the second loss function mentioned above includes a segmentation consistency loss function. Step S24 may specifically include the following steps:
  • Step S241: Analyze the difference information between the first segmentation result and the annotation information in the first medical image set, and calculate the three-dimensional segmentation loss function of the first image segmentation model according to the loss-function weight of each target object.
  • Specifically, the three-dimensional segmentation loss function may include mainstream segmentation loss functions for 3D image segmentation, such as the Dice coefficient loss function l_dice_3d and the cross-entropy loss function l_cross_entropy_3d. By analyzing the difference information between the first segmentation result and the annotation information in the first medical image set, and weighting by the loss-function weight of each target object, the three-dimensional segmentation loss function of the first image segmentation model can be computed in three-dimensional space, giving the image segmentation model high segmentation performance.
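A minimal NumPy sketch of the weighted 3-D terms named above (a per-organ soft Dice loss plus a cross-entropy loss, scaled by the loss weights W_i). This is an illustration under my own conventions, not the patent's implementation; the channel layout and epsilon smoothing are assumptions.

```python
import numpy as np

# Sketch of l_dice_3d + l_cross_entropy_3d with per-organ weights W_i.
def weighted_dice_ce(probs, target, weights, eps=1e-6):
    """probs: (C, D, H, W) softmax output; target: (C, D, H, W) one-hot labels."""
    dice_terms, ce_terms = [], []
    for i, w in enumerate(weights):
        p, t = probs[i].ravel(), target[i].ravel()
        inter = (p * t).sum()
        dice = (2.0 * inter + eps) / (p.sum() + t.sum() + eps)
        dice_terms.append(w * (1.0 - dice))                 # l_dice_3d contribution
        ce_terms.append(w * -(t * np.log(p + eps)).mean())  # l_cross_entropy_3d contribution
    return float(sum(dice_terms) + sum(ce_terms))

# a perfect prediction drives both terms toward zero
t = np.zeros((2, 2, 2, 2)); t[0, ..., 0] = 1; t[1, ..., 1] = 1
loss_perfect = weighted_dice_ce(np.clip(t, 1e-6, 1 - 1e-6), t, weights=[0.5, 0.5])
```

A uniform (uninformative) prediction yields a clearly larger loss than the perfect one, which is the behavior the training step relies on.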
  • Step S242 Project the first segmentation result onto a two-dimensional plane, and obtain segmentation boundary information of each target object according to the projection result.
  • Step S243 Calculate the boundary loss of each target object on the two-dimensional projection through the segmentation boundary information, and obtain the two-dimensional projection boundary loss function of the first image segmentation model.
  • Since the Dice coefficient loss function l_dice_3d and the cross-entropy loss function l_cross_entropy_3d impose no constraint on the segmentation boundary, the Hausdorff distance, as a measure of shape similarity, is a useful complement; computing it in the three-dimensional segmentation space, however, would consume considerable computational resources. Therefore, the idea of projecting from high dimension to low dimension can be adopted: the three-dimensional segmentation result (that is, the first segmentation result) is projected along three planes (the ZY, ZX, and YX planes) to obtain three two-dimensional projections, from which the segmentation boundary information of each target object is obtained, so that the boundary loss of each target object on the two-dimensional projections can be calculated separately to obtain the two-dimensional projection boundary loss function l_project_2d of the first image segmentation model. Using the Hausdorff distance to define l_project_2d lets the image segmentation model better learn the spatial distribution characteristics of each target object, improving the accuracy of the segmentation boundaries and greatly reducing "isolated islands" in the segmentation.
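The projection-then-Hausdorff idea might be sketched as follows. Two simplifications are my assumptions: the "boundary information" here is the set of projected foreground pixels rather than an extracted contour, and the Hausdorff distance is computed by a small hand-rolled helper rather than the patent's (unspecified) implementation.

```python
import numpy as np

def directed_hausdorff(u, v):
    # max over points of u of the distance to the nearest point of v
    return max(float(np.min(np.linalg.norm(v - p, axis=1))) for p in u)

# Illustrative 2-D projection boundary term: project a 3-D binary mask onto the
# ZY, ZX and YX planes and compare predicted vs. ground-truth projections with
# a symmetric Hausdorff distance.
def projection_boundary_loss(pred_mask, gt_mask):
    losses = []
    for axis in range(3):  # collapse X, Y, Z in turn -> three 2-D planes
        p_pts = np.argwhere(pred_mask.max(axis=axis))
        g_pts = np.argwhere(gt_mask.max(axis=axis))
        losses.append(max(directed_hausdorff(p_pts, g_pts),
                          directed_hausdorff(g_pts, p_pts)))
    return float(np.mean(losses))

m = np.zeros((4, 4, 4)); m[1:3, 1:3, 1:3] = 1
m_shift = np.zeros((4, 4, 4)); m_shift[1:3, 1:3, 2:4] = 1
same = projection_boundary_loss(m, m)           # identical masks
shifted = projection_boundary_loss(m, m_shift)  # shifted mask -> positive loss
```

An identical prediction yields zero boundary loss, while a one-voxel shift along Z is penalized on the two planes where the projections differ, which is why this term constrains spatial position.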
  • In other words, by projecting the first segmentation result onto two-dimensional planes, obtaining the segmentation boundary information of each target object from the projections, and then computing each object's boundary loss on the two-dimensional projections, the two-dimensional projection boundary loss function of the first image segmentation model is obtained. Thus, in addition to the segmentation loss computed in three-dimensional space, the boundary loss of each target object is computed simultaneously at the two-dimensional level. Without introducing much extra computation, this better constrains the spatial position of the segmentation results, thereby improving the boundary segmentation accuracy of each target object, greatly improving the overall robustness of image segmentation, accelerating the optimization of the image segmentation model, and improving its segmentation performance and generalization ability.
  • Step S25: Add first random noise to the second medical image set and input it into the first image segmentation model to obtain the second segmentation result; add second random noise to the second medical image set and input it into the second image segmentation model to obtain the third segmentation result.
  • Step S26: Analyze the difference information between the second segmentation result and the third segmentation result to obtain the segmentation consistency loss function of the first image segmentation model.
  • Under this semi-supervised training strategy, the unlabeled second medical image set, after random noise is added, is input separately into the first image segmentation model acting as the "student" model and the second image segmentation model acting as the "teacher" model, yielding the corresponding second and third segmentation results. By analyzing the difference information between the second and third segmentation results, the segmentation consistency loss function l_consistency of the first and second image segmentation models is obtained. The image segmentation model can then be optimized through l_consistency so that it generates more stable segmentation results; that is, a large amount of unlabeled medical image data is used to improve the robustness of the image segmentation model.
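A hedged sketch of the consistency term l_consistency described above: the same unlabeled batch is perturbed with two independent noises, passed through the student and teacher, and the mean squared difference of their predictions is penalized. The stand-in "models", the Gaussian noise, and its level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_loss(student, teacher, batch, noise_std=0.1):
    noisy1 = batch + rng.normal(0.0, noise_std, batch.shape)  # first random noise
    noisy2 = batch + rng.normal(0.0, noise_std, batch.shape)  # second random noise
    return float(np.mean((student(noisy1) - teacher(noisy2)) ** 2))

# toy "networks" with constant output stand in for the segmentation models
identity = lambda x: x * 0.0 + 0.5
batch = rng.normal(size=(2, 8, 8))
zero_gap = consistency_loss(identity, identity, batch)  # agreeing models -> zero loss
```

When student and teacher disagree, the loss is positive, so minimizing it pushes the student toward predictions that are stable under input perturbation.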
  • Step S27: Using the first loss function and the second loss function, adjust the network parameters of the first image segmentation model, and adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
  • The resulting first loss function thus includes the three-dimensional segmentation loss function (the Dice coefficient loss function l_dice_3d and the cross-entropy loss function l_cross_entropy_3d) and the two-dimensional projection boundary loss function l_project_2d, while the second loss function includes the segmentation consistency loss function l_consistency. The network parameters of the first image segmentation model, acting as the "student" model, can then be continuously updated and optimized by minimizing the loss function via gradient descent, where the loss function Loss of the first image segmentation model satisfies formula (2):
  • Loss = l_dice_3d + l_cross_entropy_3d + l_consistency + l_project_2d    (2)
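The student/teacher update scheme — the student minimizes the total loss Loss = l_dice_3d + l_cross_entropy_3d + l_consistency + l_project_2d by gradient descent, while the teacher's parameters follow a sliding average of the student's — might be sketched as below. The decay value 0.99 is an illustrative assumption; the patent only states that the teacher's parameters are a sliding (moving) average of the student's.

```python
# "Student" updates by gradient descent; "teacher" by exponential moving average.
def sgd_step(student_params, grads, lr=0.01):
    return [p - lr * g for p, g in zip(student_params, grads)]

def ema_update(teacher_params, student_params, decay=0.99):
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

student = [1.0, -2.0]
teacher = list(student)
student = sgd_step(student, grads=[0.5, -0.5])  # minimize Loss via gradient descent
teacher = ema_update(teacher, student)          # teacher trails the student slowly
```

The teacher moves far less per step than the student, which is what makes its predictions a stable target for the consistency loss.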
  • step S27 is basically similar to step S16 in the above embodiment of the present application, and will not be repeated here.
  • FIG. 3b is a schematic diagram of the segmentation model in an application scenario of the training method of the image segmentation model of the present application.
  • During training, the student model is trained using the volume prior information of each target object and the annotation information of the partially annotated data set, which balances and controls the weight of each target object in the loss function. In addition, a large amount of unannotated multi-organ data is also exploited: the unannotated data set, with random noise added, is input into the "student" and "teacher" models separately, and by analyzing the difference between their segmentation results, the segmentation consistency loss function l_consistency of the image segmentation model is obtained. The model can then be optimized through l_consistency so that it generates more stable segmentation results. Therefore, during training, combining the self-supervised task of judging segmentation consistency with the main task of multi-organ segmentation improves the encoding and decoding robustness of the image segmentation model and its generalization ability to unknown data sets.
  • FIG. 4 is a schematic flowchart of an embodiment of an image segmentation method of the present application. Specifically, the following steps may be included:
  • Step S41 Obtain the medical image to be segmented.
  • Step S42 Segment the medical image to be segmented using the first image segmentation model and/or the second image segmentation model to obtain a segmentation result corresponding to the medical image to be segmented.
  • the first image segmentation model and the second image segmentation model are obtained by using the above-mentioned image segmentation model training method.
  • The image segmentation method of this application can be used by imaging physicians, when reading images for assisted diagnosis, to observe the three-dimensional morphology of each abdominal organ, for example the smoothness of the three-dimensional mask on the liver surface. It can also be used to automatically calculate gray-value relationships between organs from the corresponding three-dimensional segmentation results; for example, in the plain-scan phase, the ratio between liver parenchyma and spleen parenchyma has guiding significance for evaluating the degree of fatty liver. In addition, it can be used for three-dimensional visualization of abdominal organs in a surgical planning assistance system.
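The liver/spleen gray-value ratio mentioned above can be computed directly from the 3-D masks. A toy sketch follows; the label values (1 = liver, 2 = spleen) and the synthetic intensities are illustrative assumptions, not anything specified in the application.

```python
import numpy as np

# Mean gray-value ratio between two segmented organs (e.g. liver vs. spleen
# parenchyma on a plain-scan CT), computed from a label-map segmentation.
def organ_intensity_ratio(image, seg, organ_a=1, organ_b=2):
    mean_a = image[seg == organ_a].mean()
    mean_b = image[seg == organ_b].mean()
    return float(mean_a / mean_b)

img = np.zeros((2, 4, 4)); seg = np.zeros((2, 4, 4), dtype=int)
img[0], seg[0] = 55.0, 1  # "liver" voxels, mean attenuation 55 (assumed)
img[1], seg[1] = 50.0, 2  # "spleen" voxels, mean attenuation 50 (assumed)
ratio = organ_intensity_ratio(img, seg)
```

A ratio near or below 1 on a plain scan is the kind of signal the text says can guide fatty-liver assessment; the threshold itself is a clinical question outside this sketch.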
  • FIG. 5 is a schematic frame diagram of an embodiment of a training device for an image segmentation model of the present application.
  • The training device 50 of the image segmentation model includes: a sample acquisition module 500, configured to acquire medical image samples, where the medical image samples include a first medical image set labeled with at least one target object and an unlabeled second medical image set; an image segmentation module 502, configured to segment the first medical image set using the first image segmentation model to obtain a first segmentation result corresponding to the first medical image set, and to segment the second medical image set using the first image segmentation model and the second image segmentation model separately, obtaining a second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and a third segmentation result corresponding to the second medical image set as segmented by the second image segmentation model, where the second image segmentation model is obtained by taking a sliding average of the network parameters of the first image segmentation model; a loss function determination module 504, configured to determine a first loss function of the first image segmentation model based on the first segmentation result, and to determine a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result; and a parameter adjustment module 506, configured to adjust the network parameters of the first image segmentation model using the first loss function and the second loss function, and to adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
  • In this way, the image segmentation model can achieve highly robust target-object segmentation results and the ability to generalize to unknown medical images.
  • The sample acquisition module 500 may be specifically configured to perform balancing processing on each collected medical image set, so that the proportions of the medical images of each phase in the first medical image set are the same, and/or the proportions of the medical images of each phase in the second medical image set are the same.
  • The image segmentation module 502 may further be specifically configured to generate a loss-function weight for each target object based on the annotation information of the first medical image set and the volume prior information of each target object, where the volume prior information of each target object includes the mean volume of all labels corresponding to that target object.
  • the loss function determining module 504 may be specifically configured to calculate and obtain the first loss function based on the first segmentation result and the weight of the loss function of each target object.
  • the first loss function includes a three-dimensional segmentation loss function
  • The loss function determination module 504 is specifically configured to analyze the difference information between the first segmentation result and the annotation information in the first medical image set, and calculate the three-dimensional segmentation loss function of the first image segmentation model according to the loss-function weight of each target object.
  • the first loss function further includes a two-dimensional projection boundary loss function
  • The loss function determination module 504 is specifically configured to project the first segmentation result onto two-dimensional planes, obtain the segmentation boundary information of each target object according to the projection results, calculate the boundary loss of each target object on the two-dimensional projections through the segmentation boundary information, and obtain the two-dimensional projection boundary loss function of the first image segmentation model.
  • The image segmentation module 502 may be specifically configured to add first random noise to the second medical image set and input it into the first image segmentation model to obtain the second segmentation result, and to add second random noise to the second medical image set and input it into the second image segmentation model to obtain the third segmentation result.
  • The second loss function includes a segmentation consistency loss function, and the loss function determination module 504 may be specifically configured to analyze the difference information between the second segmentation result and the third segmentation result to obtain the segmentation consistency loss function of the first image segmentation model.
  • The parameter adjustment module 506 may be specifically configured to propagate the first loss function and the second loss function to the first image segmentation model through a back-propagation algorithm so as to adjust the network parameters of the first image segmentation model, and to take a sliding average of the adjusted network parameters of the first image segmentation model to obtain the adjusted network parameters of the second image segmentation model.
  • FIG. 6 is a schematic frame diagram of an embodiment of an image segmentation device of the present application.
  • the image segmentation device 60 includes: an image acquisition module 600 for acquiring a medical image to be segmented; an image segmentation module 602 for performing segmentation processing on the medical image to be segmented by using the first image segmentation model and/or the second image segmentation model , to obtain a segmentation result corresponding to the medical image to be segmented; wherein, the first image segmentation model and the second image segmentation model are obtained by using the above-mentioned image segmentation model training method.
  • Since both the first image segmentation model and the second image segmentation model are obtained by combining the labeled first medical image set and the unlabeled second medical image set and jointly training them in a semi-supervised manner, both models achieve highly robust target-object segmentation results and generalization ability to unknown medical images. Therefore, segmenting the medical image to be segmented with the first image segmentation model and/or the second image segmentation model can produce an accurate segmentation result corresponding to that image.
  • FIG. 7 is a schematic frame diagram of an embodiment of an electronic device of the present application.
  • The electronic device 70 includes a memory 71 and a processor 72 coupled to each other, and the processor 72 is configured to execute the program instructions stored in the memory 71, so as to implement the steps in any of the above embodiments of the training method of the image segmentation model, or the steps in any of the above embodiments of the image segmentation method.
  • the electronic device 70 may include, but is not limited to: a microcomputer and a server.
  • the processor 72 is used to control itself and the memory 71 to implement the steps in any of the above embodiments of the image segmentation model training method, or the steps in any of the above embodiments of the image segmentation method.
  • the processor 72 may also be called a CPU (Central Processing Unit, central processing unit).
  • the processor 72 may be an integrated circuit chip with signal processing capability.
  • the processor 72 can also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the processor 72 may be jointly implemented by an integrated circuit chip.
  • In this scheme, the processor combines the labeled first medical image set with the unlabeled second medical image set and uses a semi-supervised method for joint training, so that the image segmentation model can achieve highly robust target-object segmentation results and generalization ability to unseen medical images.
  • FIG. 8 is a schematic frame diagram of an embodiment of a computer-readable storage medium of the present application.
  • the computer-readable storage medium 80 stores program instructions 800 that can be executed by the processor, and the program instructions 800 are used to implement the steps in any of the above-mentioned image segmentation model training method embodiments, or the steps in any of the above-mentioned image segmentation method embodiments .
  • The present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes it to implement the above methods.
  • the disclosed methods and devices may be implemented in other ways.
  • the device implementations described above are only illustrative.
  • The division into modules or units is only a division by logical function; in actual implementation there may be other ways of division, for example, units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • A unit described as a separate component may or may not be physically separate, and a component shown as a unit may or may not be a physical unit; that is, it may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • an integrated unit is implemented in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application is essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium , including several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) execute all or part of the steps of the methods in various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

Provided are an image segmentation method, a training method for its model, a related apparatus, and an electronic device. The training method of the image segmentation model includes: acquiring medical image samples comprising a first medical image set labeled with at least one target object and an unlabeled second medical image set; segmenting the first medical image set with a first image segmentation model to obtain a first segmentation result corresponding to the first medical image set (S12), and determining a first loss function of the first image segmentation model; segmenting the second medical image set separately with the first image segmentation model and with a second image segmentation model obtained by taking a sliding average of the first model's network parameters, obtaining corresponding second and third segmentation results, and determining a second loss function of the first image segmentation model; and adjusting the network parameters of the image segmentation models according to the first loss function and the second loss function. The scheme can obtain highly robust target-object segmentation results.

Description

Image segmentation method, training method for a model thereof, related apparatus, and electronic device
This application claims priority to Chinese patent application No. 202111395877.1, filed on November 23, 2021 and entitled "Image segmentation method, training method for a model thereof, related apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of artificial intelligence, and in particular to an image segmentation method, a training method for a model thereof, a related apparatus, an electronic device, and a computer program product.
Background
Segmentation of target objects such as organs and blood vessels in medical images such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) scans is of great clinical significance. For example, high-precision, highly robust abdominal multi-organ segmentation benefits computer-aided diagnosis and computer-aided surgical planning.
In practical research, however, how to improve the generality of image segmentation has become a topic of great research value.
Summary
This application provides an image segmentation method, a training method for a model thereof, a related apparatus, and an electronic device.
A first aspect of this application provides a training method for an image segmentation model, applied to a first image segmentation model and a second image segmentation model, the second image segmentation model being obtained by taking a sliding average of the network parameters of the first image segmentation model. The training method includes: acquiring medical image samples, where the medical image samples include a first medical image set labeled with at least one target object and an unlabeled second medical image set; segmenting the first medical image set using the first image segmentation model to obtain a first segmentation result corresponding to the first medical image set; determining a first loss function of the first image segmentation model based on the first segmentation result; segmenting the second medical image set using the first image segmentation model and the second image segmentation model separately, to obtain a second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and a third segmentation result corresponding to the second medical image set as segmented by the second image segmentation model; determining a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result; and adjusting the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjusting the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
Therefore, by acquiring medical image samples that include a first medical image set labeled with at least one target object and an unlabeled second medical image set, and segmenting the first medical image set with the first image segmentation model, a first segmentation result corresponding to the first medical image set is obtained, from which the first loss function of the first image segmentation model can be determined. In addition, segmenting the second medical image set with the first and second image segmentation models separately yields the second and third segmentation results. Since the second image segmentation model is obtained by taking a sliding average of the network parameters of the first image segmentation model, the second loss function of the first image segmentation model can be determined based on the second and third segmentation results. The labeled first medical image set and the unlabeled second medical image set are thus combined and jointly trained in a semi-supervised manner, yielding highly robust target-object segmentation results.
The acquiring of medical image samples includes: performing balancing processing on each collected medical image set so that the proportions of the medical images of each phase in the first medical image set are the same, and/or the proportions of the medical images of each phase in the second medical image set are the same.
Considering that the medical images composing each medical image set come from different data sets, each with different annotations, different scanner parameters, and different image quality, balancing each collected medical image set so that the phase proportions are the same in the first medical image set and/or the second medical image set, and training on the balanced samples, ensures that the trained image segmentation model is applicable to multi-center medical image data sets with inconsistent annotations and has good generality.
Before determining the first loss function of the first image segmentation model based on the first segmentation result, the method includes: generating a loss-function weight for each target object based on the annotation information of the first medical image set and the volume prior information of each target object, where the volume prior information of each target object includes the mean volume of all annotations corresponding to that target object. Determining the first loss function based on the first segmentation result then includes: calculating the first loss function based on the first segmentation result and the loss-function weight of each target object.
Therefore, according to the volume prior information of each target object and the annotation information of the first medical image set, the weight of each target object in the loss function can be balanced and controlled, so that multi-center, partially annotated medical image sets can be trained simultaneously, making the image segmentation model easy to converge.
The first loss function includes a three-dimensional segmentation loss function. Calculating the first loss function based on the first segmentation result and the loss-function weight of each target object includes: analyzing the difference information between the first segmentation result and the annotation information in the first medical image set, and calculating the three-dimensional segmentation loss function of the first image segmentation model according to the loss-function weight of each target object.
Therefore, by analyzing the difference information between the first segmentation result and the annotation information in the first medical image set, and according to the loss-function weight of each target object, the three-dimensional segmentation loss function of the first image segmentation model can be calculated in three-dimensional space, giving the image segmentation model high segmentation performance.
The first loss function further includes a two-dimensional projection boundary loss function. Calculating the first loss function based on the first segmentation result and the loss-function weight of each target object further includes: projecting the first segmentation result onto two-dimensional planes and obtaining the segmentation boundary information of each target object according to the projection results; and calculating the boundary loss of each target object on the two-dimensional projections through the segmentation boundary information to obtain the two-dimensional projection boundary loss function of the first image segmentation model.
Therefore, by projecting the first segmentation result onto two-dimensional planes, obtaining the segmentation boundary information of each target object from the projections, and computing each object's boundary loss on the two-dimensional projections to obtain the two-dimensional projection boundary loss function of the first image segmentation model, the boundary loss of each target object is computed at the two-dimensional level in addition to the segmentation loss computed in three-dimensional space. Without introducing much extra computation, this better constrains the spatial position of the segmentation results, improving the boundary segmentation accuracy of each target object, greatly improving the overall robustness of image segmentation, accelerating model optimization, and improving the segmentation performance and generalization ability of the image segmentation model.
Segmenting the second medical image set using the first image segmentation model and the second image segmentation model separately, to obtain the second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and the third segmentation result as segmented by the second image segmentation model, includes: adding first random noise to the second medical image set and inputting it into the first image segmentation model to obtain the second segmentation result, and adding second random noise to the second medical image set and inputting it into the second image segmentation model to obtain the third segmentation result. The second loss function includes a segmentation consistency loss function; determining the second loss function of the first image segmentation model based on the second and third segmentation results includes: analyzing the difference information between the second segmentation result and the third segmentation result to obtain the segmentation consistency loss function of the first image segmentation model.
Therefore, under the semi-supervised training strategy, the unlabeled second medical image set is input separately into the first image segmentation model acting as the "student" model and the second image segmentation model acting as the "teacher" model, yielding the corresponding second and third segmentation results. By analyzing the difference information between them, the segmentation consistency loss function of the first and second image segmentation models is obtained, through which the image segmentation model can be optimized to generate more stable segmentation results; that is, a large amount of unlabeled medical image data is used to improve the robustness of the image segmentation model.
Adjusting the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjusting the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model, includes: propagating the first loss function and the second loss function to the first image segmentation model through a back-propagation algorithm to adjust the network parameters of the first image segmentation model; and taking a sliding average of the adjusted network parameters of the first image segmentation model to obtain the adjusted network parameters of the second image segmentation model.
Therefore, propagating the first and second loss functions to the first image segmentation model through back-propagation adjusts and optimizes its network parameters, and taking a sliding average of the adjusted parameters yields the adjusted network parameters of the second image segmentation model, so that the second model is optimized as well. By fully exploiting the first medical image set labeled with at least one target object and a large unlabeled second medical image set, the first image segmentation model as the "student" model and the second image segmentation model as the "teacher" model are jointly optimized and improved, improving the robustness and segmentation performance of the image segmentation model.
To solve the above problem, a second aspect of this application provides an image segmentation method, including: acquiring a medical image to be segmented; and segmenting the medical image to be segmented using a first image segmentation model and/or a second image segmentation model to obtain a segmentation result corresponding to the medical image to be segmented, where the first and second image segmentation models are obtained using the training method of the image segmentation model of the first aspect.
To solve the above problem, a third aspect of this application provides a training device for an image segmentation model, including: a sample acquisition module, configured to acquire medical image samples, where the medical image samples include a first medical image set labeled with at least one target object and an unlabeled second medical image set; an image segmentation module, configured to segment the first medical image set using a first image segmentation model to obtain a first segmentation result corresponding to the first medical image set, and to segment the second medical image set using the first image segmentation model and a second image segmentation model separately to obtain a second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and a third segmentation result as segmented by the second image segmentation model, where the second image segmentation model is obtained by taking a sliding average of the network parameters of the first image segmentation model; a loss function determination module, configured to determine a first loss function of the first image segmentation model based on the first segmentation result, and to determine a second loss function of the first image segmentation model based on the second and third segmentation results; and a parameter adjustment module, configured to adjust the network parameters of the first image segmentation model using the first and second loss functions, and to adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
To solve the above problem, a fourth aspect of this application provides an image segmentation device, including: an image acquisition module, configured to acquire a medical image to be segmented; and an image segmentation module, configured to segment the medical image to be segmented using a first image segmentation model and/or a second image segmentation model to obtain a segmentation result corresponding to the medical image to be segmented, where the first and second image segmentation models are obtained using the training method of the image segmentation model of the first aspect.
To solve the above problem, a fifth aspect of this application provides an electronic device, including a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the training method of the image segmentation model of the first aspect, or the image segmentation method of the second aspect.
To solve the above problem, a sixth aspect of this application provides a computer-readable storage medium storing program instructions which, when executed by a processor, implement the training method of the image segmentation model of the first aspect, or the image segmentation method of the second aspect.
This application further provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes it to implement the above methods.
In the above scheme, by acquiring medical image samples that include a first medical image set labeled with at least one target object and an unlabeled second medical image set, and segmenting the first medical image set with the first image segmentation model, a first segmentation result corresponding to the first medical image set is obtained, from which the first loss function of the first image segmentation model can be determined. In addition, segmenting the second medical image set with the first and second image segmentation models separately yields the corresponding second and third segmentation results. Since the second image segmentation model is obtained by taking a sliding average of the network parameters of the first image segmentation model, the second loss function of the first image segmentation model can be determined based on the second and third segmentation results. The labeled first medical image set and the unlabeled second medical image set are thus combined and jointly trained in a semi-supervised manner, yielding highly robust target-object segmentation results.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an embodiment of the training method of the image segmentation model of the present application;
FIG. 2 is a schematic flowchart of another embodiment of the training method of the image segmentation model of the present application;
FIG. 3a is a schematic flowchart of an embodiment of step S24 in FIG. 2;
FIG. 3b is a schematic diagram of the segmentation model in an application scenario of the training method of the image segmentation model of the present application;
FIG. 4 is a schematic flowchart of an embodiment of the image segmentation method of the present application;
FIG. 5 is a schematic frame diagram of an embodiment of the training device of the image segmentation model of the present application;
FIG. 6 is a schematic frame diagram of an embodiment of the image segmentation device of the present application;
FIG. 7 is a schematic frame diagram of an embodiment of the electronic device of the present application;
FIG. 8 is a schematic frame diagram of an embodiment of the computer-readable storage medium of the present application.
Detailed Description
The solutions of the embodiments of this application are described in detail below with reference to the accompanying drawings.
In the following description, specific details such as particular system structures, interfaces, and techniques are set forth for the purpose of illustration rather than limitation, in order to provide a thorough understanding of this application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Furthermore, "multiple" herein means two or more.
The method steps of this application may be executed by hardware, or by a processor running computer-executable code.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the training method of the image segmentation model of the present application. Specifically, the training method is applied to a first image segmentation model and a second image segmentation model, the second image segmentation model being obtained by taking a sliding average of the network parameters of the first. The first and second image segmentation models are used to segment target objects in medical images; the medical image may be a three-dimensional abdominal image, and the corresponding target objects include but are not limited to the liver, left kidney, right kidney, spleen, inferior vena cava, aorta, stomach, and gallbladder. Through the first and second image segmentation models, the multiple organs in the three-dimensional abdominal image can be segmented and labeled separately. The training method of the image segmentation model may include the following steps:
Step S11: Obtain medical image samples, where the medical image samples include a first medical image set labeled with at least one target object and an unlabeled second medical image set.
Specifically, the first and second image segmentation models are pre-constructed, and the medical image samples are used to train them. Taking three-dimensional abdominal images as an example, the first medical image set contains both first abdominal images and second abdominal images, while the second medical image set contains only second abdominal images, where the first abdominal images all carry annotations and the second abdominal images carry none.
In one implementation scenario, step S11 includes: performing balancing processing on each collected medical image set so that the proportions of the medical images of each phase in the first medical image set are the same, and/or the proportions of the medical images of each phase in the second medical image set are the same. It can be understood that, when acquiring medical images, a boundary regression model can be designed to locate the upper and lower boundaries of the multiple organs; for example, a region of interest around the abdominal organs can be cropped and invalid regions removed, reducing the probability that the medical images interfere with the image segmentation model. Considering that the medical images composing each set come from different data sets with different annotations, scanner parameters, and image quality, balancing each collected set makes the phase proportions the same in the first and second medical image sets; for example, both sets contain equal proportions of plain-scan, arterial-phase, portal-venous-phase, and delayed-phase image data. Training with the balanced medical image samples ensures that the trained image segmentation model is applicable to multi-center medical image data sets with inconsistent annotations and has good generality.
Step S12: Segment the first medical image set using the first image segmentation model to obtain a first segmentation result corresponding to the first medical image set.
Step S13: Determine a first loss function of the first image segmentation model based on the first segmentation result.
It can be understood that segmenting the first medical image set with the first image segmentation model yields the first segmentation result corresponding to the first medical image set. Since the first medical image set contains annotated first abdominal images, the first loss function of the first image segmentation model can be determined by comparing the first segmentation result with the annotated first abdominal images in the first medical image set.
Step S14: Segment the second medical image set using the first image segmentation model and the second image segmentation model separately, to obtain a second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model, and a third segmentation result corresponding to the second medical image set as segmented by the second image segmentation model.
Step S15: Determine a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result.
It can be understood that segmenting the second medical image set with the first and second image segmentation models separately yields the second and third segmentation results. Since the second image segmentation model is obtained by taking a sliding average of the network parameters of the first, the second and third segmentation results predicted by the two models are expected to be highly consistent; therefore, the second loss function of the first image segmentation model can be determined by comparing the second and third segmentation results.
Step S16: Using the first loss function and the second loss function, adjust the network parameters of the first image segmentation model, and adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
It can be understood that after the first and second loss functions are obtained, the network parameters of the first image segmentation model can be adjusted according to them to update the first image segmentation model; the updated second image segmentation model is then obtained by taking a sliding average of the updated network parameters of the first image segmentation model.
In addition, during training, the convergence of the first and second loss functions can be monitored: when they converge, the updating of the network parameters can stop; when they do not converge, the number of parameter adjustments can be tracked, and when it reaches a preset number, the final image segmentation model can be determined from the network parameters at that time, reducing the probability that non-convergence of the loss functions impairs training efficiency.
In one embodiment, step S16 specifically includes: propagating the first and second loss functions to the first image segmentation model through a back-propagation algorithm to adjust its network parameters; and taking a sliding average of the adjusted network parameters of the first image segmentation model to obtain the adjusted network parameters of the second image segmentation model. In this way, the first image segmentation model is optimized, and the second image segmentation model is optimized in turn. By fully exploiting the first medical image set labeled with at least one target object and a large unlabeled second medical image set, the first image segmentation model as the "student" model and the second image segmentation model as the "teacher" model are jointly optimized and improved, improving the robustness and segmentation performance of the image segmentation model.
In the above scheme, by acquiring medical image samples that include a first medical image set labeled with at least one target object and an unlabeled second medical image set, and segmenting the first medical image set with the first image segmentation model, a first segmentation result corresponding to the first medical image set is obtained, from which the first loss function of the first image segmentation model can be determined. In addition, segmenting the second medical image set with the first and second image segmentation models separately yields the corresponding second and third segmentation results. Since the second image segmentation model is obtained by taking a sliding average of the network parameters of the first image segmentation model, the second loss function of the first image segmentation model can be determined based on the second and third segmentation results. The labeled first medical image set and the unlabeled second medical image set are thus combined and jointly trained in a semi-supervised manner, yielding highly robust target-object segmentation results.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of another embodiment of the training method of the image segmentation model of the present application. Specifically, the training method may include the following steps:
Step S21: Obtain medical image samples, where the medical image samples include a first medical image set labeled with at least one target object and an unlabeled second medical image set.
Step S22: Generate a loss-function weight for each target object based on the annotation information of the first medical image set and the volume prior information of each target object, where the volume prior information of each target object includes the mean volume of all annotations corresponding to that target object.
Specifically, the first medical image set contains both annotated first abdominal images and unannotated second abdominal images. The annotation information of the first medical image set contains a label value Label_i for each target object, where the label value of an annotated target object is 1, that of an unannotated target object is 0, and i denotes the index of the target object to be segmented. In addition, the average of all annotated volume data of a given target object in the first medical image set is used as that object's volume prior information V_i. Therefore, according to the label value Label_i and the volume prior information V_i of each target object, a loss-function weight W_i specific to each target object in the data of different phases can be generated for computing the loss function, where W_i is calculated by the following formula:
Figure PCTCN2022093353-appb-000001
From formula (1) it can be seen that target objects with larger volumes have relatively lower loss-function weights, while those with smaller volumes have relatively higher weights.
Step S23: Segment the first medical image set using the first image segmentation model to obtain a first segmentation result corresponding to the first medical image set.
In this embodiment, steps S21 and S23 are essentially the same as steps S11 and S12 of the above embodiment and are not repeated here.
Step S24: Calculate the first loss function based on the first segmentation result and the loss-function weight of each target object.
It can be understood that, according to the volume prior information of each target object and the annotation information of the first medical image set, the weight of each target object in the loss function can be balanced and controlled, so that multi-center, partially annotated medical image sets can be trained simultaneously, making the image segmentation model easy to converge.
Referring to FIG. 3a, FIG. 3a is a schematic flowchart of an embodiment of step S24 in FIG. 2. In this embodiment, the first loss function includes a three-dimensional segmentation loss function and a two-dimensional projection boundary loss function, and the second loss function includes a segmentation consistency loss function. Step S24 may specifically include the following steps:
Step S241: Analyze the difference information between the first segmentation result and the annotation information in the first medical image set, and calculate the three-dimensional segmentation loss function of the first image segmentation model according to the loss-function weight of each target object.
Specifically, the three-dimensional segmentation loss function may include mainstream segmentation loss functions for three-dimensional image segmentation such as the Dice coefficient loss function l_dice_3d and the cross-entropy loss function l_cross_entropy_3d. It can be understood that, by analyzing the difference information between the first segmentation result and the annotation information in the first medical image set, and weighting by the loss-function weight of each target object, the three-dimensional segmentation loss function of the first image segmentation model can be calculated in three-dimensional space, giving the image segmentation model high segmentation performance.
Step S242: Project the first segmentation result onto two-dimensional planes, and obtain the segmentation boundary information of each target object according to the projection results.
Step S243: Calculate the boundary loss of each target object on the two-dimensional projections through the segmentation boundary information, and obtain the two-dimensional projection boundary loss function of the first image segmentation model.
Considering that the Dice coefficient loss function l_dice_3d and the cross-entropy loss function l_cross_entropy_3d themselves impose no constraint on the segmentation boundary, the Hausdorff distance, as a measure of shape similarity, is a good complement; however, computing the Hausdorff distance in the three-dimensional segmentation space would consume considerable computational resources. Therefore, the idea of projecting from high dimension to low dimension can be adopted: the three-dimensional segmentation result (that is, the first segmentation result) is projected along three planes (the ZY, ZX, and YX planes) to obtain three two-dimensional projections, on which the segmentation boundary information of each target object is obtained, so that the boundary loss of each target object on the two-dimensional projections can be calculated separately to obtain the two-dimensional projection boundary loss function l_project_2d of the first image segmentation model. Using the Hausdorff distance to define l_project_2d lets the image segmentation model better learn the spatial distribution characteristics of each target object, improving the accuracy of the segmentation boundaries and greatly reducing "isolated islands" in the segmentation.
It can be understood that, by projecting the first segmentation result onto two-dimensional planes, obtaining the segmentation boundary information of each target object from the projections, and computing each object's boundary loss on the two-dimensional projections to obtain the two-dimensional projection boundary loss function of the first image segmentation model, the boundary loss of each target object is computed at the two-dimensional level in addition to the segmentation loss computed in three-dimensional space. Without introducing much extra computation, this better constrains the spatial position of the segmentation results, improving the boundary segmentation accuracy of each target object, greatly improving the overall robustness of image segmentation, accelerating model optimization, and improving the segmentation performance and generalization ability of the image segmentation model.
Step S25: add first random noise to the second medical image set and input it into the first image segmentation model to obtain the second segmentation result; add second random noise to the second medical image set and input it into the second image segmentation model to obtain the third segmentation result.
Step S26: analyze the difference information between the second segmentation result and the third segmentation result to obtain the segmentation consistency loss function of the first image segmentation model.
Thus, under a semi-supervised training strategy, random noise is added to the unannotated second medical image set, which is then fed separately into the first image segmentation model acting as the "student" model and the second image segmentation model acting as the "teacher" model, producing the corresponding second and third segmentation results. By analyzing the difference between the two, the segmentation consistency loss function l_consistency of the first and second image segmentation models is obtained. Optimizing against l_consistency makes the image segmentation model produce more stable segmentation results; in other words, a large amount of unannotated medical image data is used to improve the robustness of the image segmentation model.
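A hedged sketch of the consistency term of steps S25 and S26, assuming additive Gaussian noise and a mean-squared difference between the two models' soft outputs; the patent specifies neither the noise distribution nor the distance measure, and `student`/`teacher` are placeholders for the two segmentation networks.

```python
import numpy as np

def consistency_loss(student, teacher, unlabeled, noise_std=0.1, rng=None):
    """Mean-squared consistency between student and teacher predictions on
    independently noised copies of the unlabeled images.

    student / teacher: callables mapping an image array to soft segmentation
    maps (stand-ins for the first and second image segmentation models).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(unlabeled, dtype=float)
    pred_s = student(x + rng.normal(0.0, noise_std, x.shape))  # first random noise
    pred_t = teacher(x + rng.normal(0.0, noise_std, x.shape))  # second random noise
    return float(((pred_s - pred_t) ** 2).mean())
```

Because the two noise draws are independent, even identical networks incur a small positive loss, which is exactly the perturbation-invariance pressure the semi-supervised scheme exploits.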
Step S27: adjust the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
The first loss function thus obtained consists of the three-dimensional segmentation loss (the Dice loss l_dice_3d and the cross-entropy loss l_cross_entropy_3d) and the two-dimensional projection boundary loss l_project_2d, while the second loss function consists of the segmentation consistency loss l_consistency. The network parameters of the first image segmentation model, acting as the "student" model, are then updated iteratively by gradient descent to minimize the loss, where the loss function Loss of the first image segmentation model satisfies formula (2):
Loss = l_dice_3d + l_cross_entropy_3d + l_consistency + l_project_2d    (2)
In this embodiment, step S27 is substantially similar to step S16 of the foregoing embodiment of the present application and is not repeated here.
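Formula (2) and the student/teacher parameter updates of step S27 can be sketched as follows; the learning rate and the sliding-average decay are illustrative assumptions, as the patent states neither value.

```python
import numpy as np

def total_loss(l_dice_3d, l_cross_entropy_3d, l_consistency, l_project_2d):
    # Formula (2): the four loss terms are summed with equal weight.
    return l_dice_3d + l_cross_entropy_3d + l_consistency + l_project_2d

def sgd_step(params, grads, lr=1e-3):
    # Gradient-descent update of the student model's network parameters.
    return [p - lr * g for p, g in zip(params, grads)]

def ema_update(teacher_params, student_params, decay=0.99):
    # Sliding (exponential moving) average of the student parameters into the
    # teacher, as described for the second image segmentation model.
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```

Each training iteration would call `sgd_step` (or any other optimizer) on the student with gradients of `total_loss`, then `ema_update` to refresh the teacher from the updated student.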
Referring to FIG. 3b, FIG. 3b is a schematic diagram of the segmentation model in one application scenario of the training method for an image segmentation model of the present application. During training, the student model is trained using the volume prior information of each target object and the annotation information of the partially annotated dataset, which balances and controls the weight of each target object in the loss function. The segmentation loss (the Dice loss l_dice_3d and the cross-entropy loss l_cross_entropy_3d) is computed in three-dimensional space, giving the image segmentation model high segmentation performance, while the boundary loss of each target object on the two-dimensional projections (the two-dimensional projection boundary loss l_project_2d) is computed in parallel at the two-dimensional level, better constraining the spatial position of the segmentation result and thereby improving the accuracy of each target object's boundary segmentation and the model's segmentation performance and generalization ability. Moreover, besides the partially annotated dataset, a large unannotated multi-organ dataset is also exploited during training: random noise is added to the unannotated dataset, which is fed separately into the "student" and "teacher" models to obtain the corresponding segmentation results, and analyzing the difference between the two yields the segmentation consistency loss l_consistency, against which the image segmentation model is optimized to produce more stable segmentation results. Combining the self-supervised segmentation-consistency task with the main multi-organ segmentation task during training thus improves the robustness of the model's encoding and decoding and its ability to generalize to unseen datasets.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of an embodiment of the image segmentation method of the present application. Specifically, the method may include the following steps:
Step S41: obtain a medical image to be segmented.
Step S42: segment the medical image to be segmented using the first image segmentation model and/or the second image segmentation model, obtaining a segmentation result corresponding to the medical image to be segmented.
The first image segmentation model and the second image segmentation model are obtained by the training method for an image segmentation model described above.
The image segmentation method of the present application can be used by radiologists during diagnostic reading to observe the three-dimensional morphology of each abdominal organ, for example the smoothness of the three-dimensional mask of the liver surface. It can also be used to automatically compute gray-value relationships between organs from the three-dimensional segmentation result corresponding to the medical image to be segmented; for example, in the plain-scan phase, the ratio of liver parenchyma to spleen parenchyma is instructive for grading fatty liver. In addition, it can be used for three-dimensional visualization of abdominal organs in surgical-planning assistance systems.
Referring to FIG. 5, FIG. 5 is a schematic framework diagram of an embodiment of the training apparatus for an image segmentation model of the present application. The training apparatus 50 includes: a sample acquisition module 500 configured to obtain medical image samples, where the medical image samples include a first medical image set annotated with at least one target object and an unannotated second medical image set; an image segmentation module 502 configured to segment the first medical image set with a first image segmentation model to obtain a corresponding first segmentation result, and to segment the second medical image set separately with the first image segmentation model and a second image segmentation model to obtain a second segmentation result for the second medical image set as segmented by the first image segmentation model and a third segmentation result for the second medical image set as segmented by the second image segmentation model, where the second image segmentation model is obtained by taking a sliding average of the network parameters of the first image segmentation model; a loss function determination module 504 configured to determine a first loss function of the first image segmentation model based on the first segmentation result, and a second loss function of the first image segmentation model based on the second and third segmentation results; and a parameter adjustment module 506 configured to adjust the network parameters of the first image segmentation model using the first and second loss functions, and to adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first.
In the above scheme, by combining the annotated first medical image set with the unannotated second medical image set and training them jointly in a semi-supervised manner, the image segmentation model achieves highly robust target-object segmentation results and the ability to generalize to unseen medical images.
In some embodiments, the sample acquisition module 500 is specifically configured to balance each collected medical image set so that the proportions of medical images of each phase in the first medical image set are the same, and/or the proportions of medical images of each phase in the second medical image set are the same.
In some embodiments, the image segmentation module 502 is further configured to generate a loss-function weight for each target object based on the annotation information of the first medical image set and each target object's volume prior information, where the volume prior information of each target object includes the mean volume of all annotations corresponding to that object. In this case, the loss function determination module 504 is specifically configured to compute the first loss function from the first segmentation result and the loss-function weight of each target object.
In some embodiments, the first loss function includes a three-dimensional segmentation loss function, and the loss function determination module 504 is specifically configured to analyze the difference information between the first segmentation result and the annotation information of the first medical image set, and to compute the three-dimensional segmentation loss function of the first image segmentation model according to each target object's loss-function weight.
In some embodiments, the first loss function further includes a two-dimensional projection boundary loss function, and the loss function determination module 504 is specifically configured to project the first segmentation result onto two-dimensional planes, obtain the segmentation boundary information of each target object from the projection results, and compute each object's boundary loss on the two-dimensional projections to obtain the two-dimensional projection boundary loss function of the first image segmentation model.
In some embodiments, the image segmentation module 502 is specifically configured to add first random noise to the second medical image set before inputting it into the first image segmentation model to obtain the second segmentation result, and to add second random noise to the second medical image set before inputting it into the second image segmentation model to obtain the third segmentation result. The second loss function includes a segmentation consistency loss function, and the loss function determination module 504 is specifically configured to analyze the difference information between the second and third segmentation results to obtain the segmentation consistency loss function of the first image segmentation model.
In some embodiments, the parameter adjustment module 506 is specifically configured to propagate the first and second loss functions back to the first image segmentation model via a back-propagation algorithm to adjust its network parameters, and to take a sliding average of the adjusted network parameters of the first image segmentation model to obtain the adjusted network parameters of the second image segmentation model.
Referring to FIG. 6, FIG. 6 is a schematic framework diagram of an embodiment of the image segmentation apparatus of the present application. The image segmentation apparatus 60 includes: an image acquisition module 600 configured to obtain a medical image to be segmented; and an image segmentation module 602 configured to segment the medical image to be segmented using the first image segmentation model and/or the second image segmentation model to obtain a corresponding segmentation result, where the first and second image segmentation models are obtained by the training method for an image segmentation model described above.
In the above scheme, since the first and second image segmentation models are jointly trained in a semi-supervised manner on the combination of the annotated first medical image set and the unannotated second medical image set, both models achieve highly robust target-object segmentation results and generalization to unseen medical images; segmenting the medical image to be segmented with the first and/or second image segmentation model therefore yields an accurate segmentation result for that image.
Referring to FIG. 7, FIG. 7 is a schematic framework diagram of an embodiment of the electronic device of the present application. The electronic device 70 includes a memory 71 and a processor 72 coupled to each other; the processor 72 is configured to execute program instructions stored in the memory 71 to implement the steps of any of the above embodiments of the training method for an image segmentation model, or of any of the above embodiments of the image segmentation method. In one specific implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer or a server.
Specifically, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the above embodiments of the training method for an image segmentation model, or of any of the above embodiments of the image segmentation method. The processor 72 may also be called a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip with signal processing capability. The processor 72 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 72 may be implemented jointly by multiple integrated circuit chips.
In the above scheme, by combining the annotated first medical image set with the unannotated second medical image set and training them jointly in a semi-supervised manner, the processor enables the image segmentation model to achieve highly robust target-object segmentation results and generalization to unseen medical images.
Referring to FIG. 8, FIG. 8 is a schematic framework diagram of an embodiment of the computer-readable storage medium of the present application. The computer-readable storage medium 80 stores program instructions 800 executable by a processor, the program instructions 800 being used to implement the steps of any of the above embodiments of the training method for an image segmentation model, or of any of the above embodiments of the image segmentation method.
The present application provides a computer program product including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor of the electronic device executes the above methods.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other ways. For example, the apparatus implementations described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example, units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this implementation.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (13)

  1. A training method for an image segmentation model, applied to a first image segmentation model and a second image segmentation model, the second image segmentation model being obtained by taking a sliding average of network parameters of the first image segmentation model, the training method comprising:
    obtaining medical image samples, wherein the medical image samples comprise a first medical image set annotated with at least one target object and an unannotated second medical image set;
    segmenting the first medical image set with the first image segmentation model to obtain a first segmentation result corresponding to the first medical image set;
    determining a first loss function of the first image segmentation model based on the first segmentation result;
    segmenting the second medical image set separately with the first image segmentation model and the second image segmentation model, to obtain a second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and a third segmentation result corresponding to the second medical image set as segmented by the second image segmentation model;
    determining a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result; and
    adjusting the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjusting the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
  2. The training method for an image segmentation model according to claim 1, wherein the obtaining medical image samples comprises:
    balancing one or more collected medical image sets so that the proportions of medical images of one or more phases in the first medical image set are the same, and/or the proportions of medical images of one or more phases in the second medical image set are the same.
  3. The training method for an image segmentation model according to claim 1, wherein before the determining a first loss function of the first image segmentation model based on the first segmentation result, the method comprises:
    generating loss-function weights for one or more target objects based on annotation information of the first medical image set and volume prior information of the one or more target objects, wherein the volume prior information of a target object comprises a mean volume of at least some annotations corresponding to that target object;
    and the determining a first loss function of the first image segmentation model based on the first segmentation result comprises:
    computing the first loss function based on the first segmentation result and the loss-function weights of the one or more target objects.
  4. The training method for an image segmentation model according to claim 3, wherein the first loss function comprises a three-dimensional segmentation loss function, and the computing the first loss function based on the first segmentation result and the loss-function weights of the one or more target objects comprises:
    analyzing difference information between the first segmentation result and the annotation information in the first medical image set, and computing the three-dimensional segmentation loss function of the first image segmentation model according to the loss-function weights of the one or more target objects.
  5. The training method for an image segmentation model according to claim 4, wherein the first loss function further comprises a two-dimensional projection boundary loss function, and the computing the first loss function based on the first segmentation result and the loss-function weights of the one or more target objects further comprises:
    projecting the first segmentation result onto two-dimensional planes, and obtaining segmentation boundary information of one or more target objects from the projection results; and
    computing, from the segmentation boundary information, the boundary loss of the one or more target objects on the two-dimensional projections, to obtain the two-dimensional projection boundary loss function of the first image segmentation model.
  6. The training method for an image segmentation model according to claim 1, wherein the segmenting the second medical image set separately with the first image segmentation model and the second image segmentation model, to obtain the second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and the third segmentation result corresponding to the second medical image set as segmented by the second image segmentation model, comprises:
    adding first random noise to the second medical image set before inputting it into the first image segmentation model to obtain the second segmentation result, and adding second random noise to the second medical image set before inputting it into the second image segmentation model to obtain the third segmentation result;
    wherein the second loss function comprises a segmentation consistency loss function, and the determining a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result comprises:
    analyzing difference information between the second segmentation result and the third segmentation result, to obtain the segmentation consistency loss function of the first image segmentation model.
  7. The training method for an image segmentation model according to claim 1, wherein the adjusting the network parameters of the first image segmentation model using the first loss function and the second loss function, and adjusting the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model, comprises:
    propagating the first loss function and the second loss function back to the first image segmentation model via a back-propagation algorithm, to adjust the network parameters of the first image segmentation model; and
    taking a sliding average of the adjusted network parameters of the first image segmentation model, to obtain the adjusted network parameters of the second image segmentation model.
  8. An image segmentation method, comprising:
    obtaining a medical image to be segmented; and
    segmenting the medical image to be segmented using a first image segmentation model and/or a second image segmentation model, to obtain a segmentation result corresponding to the medical image to be segmented;
    wherein the first image segmentation model and the second image segmentation model are obtained by the training method for an image segmentation model according to any one of claims 1 to 7.
  9. A training apparatus for an image segmentation model, comprising:
    a sample acquisition module configured to obtain medical image samples, wherein the medical image samples comprise a first medical image set annotated with at least one target object and an unannotated second medical image set;
    an image segmentation module configured to segment the first medical image set with a first image segmentation model to obtain a first segmentation result corresponding to the first medical image set, and to segment the second medical image set separately with the first image segmentation model and a second image segmentation model to obtain a second segmentation result corresponding to the second medical image set as segmented by the first image segmentation model and a third segmentation result corresponding to the second medical image set as segmented by the second image segmentation model, wherein the second image segmentation model is obtained by taking a sliding average of network parameters of the first image segmentation model;
    a loss function determination module configured to determine a first loss function of the first image segmentation model based on the first segmentation result, and to determine a second loss function of the first image segmentation model based on the second segmentation result and the third segmentation result; and
    a parameter adjustment module configured to adjust the network parameters of the first image segmentation model using the first loss function and the second loss function, and to adjust the network parameters of the second image segmentation model based on the adjusted network parameters of the first image segmentation model.
  10. An image segmentation apparatus, comprising:
    an image acquisition module configured to obtain a medical image to be segmented; and
    an image segmentation module configured to segment the medical image to be segmented using a first image segmentation model and/or a second image segmentation model, to obtain a segmentation result corresponding to the medical image to be segmented;
    wherein the first image segmentation model and the second image segmentation model are obtained by the training method for an image segmentation model according to any one of claims 1 to 7.
  11. An electronic device, comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory, to implement the training method for an image segmentation model according to any one of claims 1 to 7, or the image segmentation method according to claim 8.
  12. A computer-readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the training method for an image segmentation model according to any one of claims 1 to 7, or the image segmentation method according to claim 8.
  13. A computer program product comprising computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in a processor of an electronic device, the processor of the electronic device is configured to implement the method according to any one of claims 1 to 7.
PCT/CN2022/093353 2021-11-23 2022-05-17 Image segmentation method, training method for model thereof, related apparatus, and electronic device WO2023092959A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111395877.1 2021-11-23
CN202111395877.1A CN114049344A (zh) 2021-11-23 2021-11-23 Image segmentation method, training method for model thereof, related apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2023092959A1 true WO2023092959A1 (zh) 2023-06-01

Family

ID=80211259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093353 WO2023092959A1 (zh) 2021-11-23 2022-05-17 图像分割方法及其模型的训练方法及相关装置、电子设备

Country Status (2)

Country Link
CN (1) CN114049344A (zh)
WO (1) WO2023092959A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049344A (zh) * 2021-11-23 2022-02-15 上海商汤智能科技有限公司 图像分割方法及其模型的训练方法及相关装置、电子设备
CN115471662B (zh) * 2022-11-03 2023-05-02 Shenzhen MicroBT Electronics Technology Co., Ltd. Training method for semantic segmentation model, recognition method, apparatus, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564134A (zh) * 2018-04-27 2018-09-21 NetEase (Hangzhou) Network Co., Ltd. Data processing method and apparatus, computing device, and medium
CN110598504A (zh) * 2018-06-12 2019-12-20 Beijing SenseTime Technology Development Co., Ltd. Image recognition method and apparatus, electronic device, and storage medium
CN110956635A (zh) * 2019-11-15 2020-04-03 Shanghai United Imaging Intelligence Co., Ltd. Lung segment segmentation method, apparatus, device, and storage medium
CN111539947A (zh) * 2020-04-30 2020-08-14 Shanghai SenseTime Intelligent Technology Co., Ltd. Image detection method, training method of related model, and related apparatus and device
US20210038198A1 (en) * 2019-08-07 2021-02-11 Siemens Healthcare Gmbh Shape-based generative adversarial network for segmentation in medical imaging
CN113313697A (zh) * 2021-06-08 2021-08-27 Qingdao SenseTime Technology Co., Ltd. Image segmentation and classification method, model training method therefor, related apparatus and medium
CN113538480A (zh) * 2020-12-15 2021-10-22 Tencent Technology (Shenzhen) Co., Ltd. Image segmentation processing method and apparatus, computer device, and storage medium
CN114049344A (zh) * 2021-11-23 2022-02-15 Shanghai SenseTime Intelligent Technology Co., Ltd. Image segmentation method, training method for model thereof, related apparatus, and electronic device


Also Published As

Publication number Publication date
CN114049344A (zh) 2022-02-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897058

Country of ref document: EP

Kind code of ref document: A1