WO2023071154A1 - Image segmentation method, related model training method, apparatus, and device - Google Patents

Image segmentation method, related model training method, apparatus, and device

Info

Publication number
WO2023071154A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
segmentation
sample
viewing angle
Prior art date
Application number
PCT/CN2022/093458
Other languages
English (en)
French (fr)
Inventor
王娜
刘星龙
黄宁
陈翼男
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023071154A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 - Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30101 - Blood vessel; Artery; Vein; Vascular

Definitions

  • The present application relates to the technical field of image processing, and in particular to an image segmentation method, a related model training method, and corresponding devices, equipment, storage media, and computer program products.
  • Blood vessel segmentation is currently a popular topic in medical image processing.
  • With accurate segmentation results, doctors can quickly understand the condition of the blood vessels and perform corresponding simulated operations.
  • The segmentation results can assist doctors in preoperative planning and simulated surgery, which helps reduce intraoperative risk and improve the success rate of surgery.
  • The present application provides at least an image segmentation method, a related model training method, and a corresponding device and equipment.
  • The first aspect of the present application provides a training method for an image segmentation model. The method includes: acquiring a plurality of sample perspective images extracted from a sample medical image from multiple perspectives, wherein the sample medical image contains blood vessels; performing image segmentation on each sample perspective image using the image segmentation model to obtain a blood vessel segmentation result related to the sample medical image; and adjusting network parameters of the image segmentation model based on the blood vessel segmentation result.
  • In this way, the trained image segmentation model can use the image information of sample perspective images from different perspectives for blood vessel segmentation in subsequent applications, which helps to improve the accuracy of blood vessel segmentation.
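As an illustration of this training loop, here is a minimal PyTorch-style sketch of one training step; the model API, the per-view extraction, and the loss choice are assumptions for illustration, not details fixed by the application:

```python
import torch
import torch.nn.functional as F

def extract_view_image(volume: torch.Tensor, dim: int) -> torch.Tensor:
    # Hypothetical per-view extraction: rotate the volume so the chosen
    # viewing axis comes first (the application extracts and stitches
    # sub-images instead; see the sliding-window sketch further below).
    return volume.movedim(dim, 0)

def train_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
               volume: torch.Tensor, label: torch.Tensor) -> float:
    # One sample perspective image per viewing angle (e.g. axial/sagittal/coronal).
    views = [extract_view_image(volume, d) for d in range(3)]
    # Assumed API: the model segments every per-view image and returns a
    # blood vessel segmentation result related to the sample medical image.
    pred = model(views)
    # Adjust network parameters based on the blood vessel segmentation result.
    loss = F.cross_entropy(pred, label)  # classes: artery / vein / background
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```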
  • In one embodiment, the image segmentation model includes a plurality of segmentation sub-networks respectively corresponding to the multiple viewing angles, and a fusion sub-network. Performing image segmentation on each sample perspective image using the image segmentation model to obtain the blood vessel segmentation result related to the sample medical image includes: for each viewing angle, using the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the sample perspective image corresponding to the viewing angle, to obtain a first blood vessel segmentation result corresponding to each viewing angle; and using the fusion sub-network to fuse the first blood vessel segmentation results, to obtain a second blood vessel segmentation result of the sample medical image.
  • Adjusting the network parameters of the image segmentation model based on the blood vessel segmentation result includes at least one of the following steps: for each viewing angle, adjusting the parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation result corresponding to the viewing angle and local blood vessel segmentation annotation information corresponding to the viewing angle; and adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the second blood vessel segmentation result and global blood vessel segmentation annotation information of the sample medical image.
  • In this way, by using the per-view segmentation sub-networks to segment the sample perspective images and using the fusion sub-network to fuse the first blood vessel segmentation results corresponding to the viewing angles, the image segmentation model can segment blood vessels based on image information from different viewing angles, and the training of both the segmentation sub-networks and the fusion sub-network can be realized.
  • In one embodiment, the segmentation sub-network includes a sequentially connected feature processing layer, attention layer, and prediction layer, and adjusting the parameters of the segmentation sub-network corresponding to the viewing angle includes adjusting the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer.
  • In one embodiment, using the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the sample perspective image corresponding to the viewing angle to obtain the first blood vessel segmentation result corresponding to each viewing angle includes: using the feature processing layer to perform feature extraction on the sample perspective image corresponding to the viewing angle, to obtain a sample feature map corresponding to the viewing angle; using the attention layer to process the sample feature map corresponding to the viewing angle, to obtain a region prediction result corresponding to the viewing angle, where the region prediction result is used to represent the position of a preset region in the sample perspective image corresponding to the viewing angle; and using the prediction layer to predict the first blood vessel segmentation result corresponding to each viewing angle based on the region prediction result corresponding to each viewing angle.
  • In this way, the segmentation sub-network can pay more attention to the image information near the preset region during subsequent image segmentation, which improves the sensitivity of the segmentation sub-network to blood vessel feature information and in turn helps to improve the accuracy of blood vessel segmentation.
  • In one embodiment, the local blood vessel segmentation annotation information includes first annotation information indicating whether each first image point of the sample perspective image belongs to a preset category and second annotation information indicating whether the first image point belongs to the preset region, where the preset category includes at least one blood vessel category and a non-vessel category. Adjusting the parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation result corresponding to the viewing angle and the local blood vessel segmentation annotation information includes at least one of the following steps: adjusting at least the parameters of the attention layer based on the region prediction result corresponding to the viewing angle and the second annotation information corresponding to the viewing angle; and adjusting the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer based on the first blood vessel segmentation result corresponding to the viewing angle and the first annotation information corresponding to the viewing angle.
  • In this way, training of at least one of the feature processing layer, the attention layer, and the prediction layer can be implemented based on the first blood vessel segmentation result corresponding to each viewing angle and the first annotation information corresponding to the viewing angle.
  • In one embodiment, the segmentation sub-network includes at least one processing unit and a prediction layer connected in sequence. Each processing unit includes a feature processing layer, at least some of the processing units further include an attention layer connected after the feature processing layer, and the prediction layer obtains the first blood vessel segmentation result based on the region prediction result output by at least one attention layer. The parameters of each attention layer are adjusted based on the region prediction results corresponding to all attention layers and the second annotation information corresponding to the viewing angle.
  • Specifically, adjusting at least the parameters of the attention layer includes: using the difference between the region prediction result output by each attention layer and the second annotation information corresponding to the viewing angle to obtain a first loss value for each attention layer; fusing the first loss values of the attention layers to obtain a second loss value; and adjusting the parameters of each attention layer based on the second loss value.
  • In one embodiment, the first loss value is determined using a regularized loss function. Using the difference between the region prediction result output by each attention layer and the second annotation information corresponding to the viewing angle to obtain the first loss value of each attention layer includes: using the difference corresponding to each attention layer and at least one structural weight to obtain the first loss value of that attention layer, where the at least one structural weight is the weight of the attention layer and/or the weight of the segmentation sub-network where the attention layer is located. Fusing the first loss values of the attention layers to obtain the second loss value includes: weighting the first loss value of each attention layer by the loss weight of that attention layer to obtain the second loss value.
  • In this way, the feature extraction ability of the attention layer for the vessel region can be enhanced, and the obtained second loss value can be made more reasonable.
  • In one embodiment, the fusion sub-network includes a weight determination layer and a fusion output layer, and adjusting the parameters of the fusion sub-network includes adjusting the parameters of the weight determination layer and/or the fusion output layer. Fusing the first blood vessel segmentation results to obtain the second blood vessel segmentation result of the sample medical image includes: using the weight determination layer to process the first blood vessel segmentation results corresponding to the multiple viewing angles, to obtain fusion weight information corresponding to each viewing angle; and using the fusion output layer to fuse the first blood vessel segmentation results corresponding to the multiple viewing angles based on the fusion weight information corresponding to each viewing angle, to obtain the second blood vessel segmentation result of the sample medical image.
  • Because the fusion weight information obtained by the weight determination layer combines the information of the first blood vessel segmentation results corresponding to the multiple viewing angles, the fusion sub-network can output different fusion weight information for different first blood vessel segmentation results, realizing a soft fusion of the per-view first blood vessel segmentation results. The fusion sub-network can thus use the image information of sample perspective images from different viewing angles for blood vessel segmentation, which helps to improve the accuracy of blood vessel segmentation, and using the fusion weight information for blood vessel segmentation can reduce the misclassification of blood vessel branches.
  • In one embodiment, the global blood vessel segmentation annotation information includes third annotation information indicating whether each second image point of the sample medical image belongs to a preset category, the second blood vessel segmentation result includes prediction information indicating whether each second image point belongs to the preset category, and the preset category includes at least one blood vessel category and a non-vessel category. Adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the second blood vessel segmentation result and the global blood vessel segmentation annotation information of the sample medical image includes: determining a position weight for each second image point based on the positional relationship between the second image point and a preset region of the blood vessel in the sample medical image; obtaining a third loss value for each second image point based on the prediction information and the third annotation information corresponding to the second image point; weighting the third loss value of each second image point by its position weight to obtain a fourth loss value; and adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the fourth loss value.
  • In this way, the network can pay more attention during training to second image points with large position weights, thereby improving the accuracy of blood vessel segmentation in the regions where those image points are located.
  • In one embodiment, determining the position weight of each second image point based on the positional relationship between the second image point and the preset region of the blood vessel in the sample medical image includes: determining a reference distance for each second image point, where the reference distance of a second image point belonging to the blood vessel category is the distance between the second image point and the preset region of the blood vessel in the sample medical image, and the reference distance of a second image point belonging to the non-vessel category is a preset distance value; and determining the position weight of each second image point based on its reference distance.
  • In this way, the position weight of each second image point can be determined from its reference distance, so that the position weight reflects the distance characteristic of the reference distance.
  • In one embodiment, the global blood vessel segmentation annotation information further includes fourth annotation information indicating whether each second image point belongs to the preset region of the blood vessel. Before determining the reference distance of each second image point, the training method of the image segmentation model further includes: using the fourth annotation information to determine the position of the preset region in the sample medical image; and using the second blood vessel segmentation result or the third annotation information to determine whether each second image point in the sample medical image belongs to the blood vessel category or the non-vessel category.
  • In this way, the reference distance, and hence the position weight, of each second image point can subsequently be determined from the position of the image point relative to the preset region.
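The following sketch shows one plausible way to obtain such reference distances and position weights with a Euclidean distance transform; the weighting rule (1 + distance) and the default preset distance are illustrative assumptions, not values specified by the application:

```python
import numpy as np
from scipy import ndimage

def position_weights(vessel_mask: np.ndarray, centerline_mask: np.ndarray,
                     preset_distance: float = 1.0) -> np.ndarray:
    """Per-voxel position weights from reference distances to the preset region.

    vessel_mask: boolean volume, True where a second image point belongs to
        a blood vessel category (from the third annotation information or
        the second blood vessel segmentation result).
    centerline_mask: boolean volume, True on the preset region (centerline),
        from the fourth annotation information.
    """
    # Distance from every voxel to the nearest centerline voxel.
    dist = ndimage.distance_transform_edt(~centerline_mask)
    # Reference distance: actual distance for vessel-category points,
    # a preset distance value for non-vessel points.
    ref = np.where(vessel_mask, dist, preset_distance)
    # Assumed rule: larger reference distance, larger position weight.
    return 1.0 + ref
```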
  • In one embodiment, the preset region is the centerline of the blood vessel, and/or the at least one blood vessel category includes at least one of arteries and veins.
  • Since the area near the centerline of a blood vessel is also a vessel area, the segmentation sub-network can pay more attention to the area near the centerline when segmenting the sample perspective image, which helps to improve the accuracy of blood vessel segmentation, and the image segmentation model can be used to segment arteries and veins.
  • In one embodiment, the sample medical image is a three-dimensional image obtained by scanning an organ, and the multiple viewing angles include two or more of the transverse, sagittal, and coronal viewing angles. Obtaining the plurality of sample perspective images by image extraction includes: for each viewing angle, extracting several sub-sample images from the sample medical image along the viewing angle, and splicing the several sub-sample images of the viewing angle to obtain the sample perspective image corresponding to the viewing angle.
  • In this way, the image information corresponding to the different viewing angles can be obtained, and the image segmentation model can subsequently perform blood vessel segmentation based on the image information of different viewing angles, which helps to improve the accuracy of blood vessel segmentation.
  • The second aspect of the present application provides an image segmentation method. The method includes: acquiring a plurality of target perspective images extracted from a target medical image from multiple perspectives, wherein the target medical image contains blood vessels; and performing image segmentation on each target perspective image using an image segmentation model to obtain a blood vessel segmentation result related to the target medical image.
  • In this way, the image segmentation model can use the image information of the target perspective images from the multiple perspectives to perform blood vessel segmentation, which helps to improve the segmentation accuracy of the image segmentation model.
  • In one embodiment, the image segmentation model includes a plurality of segmentation sub-networks respectively corresponding to the multiple viewing angles, and a fusion sub-network. Performing image segmentation on each target perspective image using the image segmentation model to obtain the blood vessel segmentation result related to the target medical image includes: for each viewing angle, using the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the target perspective image corresponding to the viewing angle, to obtain a first blood vessel segmentation result corresponding to each viewing angle; and fusing the first blood vessel segmentation results to obtain a second blood vessel segmentation result of the target medical image.
  • In this way, the fusion sub-network can use the prediction information of the first blood vessel segmentation results from the multiple viewing angles, which helps to improve the segmentation accuracy of the image segmentation model.
  • In one embodiment, using the segmentation sub-network corresponding to the viewing angle to segment the target perspective image corresponding to the viewing angle to obtain the first blood vessel segmentation result includes: performing feature extraction on the target perspective image corresponding to the viewing angle to obtain a feature map corresponding to the viewing angle; processing the feature map corresponding to the viewing angle to obtain a region prediction result corresponding to the viewing angle, where the region prediction result is used to represent the position of a preset region in the target perspective image corresponding to the viewing angle; and predicting the first blood vessel segmentation result corresponding to the viewing angle based on the region prediction result corresponding to the viewing angle.
  • Using the fusion sub-network to fuse the first blood vessel segmentation results corresponding to the viewing angles to obtain the second blood vessel segmentation result of the target medical image includes: obtaining fusion weight information corresponding to each viewing angle based on the first blood vessel segmentation results corresponding to the multiple viewing angles, and fusing the first blood vessel segmentation results based on the fusion weight information corresponding to each viewing angle.
  • Because the fusion weight information combines the information of the first blood vessel segmentation results corresponding to the multiple viewing angles, the fusion sub-network can output different fusion weight information for different first blood vessel segmentation results, realizing a soft fusion of the per-view first blood vessel segmentation results, which helps to improve the accuracy of blood vessel segmentation. The fusion sub-network can thus use the image information of the target perspective images from different viewing angles to perform blood vessel segmentation.
  • In one embodiment, the processing of the feature map corresponding to the viewing angle to obtain the region prediction result is performed by the attention layer of the segmentation sub-network; and/or the preset region is the centerline of the blood vessel; and/or the region prediction result includes probability information that each first image point in the target perspective image belongs to the preset region.
  • In this way, the segmentation sub-network can pay more attention to the feature information of the preset region. Since the area near the centerline of a blood vessel is also a vessel area, setting the preset region to the centerline of the blood vessel lets the segmentation sub-network pay more attention to the area near the centerline when segmenting the target perspective image, which helps to improve the accuracy of blood vessel segmentation.
  • In one embodiment, the first blood vessel segmentation result corresponding to a viewing angle includes first prediction information indicating whether each first image point in the target perspective image corresponding to the viewing angle belongs to the preset category, and the second blood vessel segmentation result includes second prediction information indicating whether each second image point in the target medical image belongs to the preset category. Using the first blood vessel segmentation results corresponding to the multiple viewing angles to obtain the fusion weight information corresponding to each viewing angle includes: for each viewing angle, obtaining the fusion weight of each first image point corresponding to the viewing angle based on the first blood vessel segmentation result of the viewing angle. Fusing the first blood vessel segmentation results corresponding to the multiple viewing angles based on the fusion weight information corresponding to each viewing angle to obtain the second blood vessel segmentation result of the target medical image includes: for each first image point, fusing the first prediction information of the corresponding first image points of the viewing angles according to their fusion weights, to obtain the second prediction information of the corresponding second image point.
  • In this way, the fusion sub-network can use the image information of the target perspective images from different viewing angles to perform blood vessel segmentation, which helps to improve the accuracy of blood vessel segmentation; and because the fusion weight information combines the information of the first blood vessel segmentation results corresponding to the multiple viewing angles, using the fusion weight information for blood vessel segmentation can reduce the misclassification of blood vessel branches.
  • In one embodiment, the image segmentation model is obtained through training using the training method of the image segmentation model described in the first aspect.
  • In this way, the accuracy of blood vessel segmentation is higher when the trained image segmentation model is used for blood vessel segmentation.
  • In one embodiment, the target medical image is a three-dimensional image obtained by scanning an organ, and the multiple viewing angles include two or more of the transverse, sagittal, and coronal viewing angles. Obtaining the multiple target perspective images by image extraction includes: for each viewing angle, extracting several sub-target images from the target medical image along the viewing angle, and splicing the several sub-target images of the viewing angle to obtain the target perspective image corresponding to the viewing angle.
  • In this way, the image information corresponding to the different viewing angles can be obtained, and the image segmentation model can subsequently perform blood vessel segmentation based on the image information of different viewing angles, which helps to improve the accuracy of blood vessel segmentation.
  • The third aspect of the present application provides a training device for an image segmentation model. The training device includes an acquisition module, an image segmentation module, and a parameter adjustment module. The acquisition module is used to acquire multiple sample perspective images extracted from a sample medical image from multiple viewing angles, wherein the sample medical image contains blood vessels; the image segmentation module is used to perform image segmentation on each sample perspective image using the image segmentation model, to obtain a blood vessel segmentation result related to the sample medical image; and the parameter adjustment module is used to adjust the network parameters of the image segmentation model based on the blood vessel segmentation result.
  • The fourth aspect of the present application provides an image segmentation device. The image segmentation device includes an acquisition module and an image segmentation module. The acquisition module is used to acquire a plurality of target perspective images extracted from a target medical image from multiple viewing angles, wherein the target medical image contains blood vessels; the image segmentation module is used to perform image segmentation on each target perspective image using an image segmentation model, to obtain a blood vessel segmentation result related to the target medical image.
  • The fifth aspect of the present application provides an electronic device, including a memory and a processor coupled to each other. The processor is used to execute the program instructions stored in the memory, so as to implement the training method of the image segmentation model in the first aspect or the image segmentation method in the second aspect.
  • The sixth aspect of the present application provides a computer-readable storage medium on which program instructions are stored. When the program instructions are executed by a processor, the training method of the image segmentation model in the first aspect or the image segmentation method in the second aspect is implemented.
  • The seventh aspect of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code. When the computer-readable code runs in a processor of an electronic device, the processor in the electronic device implements the training method of the image segmentation model in the first aspect or the image segmentation method in the second aspect.
  • Fig. 1 is a first schematic flowchart of an embodiment of the training method of the image segmentation model of the present application
  • Fig. 2 is the second schematic flow chart of an embodiment of the training method of the image segmentation model of the present application
  • Fig. 3 is the third schematic flow chart of an embodiment of the training method of the image segmentation model of the present application.
  • Fig. 4 is a schematic structural diagram of the segmentation sub-network in the training method of the image segmentation model of the present application
  • Fig. 5 is the fourth schematic flow chart of an embodiment of the training method of the image segmentation model of the present application.
  • Fig. 6 is a fifth schematic flow chart of an embodiment of the training method of the image segmentation model of the present application.
  • Fig. 7 is a schematic structural diagram of the image segmentation model in the training method of the image segmentation model of the present application.
  • FIG. 8 is a first schematic flow diagram of an embodiment of an image segmentation method of the present application.
  • Fig. 9 is a schematic frame diagram of an embodiment of a training device for an image segmentation model of the present application.
  • Fig. 10 is a schematic frame diagram of an embodiment of an image segmentation device of the present application.
  • Fig. 11 is a schematic frame diagram of an embodiment of the electronic device of the present application.
  • Fig. 12 is a schematic diagram of an embodiment of a computer-readable storage medium of the present application.
  • The method steps in the embodiments disclosed in the present application may be executed by hardware, or by a processor running computer-executable code.
  • FIG. 1 is a schematic flow chart of an embodiment of an image segmentation model training method of the present application. Specifically, the following steps may be included:
  • Step S11 Obtain a plurality of sample perspective images extracted from the sample medical image from multiple perspectives respectively.
  • In one embodiment, the sample medical image may be a three-dimensional image, specifically a three-dimensional image obtained by scanning an organ.
  • For example, three-dimensional imaging may be performed using computed tomography (CT) to obtain the sample medical image. The sample medical image contains blood vessels, which can then be segmented.
  • The sample medical image is, for example, a three-dimensional image of the lungs or a three-dimensional image of the heart.
  • In one embodiment, the constituent unit of a three-dimensional image such as the sample medical image or a sample perspective image is the voxel.
  • A plurality of viewing angles means at least two viewing angles. In one embodiment, the plurality of viewing angles include two or more of the transverse, sagittal, and coronal viewing angles.
  • In one embodiment, extracting the multiple sample perspective images from the sample medical image from the multiple perspectives means cropping the sample medical image along the direction of each viewing angle to obtain the sample perspective images. For each viewing angle, several sub-sample images can be extracted from the sample medical image along the viewing angle, and the several sub-sample images of the viewing angle can be stitched together to obtain the sample perspective image corresponding to the viewing angle. The image extraction may be performed with a sliding window.
  • For example, several sub-sample images of a certain size can be extracted along the transverse direction and then spliced to obtain a sample perspective image: with a sliding window of size 128*128*128, four sub-sample images of size 128*128*128 are extracted along the transverse direction and stitched into a sample perspective image of size 128*128*512.
  • Therefore, by extracting a corresponding sample perspective image for each viewing angle, the image information corresponding to different viewing angles can be obtained, and the image segmentation model can subsequently perform blood vessel segmentation based on the image information of different viewing angles, which helps to improve the accuracy of blood vessel segmentation.
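A minimal numpy sketch of this sliding-window extraction and stitching, assuming a non-overlapping stride equal to the window size (the axis choice and stride are illustrative assumptions):

```python
import numpy as np

def extract_sample_view_image(volume: np.ndarray, axis: int,
                              window: int = 128) -> np.ndarray:
    """Slide a cubic window along `axis` and stitch the sub-sample images.

    For a volume spanning 512 voxels along `axis`, this extracts four
    128*128*128 sub-sample images and stitches them into a 128*128*512
    sample perspective image, matching the example above.
    """
    vol = np.moveaxis(volume, axis, -1)              # viewing axis last
    subs = [vol[:window, :window, i:i + window]      # one sliding-window crop
            for i in range(0, vol.shape[-1] - window + 1, window)]
    return np.concatenate(subs, axis=-1)             # stitch along the axis
```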
  • In one embodiment, the sample medical image may be obtained by resampling an initial sample medical image, so that the resolution of the sample medical image meets the requirements, which helps to improve the accuracy of blood vessel segmentation.
  • In one embodiment, the pixel values in the sample medical image can also be normalized to facilitate the training of the subsequent image segmentation model.
  • In one embodiment, operations such as rotation, translation, mirroring, and scaling can be performed on the sample perspective images to achieve data augmentation, balance the positive and negative samples, and increase the amount of data, which helps to improve the generalization of the image segmentation model and reduce the possibility of overfitting.
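A hedged sketch of such preprocessing and augmentation; the target spacing, the percentile-based intensity window, and the mirroring augmentation are common illustrative choices, not parameters given by the application:

```python
import numpy as np
from scipy import ndimage

def preprocess(volume: np.ndarray, spacing, target=(1.0, 1.0, 1.0)) -> np.ndarray:
    # Resample the initial sample medical image so its resolution meets
    # the requirements (linear interpolation; target spacing is assumed).
    zoom = [s / t for s, t in zip(spacing, target)]
    volume = ndimage.zoom(volume, zoom, order=1)
    # Normalize pixel values to [0, 1] within an assumed intensity window.
    lo, hi = np.percentile(volume, [0.5, 99.5])
    return np.clip((volume - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def augment(view_image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Simple data augmentation: random mirroring along each axis.
    for ax in range(view_image.ndim):
        if rng.random() < 0.5:
            view_image = np.flip(view_image, axis=ax)
    return view_image.copy()
```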
  • Step S12 Use the image segmentation model to perform image segmentation on each sample perspective image to obtain a blood vessel segmentation result related to the sample medical image.
  • In this way, the image segmentation model can use the image information of the sample perspective images from different perspectives to obtain more feature information about the blood vessels and finally output the blood vessel segmentation result related to the sample medical image.
  • In one embodiment, the blood vessel segmentation result may include segmentation results of arteries and veins in the sample medical image; that is, the result that each image point in the sample medical image belongs to an artery, a vein, or the background.
  • Step S13 Adjust the network parameters of the image segmentation model based on the blood vessel segmentation result.
  • In one embodiment, the blood vessel label information can be regarded as a classification result indicating whether each pixel in the sample medical image is a blood vessel or background, and may also include classification information indicating whether a blood vessel is an artery or a vein.
  • The network parameters of the image segmentation model can be adjusted according to the difference between the blood vessel segmentation result and the corresponding blood vessel label information, realizing the training of the image segmentation model using sample perspective images from different perspectives. In this way, the trained image segmentation model can use the image information of sample perspective images from different perspectives for blood vessel segmentation in subsequent applications, which helps to improve the accuracy of blood vessel segmentation.
  • FIG. 2 is a second schematic flowchart of an embodiment of an image segmentation model training method of the present application.
  • In this embodiment, the image segmentation model includes a plurality of segmentation sub-networks respectively corresponding to the multiple viewing angles, and a fusion sub-network; that is, the number of segmentation sub-networks equals the number of viewing angles, and the output results of all segmentation sub-networks are input into the fusion sub-network.
  • The step of "using the image segmentation model to perform image segmentation on each sample perspective image to obtain a blood vessel segmentation result related to the sample medical image" specifically includes step S121 and step S122.
  • Step S121 For each view, use the segmentation sub-network corresponding to the view to perform image segmentation on the sample view images corresponding to the view, and obtain the first blood vessel segmentation results corresponding to each view.
  • In this step, the segmentation sub-network corresponding to each viewing angle performs an image segmentation operation on the sample perspective image corresponding to that viewing angle, so as to obtain the first blood vessel segmentation result corresponding to each viewing angle. For example, the sample perspective image extracted from the sample medical image from the transverse viewing angle can be input into the corresponding segmentation sub-network to obtain the first blood vessel segmentation result of the transverse viewing angle.
  • The first blood vessel segmentation result may be a prediction result of whether each first image point of the sample perspective image belongs to a preset category, where the preset category includes at least one blood vessel category and a non-vessel category, and the blood vessel categories are, for example, artery and vein.
  • Step S122 Use the fusion sub-network to perform fusion processing on the first blood vessel segmentation results corresponding to each view angle to obtain a second blood vessel segmentation result of the sample medical image.
  • In this step, the fusion sub-network performs fusion processing on the first blood vessel segmentation results corresponding to the viewing angles, so that the fusion sub-network can segment blood vessels based on the image information of different viewing angles.
  • The second blood vessel segmentation result may be a prediction result of whether each second image point of the sample medical image belongs to a preset category, where the preset category includes at least one blood vessel category and a non-vessel category, and the blood vessel categories are, for example, artery and vein.
  • In one embodiment, the fusion sub-network may be a network with an encoding-decoding structure. The convolutional layers in the encoder and decoder can be atrous (dilated) convolutional layers, so that information from receptive fields of different sizes can be obtained, and a batch normalization layer and an activation layer can be connected after each convolutional layer. Pooling layers can be connected between the layers of the encoder, and upsampling can be performed between the encoder and the decoder and between the layers of the decoder.
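As an illustration of such an encoder block, the sketch below pairs an atrous 3D convolution with batch normalization and an activation; the channel counts and dilation rate are assumptions:

```python
import torch.nn as nn

class AtrousConvBlock(nn.Module):
    """One Conv-BN-ReLU block of the encoding-decoding fusion sub-network,
    using a dilated (atrous) convolution to enlarge the receptive field."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            # padding = dilation keeps the spatial size for a 3x3x3 kernel.
            nn.Conv3d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```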
  • In this way, the image segmentation model can segment blood vessels based on the image information of different viewing angles.
  • The step of "adjusting the network parameters of the image segmentation model based on the blood vessel segmentation results" mentioned in the above steps may specifically include at least one of the following steps:
  • Step S131 For each view, based on the first blood vessel segmentation results corresponding to each view and the local vessel segmentation labeling information corresponding to the view, adjust the parameters of the segmentation sub-network corresponding to the view.
  • The local blood vessel segmentation annotation information is the label information of the blood vessels in the sample perspective image corresponding to the viewing angle, so the segmentation sub-network can be trained using the first blood vessel segmentation result corresponding to each viewing angle and the local blood vessel segmentation annotation information corresponding to the viewing angle.
  • The segmentation sub-network can be trained by supervised or semi-supervised learning. For example, a loss value can be determined based on the difference between the first blood vessel segmentation result and the corresponding local blood vessel segmentation label information, and the parameters of the segmentation sub-network corresponding to the viewing angle can be adjusted according to the loss value.
  • Step S132 Adjust parameters of each segmentation sub-network and/or fusion sub-network based on the second blood vessel segmentation result and the global vessel segmentation labeling information of the sample medical image.
  • The global blood vessel segmentation annotation information is the label information of the blood vessels in the sample medical image. Because the second blood vessel segmentation result is obtained from the first blood vessel segmentation results, when training the image segmentation model based on the second blood vessel segmentation result and the global blood vessel segmentation annotation information of the sample medical image, the parameters of each segmentation sub-network and/or the fusion sub-network can be adjusted on this basis.
  • In one embodiment, the parameters of the segmentation sub-networks and the fusion sub-network can be adjusted simultaneously based on the difference between the second blood vessel segmentation result and the global blood vessel segmentation annotation information of the sample medical image. In another embodiment, only the parameters of the fusion sub-network are adjusted on this basis. In yet another embodiment, the parameters of the segmentation sub-networks are first adjusted based on the first blood vessel segmentation results and the corresponding local blood vessel segmentation annotation information, and the parameters of the fusion sub-network are then adjusted based on the second blood vessel segmentation result and the global blood vessel segmentation annotation information of the sample medical image.
  • In this way, the training of the segmentation sub-networks and the fusion sub-network can be realized.
  • In one embodiment, the segmentation sub-network includes a sequentially connected feature processing layer, attention layer, and prediction layer. The segmentation sub-network is, for example, a 3D-Unet, and the attention layer can be set after the feature processing layer.
  • FIG. 3 is a schematic flowchart of a third embodiment of an image segmentation model training method of the present application.
  • The step of "using the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the sample perspective image corresponding to the viewing angle to obtain the first blood vessel segmentation result corresponding to each viewing angle" mentioned above specifically includes steps S1211 to S1213.
  • Step S1211 Use the feature processing layer to perform feature extraction on the sample perspective image corresponding to the viewing angle to obtain the sample feature map corresponding to the viewing angle.
  • The feature processing layer is used to extract the feature information of the sample perspective image, so as to obtain the sample feature map corresponding to the sample perspective image. It can be understood that the feature processing layer of each segmentation sub-network outputs a sample feature map.
  • Step S1212 Use the attention layer to process the sample feature map corresponding to the view, and obtain the region prediction result corresponding to the view.
  • The attention layer is, for example, an attention module based on an attention mechanism. The attention module can be a common attention module in the field of deep learning, which will not be described in detail here.
  • The region prediction result corresponding to the viewing angle is used to represent the position of the preset region in the sample perspective image corresponding to the viewing angle; for example, it may indicate the probability that each voxel in the sample feature map belongs to the preset region. In one embodiment, the preset region in the sample perspective image may be the centerline of the blood vessel.
  • In this way, the segmentation sub-network can pay more attention to the image information near the preset region during subsequent image segmentation, improving the sensitivity of the segmentation sub-network to blood vessel feature information, which in turn helps to improve the accuracy of blood vessel segmentation. Since the area near the centerline of a blood vessel is also a vessel area, setting the preset region to the centerline allows the segmentation sub-network to pay more attention to the area near the centerline when segmenting the sample perspective image.
  • Step S1213 Use the prediction layer to predict the first blood vessel segmentation result corresponding to each viewing angle based on the region prediction result corresponding to each viewing angle.
  • The prediction layer performs further prediction according to the region prediction result corresponding to the viewing angle, so as to obtain the first blood vessel segmentation result corresponding to each viewing angle.
  • In one embodiment, the sample feature map can be weighted based on the region prediction result corresponding to the viewing angle, so that the feature information of the sample feature map near the preset region carries a greater weight; the prediction layer can then refer to more feature information near the preset region when obtaining the first blood vessel segmentation result, making the first blood vessel segmentation result more accurate.
  • In one embodiment, the first blood vessel segmentation result may include a prediction result of whether each first image point of the sample perspective image belongs to a preset category; for example, it may indicate that the first image point belongs to an artery, a vein, or the background.
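One plausible form of this re-weighting is to gate the sample feature map with the predicted region probabilities before the prediction layer; the residual (1 + p) form below is an assumption for illustration, not the application's exact formulation:

```python
import torch

def attention_gate(feature_map: torch.Tensor,
                   region_prob: torch.Tensor) -> torch.Tensor:
    """Emphasize feature information near the preset region.

    feature_map: (N, C, D, H, W) sample feature map from the feature
        processing layer.
    region_prob: (N, 1, D, H, W) region prediction result, the per-voxel
        probability of belonging to the preset region (e.g. the centerline).
    """
    # (1 + p) keeps the original features while boosting those whose
    # voxels are predicted to lie near the preset region.
    return feature_map * (1.0 + region_prob)
```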
  • In one embodiment, adjusting the parameters of the segmentation sub-network corresponding to the viewing angle can specifically mean adjusting the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer.
  • The local blood vessel segmentation annotation information mentioned in the above steps may include first annotation information indicating whether each first image point of the sample perspective image belongs to a preset category, and second annotation information indicating whether the first image point belongs to the preset region. The preset category includes at least one blood vessel category and a non-vessel category. The first image point of the sample perspective image is, for example, a voxel of the sample perspective image. In one embodiment, the blood vessel categories include artery and vein, and the non-vessel category, belonging to neither artery nor vein, serves as the background.
  • The step of "adjusting the parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation result corresponding to each viewing angle and the local blood vessel segmentation annotation information corresponding to the viewing angle" may include at least one of the following steps:
  • Step S1311 Adjust at least the parameters of the attention layer based on the region prediction result corresponding to the view and the second annotation information corresponding to the view.
  • In this step, the parameters of the attention layer can be adjusted based on the difference between the region prediction result corresponding to the viewing angle and the second annotation information corresponding to the viewing angle. In one embodiment, the parameters of both the attention layer and the feature processing layer may be adjusted on this basis.
  • Step S1312 Adjust the parameters of at least one of the feature processing layer, the attention layer and the prediction layer based on the first blood vessel segmentation results corresponding to each view and the first annotation information corresponding to the view.
  • In this step, the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer can be adjusted. In one embodiment, the parameters of the feature processing layer, the attention layer, and the prediction layer are all adjusted; in another embodiment, the parameters of the feature processing layer and the prediction layer are adjusted; in yet another embodiment, only the parameters of the prediction layer are adjusted.
  • In this way, training of at least one of the feature processing layer, the attention layer, and the prediction layer can be implemented based on the first blood vessel segmentation result corresponding to each viewing angle and the first annotation information corresponding to the viewing angle.
  • In one embodiment, the difference between the first blood vessel segmentation result corresponding to each viewing angle and the first annotation information may be used with a loss function to determine a loss value, which is then used to adjust the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer.
  • In one embodiment, the weight of a first image point farther from the centerline of the blood vessel can be set larger, and the loss values are then weighted by the weights of the first image points. In this way, when training the segmentation sub-network, a higher weight is given to the edge region of the blood vessel, so that the edge region becomes a focus of network training, improving the accuracy with which the segmentation sub-network segments the edge region of the blood vessel.
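A minimal sketch of this distance-based loss weighting, assuming a per-voxel cross-entropy and a weight that grows linearly with distance to the centerline (both are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def edge_weighted_loss(logits: torch.Tensor, target: torch.Tensor,
                       dist_to_centerline: torch.Tensor) -> torch.Tensor:
    """Cross-entropy weighted per first image point by centerline distance.

    logits: (N, C, D, H, W) output of the segmentation sub-network.
    target: (N, D, H, W) first annotation information as class indices.
    dist_to_centerline: (N, D, H, W) distances; points farther from the
        centerline (the vessel edge region) receive larger weights.
    """
    per_voxel = F.cross_entropy(logits, target, reduction="none")
    weights = 1.0 + dist_to_centerline   # assumed linear weighting rule
    return (weights * per_voxel).sum() / weights.sum()
```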
  • In one embodiment, the segmentation sub-network includes at least one processing unit and a prediction layer connected in sequence. Each processing unit includes a feature processing layer, at least some of the processing units further include an attention layer connected after the feature processing layer, and the prediction layer obtains the first blood vessel segmentation result based on the region prediction result output by at least one attention layer.
  • The processing units are connected sequentially, and the last processing unit is connected to the prediction layer. The feature processing layer of each processing unit may be a feature extraction layer or a feature decoding layer, and in at least some processing units an attention layer is connected after the feature processing layer.
  • FIG. 4 is a schematic structural diagram of the segmentation sub-network in the training method of the image segmentation model of the present application.
  • As shown in FIG. 4, the network structure of the segmentation sub-network 40 is a 3D-Unet structure. The number of processing units 41 is 9, namely processing units S1 to S9, and the prediction layer 42 is S10.
  • The feature processing layer 411 in processing units S1 to S5 is a feature extraction layer, and the feature processing layer 411 in processing units S6 to S9 is a feature decoding layer. Each feature processing layer 411 is followed by an attention layer 412.
  • In one embodiment, the feature processing layer 411 may include two sub-processing layers, and each sub-processing layer may include a convolutional layer (Conv), batch normalization (BN), and an activation function (ReLU). The prediction layer S10 includes a convolutional layer (Conv) and a normalized exponential function (softmax).
  • The number next to each layer in FIG. 4 indicates the number of channels of that layer; for example, the number of channels of the first sub-processing layer of the feature processing layer 411 of processing unit S1 is 16. "Maxpooling" denotes the max pooling operation, "upsample" denotes upsampling, "Conv" denotes the convolution operation, and the concatenation symbol denotes the feature union operation.
  • During training, the sample perspective image is input to processing unit S1, and the prediction layer 42 finally outputs the first blood vessel segmentation result.
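As a concrete reading of this structure, the sketch below implements one processing unit (two Conv-BN-ReLU sub-processing layers, optionally followed by an attention layer) and the Conv + softmax prediction layer; the internals of the attention layer are an assumption, since the text does not fix them:

```python
import torch
import torch.nn as nn

class ProcessingUnit(nn.Module):
    """Feature processing layer (two Conv-BN-ReLU sub-processing layers),
    optionally followed by an attention layer emitting a region prediction."""

    def __init__(self, in_ch: int, out_ch: int, with_attention: bool = True):
        super().__init__()
        layers = []
        for ci, co in ((in_ch, out_ch), (out_ch, out_ch)):
            layers += [nn.Conv3d(ci, co, 3, padding=1),
                       nn.BatchNorm3d(co),
                       nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*layers)
        # Assumed attention layer: 1x1x1 conv + sigmoid region probability.
        self.attention = (nn.Sequential(nn.Conv3d(out_ch, 1, 1), nn.Sigmoid())
                          if with_attention else None)

    def forward(self, x):
        feat = self.features(x)
        if self.attention is None:
            return feat, None
        region = self.attention(feat)          # region prediction result
        return feat * (1.0 + region), region   # gate features toward it

class PredictionLayer(nn.Module):
    """Prediction layer S10: convolution followed by softmax."""

    def __init__(self, in_ch: int, num_classes: int = 3):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, num_classes, 1)

    def forward(self, x):
        return torch.softmax(self.conv(x), dim=1)  # artery / vein / background
```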
  • In one embodiment, the parameters of each attention layer in the segmentation sub-network are adjusted based on the region prediction results corresponding to all attention layers and the second annotation information corresponding to the viewing angle. The step of "adjusting at least the parameters of the attention layer based on the region prediction result corresponding to the viewing angle and the second annotation information corresponding to the viewing angle" specifically includes steps S13111 to S13113.
  • Step S13111 Use the difference between the region prediction result output by each attention layer and the second annotation information corresponding to the viewing angle to obtain the first loss value of each attention layer.
  • Each attention layer outputs a region prediction result, so the difference between the region prediction result output by each attention layer and the second annotation information corresponding to the viewing angle can be used to obtain the first loss value of that attention layer.
  • In one embodiment, the difference corresponding to each attention layer and at least one structural weight may be used to obtain the first loss value of each attention layer, where the difference corresponding to each attention layer is the difference between the region prediction result output by that attention layer and the second annotation information, and the at least one structural weight is the weight of the attention layer and/or the weight of the segmentation sub-network where the attention layer is located. The weight of the attention layer may be the weight of the loss value of the attention layer, and the weight of the segmentation sub-network where the attention layer is located indicates the overall weight of that segmentation sub-network.
  • In one embodiment, the first loss value is determined using a regularized loss function; that is, in the process of calculating the first loss value, the loss value obtained with the loss function is further constrained by a regularization term. By using a regularized loss function to further constrain the first loss value, the feature extraction ability of the attention layer for the vessel region can be enhanced.
  • Step S13112 Fuse the first loss values of the attention layers to obtain the second loss value.
  • In this step, the first loss values of the attention layers are fused to obtain a comprehensive loss value representing all attention layers, that is, the second loss value. In one embodiment, the second loss value is obtained by weighting the first loss value of each attention layer by the loss weight of that attention layer.
  • In one embodiment, formula (1) for calculating the second loss value is as follows:

    $$L_{\text{attention}}(X, Y, \mathbf{w}) = \sum_{s=1}^{S} \alpha_s \, \ell^{(s)}(X, Y, w_s) \tag{1}$$

    where $L_{\text{attention}}(X, Y, \mathbf{w})$ represents the loss value based on all attention layers in a segmentation sub-network, i.e. the second loss value; $\ell^{(s)}$ is the first loss value of the $s$-th attention layer; $X$ is the first image point; $Y$ is the corresponding second annotation information; $\mathbf{w} = (w_1; w_2; \ldots; w_S)$ represents the weights of the segmentation sub-network where each attention layer is located; and $(\alpha_1; \alpha_2; \ldots; \alpha_S)$ represents the loss weights of the attention layers.
  • In one embodiment, the loss weight of an attention layer closer to the prediction layer can be set larger; for example, $\alpha_1$ to $\alpha_9$ may be set to 0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.6, 0.8, 0.8. In this way, by setting larger loss weights for attention layers closer to the prediction layer, the obtained second loss value can be made more reasonable.
  • In one embodiment, the regularization loss function is, for example, an L2 regularization loss function. With the per-layer loss written as a cross-entropy, formulas (2) and (3) for calculating the second loss value are as follows:

    $$\ell^{(s)}(X, Y, w_s) = -\sum_{i} \Big[ y_i \log P\big(y_i \mid X; w_s\big) + (1 - y_i) \log\big(1 - P(y_i \mid X; w_s)\big) \Big] \tag{2}$$

    $$L_{\text{attention}}(X, Y, \mathbf{w}) = \sum_{s=1}^{S} \alpha_s \, \ell^{(s)}(X, Y, w_s) + \ell_{\text{reg}}(\mathbf{w}) \tag{3}$$

    and the calculation formula (4) of the L2 regularization loss function is as follows:

    $$\ell_{\text{reg}}(\mathbf{w}) = \lambda \sum_{s=1}^{S} \lVert w_s \rVert_2^2 \tag{4}$$

    where $\ell_{\text{reg}}(\mathbf{w})$ is the loss value of the L2 regularization loss function based on each attention layer in a segmentation sub-network; $\mathbf{w} = (w_1; w_2; \ldots; w_S)$ represents the weights of the attention layers of the processing units; $\lambda$ is the regularization coefficient; $X$ is the first image point; $Y$ is the corresponding second annotation information; $y_i$ is the region annotation corresponding to the $i$-th first image point; and $P$ represents the probability value of the region prediction result of the first image point.
  • Step S13113 Adjust the parameters of each attention layer based on the second loss value.
  • In this step, the parameters of each attention layer can be adjusted according to the second loss value, so as to realize the training of the attention layers.
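A hedged PyTorch sketch of steps S13111 to S13113, matching formulas (1) to (4) as reconstructed above; the binary cross-entropy form and the value of the regularization coefficient are assumptions:

```python
import torch
import torch.nn.functional as F

def attention_loss(region_preds, region_target, attention_layers,
                   alphas, lam: float = 1e-4) -> torch.Tensor:
    """Second loss value: loss-weighted sum of per-attention-layer first
    loss values plus an L2 penalty on the attention layers' weights.

    region_preds: list of (N, 1, D, H, W) region prediction results,
        one per attention layer (sigmoid probabilities).
    region_target: (N, 1, D, H, W) second annotation information in {0, 1}.
    alphas: loss weight per attention layer, e.g. (0.2, 0.2, 0.4, ...).
    """
    total = region_preds[0].new_zeros(())
    for alpha, pred in zip(alphas, region_preds):
        # First loss value of this attention layer (binary cross-entropy).
        total = total + alpha * F.binary_cross_entropy(pred, region_target)
    # L2 regularization over the attention layers' parameters.
    reg = sum(p.pow(2).sum() for layer in attention_layers
              for p in layer.parameters())
    return total + lam * reg
```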
  • In one embodiment, the fusion sub-network includes a weight determination layer and a fusion output layer, and may also include several feature extraction layers and several encoding layers.
  • FIG. 5 is a schematic diagram of a fourth flow chart of an embodiment of an image segmentation model training method of the present application.
  • the "using the fusion sub-network to perform fusion processing on the first blood vessel segmentation results corresponding to each viewing angle to obtain the second blood vessel segmentation result of the sample medical image" mentioned in the above steps specifically includes step S1221 and step S1222.
  • Step S1221 Using the weight determination layer to process the first blood vessel segmentation results corresponding to multiple viewing angles to obtain fusion weight information corresponding to each viewing angle.
  • the first blood vessel segmentation results corresponding to multiple viewing angles can be channel-concatenated, so as to obtain first blood vessel segmentation result information representing the multiple viewing angles.
  • the weight determination layer may process the first blood vessel segmentation results corresponding to the multiple views either directly, or after other network layers in the fusion sub-network have first processed them.
  • the fusion weight information corresponding to each viewing angle may be the weight of the probability that a second image point in the sample medical image belongs to a given category. By using the weight determination layer to combine the first blood vessel segmentation result information corresponding to multiple perspectives, the fusion sub-network can output different fusion weight information for different first blood vessel segmentation results, realizing a soft fusion of the first blood vessel segmentation result information corresponding to the multiple perspectives, which helps improve the accuracy of blood vessel segmentation.
  • because the fusion weight information combines the information of the first blood vessel segmentation results corresponding to multiple views, subsequently using the fusion weight information for blood vessel segmentation can reduce the misclassification of blood vessel branches.
  • the fusion formula (5) for the fusion weight information is as follows (writing Ŷ for the channel-concatenated first blood vessel segmentation result information):

    F(W_g) = G(Ŷ; W_g)

  • where F(W_g) is the fusion weight information, G is the fusion sub-network, W_g is the weight of the fusion sub-network, and Ŷ is the first blood vessel segmentation result information corresponding to the multiple viewing angles.
  • Step S1222 Use the fusion output layer to fuse the first blood vessel segmentation results corresponding to multiple views based on the fusion weight information corresponding to each view, to obtain the second blood vessel segmentation result of the sample medical image.
  • the fusion output layer can be used to fuse the fusion weight information with the first blood vessel segmentation results corresponding to the multiple perspectives, so as to make full use of the first blood vessel segmentation results corresponding to the multiple perspectives and obtain the second blood vessel segmentation result of the sample medical image.
  • in this way, the fusion sub-network can use the image information of the sample view images of different views to perform blood vessel segmentation, which helps improve the accuracy of blood vessel segmentation.
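For illustration only, a minimal PyTorch sketch of such a weight determination layer plus fusion output layer is given below; the 1x1x1 convolution, the softmax normalization across views, and all names are assumptions rather than the patent's exact layers:

```python
import torch
import torch.nn as nn

class SoftFusion(nn.Module):
    """Minimal sketch of the weight determination + fusion output layers.

    Assumes each of the V views yields a C-channel probability map of the same
    shape; the 1x1x1 convolution and the softmax across views are illustrative
    choices, not the patent's exact architecture.
    """
    def __init__(self, num_views: int, num_classes: int):
        super().__init__()
        # Weight determination layer: maps the channel-concatenated first
        # segmentation results to one fusion weight per view.
        self.weight_layer = nn.Conv3d(num_views * num_classes, num_views, kernel_size=1)

    def forward(self, per_view_probs):  # list of V tensors, each (N, C, D, H, W)
        stacked = torch.cat(per_view_probs, dim=1)              # channel concatenation
        weights = torch.softmax(self.weight_layer(stacked), 1)  # fusion weight per view
        fused = sum(weights[:, v:v + 1] * p for v, p in enumerate(per_view_probs))
        return fused  # soft-fused second blood vessel segmentation result
```

Because the fusion weights are computed from the concatenated per-view results themselves, different first blood vessel segmentation results yield different fusion weight information, which is the soft-fusion behavior described above.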
  • the global blood vessel segmentation labeling information mentioned in the above steps includes third labeling information indicating whether each second image point of the sample medical image belongs to a preset category, and the second blood vessel segmentation result includes prediction information indicating whether each second image point belongs to a preset category.
  • the preset categories include at least one vascular category and a non-vascular category.
  • at least one blood vessel class includes at least one of arteries and veins.
  • the adjustment of the parameters of the fusion sub-network mentioned in the above steps may specifically include adjusting the parameters of the weight determination layer and/or the fusion output layer.
  • FIG. 6 is a schematic flowchart of a fifth embodiment of a method for training an image segmentation model of the present application.
  • the "adjusting the parameters of each segmentation sub-network and/or fusion sub-network based on the second blood vessel segmentation result and the global vessel segmentation labeling information of the sample medical image" mentioned in the above steps specifically includes steps S1321 to S1324.
  • Step S1321 Based on the positional relationship between each second image point and a preset region of blood vessels in the sample medical image, determine the position weight of each second image point.
  • the preset area is a centerline, specifically, a centerline of a blood vessel, that is, a centerline of an arterial vessel and a centerline of a venous vessel.
  • the positional relationship between each second image point and a preset area of the blood vessel in the sample medical image may be the distance between the second image point and the preset area.
  • step S1321 may specifically include step S13211 and step S13212.
  • Step S13211 Determine the reference distance of each second image point.
  • the reference distance of a second image point belonging to the blood vessel category is the distance between that second image point and the preset area of the blood vessel in the sample medical image.
  • when the preset area of the blood vessel is the centerline of the blood vessel, the distance from a point on the centerline of the blood vessel to the preset area can be regarded as zero.
  • the reference distance of a second image point belonging to the non-blood-vessel category is a preset distance value; that is, the reference distance of a second image point belonging to the background is a preset distance value, for example, 0.
  • Step S13212 Based on the reference distance of each second image point, determine the position weight of each second image point.
  • after the positional relationship between each second image point and the preset area of the blood vessel in the sample medical image is obtained, the position weight of each second image point can be determined based on the reference distance according to the needs of network training.
  • the position weight of each second image point can be determined based on the reference distance of each second image point, so that the position weight can reflect the distance characteristic of the reference distance.
  • the greater the reference distance of the second image point belonging to the blood vessel category, the greater the corresponding position weight, and the position weight of the second image point belonging to the non-blood vessel category is a preset weight value.
  • in this way, the weight of second image points in the edge area of the blood vessel region can be made larger, so that the fusion sub-network pays more attention to the edge area of the blood vessel during training, which helps improve the segmentation accuracy for the edge area of the blood vessel.
  • the reference distance formula (6) for a second image point belonging to the blood vessel category is, from the definitions, the distance to the nearest point of the preset area:

    d_i = min_{c_j} ‖ y_i − c_j ‖

  • where d_i is the reference distance of the second image point belonging to the blood vessel category, c_j is a point on the preset area, and y_i is the second image point belonging to the blood vessel category.
  • the calculation formula (7) of the position weight of the second image point normalizes the reference distance by the edge distance:

    D = d_i / max(d_i)

  • where max(d_i) is the reference distance from the second image point at the edge of the blood vessel to the preset area, d_i is the reference distance of the second image point belonging to the blood vessel category, and D is the position weight of the second image point.
  • in other embodiments, the position weight of the second image point may be calculated by a variant calculation formula (8).
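To make the distance computation concrete, here is a hedged NumPy/SciPy sketch of formulas (6) and (7); the use of `scipy.ndimage.distance_transform_edt` and the default background weight value are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def position_weights(vessel_mask, centerline_mask, background_weight=1.0):
    """Sketch of formulas (6)-(7): distance-to-centerline based position weights.

    vessel_mask / centerline_mask: boolean 3D arrays over the second image points.
    Points on the centerline have reference distance 0; vessel points farther from
    the centerline (the vessel edge) receive larger position weights, while
    non-vessel (background) points get a preset weight value.
    """
    # Formula (6): d_i = distance from each point to the nearest centerline point.
    d = distance_transform_edt(~centerline_mask)
    w = np.full(vessel_mask.shape, background_weight, dtype=np.float32)
    d_vessel = d[vessel_mask]
    if d_vessel.size:
        # Formula (7): normalize by the edge distance max(d_i) so that
        # vessel-edge points receive the largest position weight.
        w[vessel_mask] = d_vessel / d_vessel.max()
    return w
```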
  • Step S1322 Obtain a third loss value of each second image point based on the prediction information and the third label information corresponding to each second image point.
  • the prediction information corresponding to a second image point corresponds to the third label information (whether the second image point belongs to the preset category); therefore, based on the difference between the two, a relevant loss function can be used to calculate the third loss value of the second image point.
  • the third loss value is obtained based on loss values corresponding to multiple different loss functions.
  • the third loss value may be obtained based on loss values corresponding to two different loss functions.
  • the two different loss functions are, for example, Cross Entropy Loss (CEL) and Dice Loss.
  • the formula (9) for calculating the third loss value is as follows (writing λ for the weight of the CEL term):

    L_total = L_dl + λ · L_cel

  • where L_total is the third loss value, L_dl is the loss value corresponding to the Dice Loss function, L_cel is the loss value corresponding to the CEL loss function, and λ is the weight of the loss value corresponding to the CEL loss function.
  • the formula (10) for calculating the loss value corresponding to the CEL loss function is a class-weighted cross entropy, where Y_{+1}, Y_{+2}, and Y_{−} represent the artery, vein, and background sets of the third label information, W represents the weights of each segmentation sub-network and the fusion sub-network used to produce the prediction information corresponding to the second image points, and α and β are adjustment coefficients.
  • the formula (11) for calculating the loss value corresponding to the Dice Loss function takes, for example, the standard soft-Dice form:

    L_dl = 1 − (2 Σ_i p_i g_i) / (Σ_i p_i + Σ_i g_i)

    where p_i is the predicted probability and g_i the label of the i-th second image point.
  • the execution order of step S1321 and step S1322 is not limited.
  • Step S1323 Perform weighting processing on the third loss value of each second image point by using the position weight of each second image point to obtain a fourth loss value.
  • the third loss value of each second image point is weighted using the position weight of each second image point, so that the obtained fourth loss value can reflect the differing importance of different second image points: a second image point with a large position weight has a greater influence on the fourth loss value, so that training of the fusion sub-network pays more attention to second image points with large position weights.
  • the formula (12) for calculating the fourth loss value applies the position weights to formula (9):

    L_total = D · (L_dl + λ · L_cel)

  • where L_total here is the fourth loss value, L_dl is the loss value corresponding to the Dice Loss function, L_cel is the loss value corresponding to the CEL loss function, λ is the weight of the loss value corresponding to the CEL loss function, and D is the position weight of the second image points.
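A sketch of how formulas (9) to (12) could be combined is given below; the standard soft-Dice form, applying the position weights only to the per-point cross-entropy term, and the name `lam` for the CEL weight are all assumptions:

```python
import torch
import torch.nn.functional as F

def weighted_total_loss(logits, target, pos_weight, lam=1.0, eps=1e-6):
    """Sketch of formulas (9)-(12): position-weighted Dice + cross-entropy loss.

    logits: (N, C, D, H, W) predictions for artery / vein / background.
    target: (N, D, H, W) int64 class labels (third label information).
    pos_weight: (N, D, H, W) position weights D of the second image points.
    lam: weight of the CEL term (placeholder name for the lost symbol).
    """
    # Per-point cross-entropy, weighted by the position weights (formula (12)).
    cel = F.cross_entropy(logits, target, reduction="none")
    cel = (pos_weight * cel).mean()

    # Soft multi-class Dice loss (a standard form is assumed for formula (11)).
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.shape[1]).movedim(-1, 1).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3, 4))
    denom = probs.sum(dim=(0, 2, 3, 4)) + onehot.sum(dim=(0, 2, 3, 4))
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

    return dice + lam * cel  # fourth loss value
```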
  • Step S1324 Based on the fourth loss value, adjust the parameters of the segmentation sub-network and/or the fusion sub-network.
  • the parameters of the segmentation sub-network and/or the fusion sub-network can be adjusted according to the fourth loss value. In one embodiment, only the fusion sub-network may be adjusted; in another embodiment, the parameters of both the segmentation sub-networks and the fusion sub-network may be adjusted.
  • in this way, the network can be made to pay more attention to second image points with large position weights during training, thereby improving the network's blood vessel segmentation accuracy in regions where the second image points have large position weights.
  • the global blood vessel segmentation labeling information further includes fourth labeling information indicating whether the second image point belongs to a preset area of a blood vessel.
  • the method for training the image segmentation model of the present application further includes step S21 and step S22.
  • Step S21 Using the fourth annotation information, determine the position of the preset area in the sample medical image.
  • based on the fourth annotation information indicating which second image points belong to the preset area of the blood vessel, the position of the preset area in the sample medical image can be determined.
  • Step S22 Using the second blood vessel segmentation result or the third annotation information, determine whether each second image point in the sample medical image belongs to the blood vessel category or belongs to the non-vessel category.
  • the second blood vessel segmentation result can be used to determine whether each second image point in the sample medical image belongs to the blood vessel category or to the non-vascular category.
  • since the third labeling information also includes information on whether the second image points of the sample medical image belong to the preset category, it can likewise be determined based on the third labeling information whether each second image point in the sample medical image belongs to the blood vessel category or the non-vascular category.
  • with the position of the preset area and the category of each second image point determined, the position weight of each second image point can subsequently be determined, reflecting the positional relationship between the image point and the preset region.
  • FIG. 7 is a schematic structural diagram of the image segmentation model in the training method of the image segmentation model in the present application.
  • the image segmentation model 70 includes a plurality of segmentation sub-networks 73 and fusion sub-networks 75 .
  • an encoder 751 and a decoder 752 are included in the fusion sub-network 75 .
  • a weight determination layer 7521 and a fusion output layer 7522 are also included.
  • the sample medical image 71 is extracted into sample perspective images 72 corresponding to three perspectives, and each sample perspective image 72 is input into the corresponding segmentation sub-network 73 to obtain first segmentation results 74 corresponding to the respective perspectives. All the first segmentation results 74, after a feature joint operation is performed on them, are input into the encoder 751 of the fusion sub-network 75.
  • the first feature information to be input to the weight determination layer 7521 can be obtained by taking the dot product of the feature information output by the last sub-processing layer of the first feature processing layer in the encoder 751, and the second feature information to be input to the weight determination layer 7521 can be obtained from the network layer preceding the weight determination layer 7521.
  • the weight determination layer 7521 may obtain fusion weight information based on the first feature information and the second feature information.
  • the fusion weight information continues to be decoded by other network layers of the decoder 752 , and finally the fusion output layer 7522 outputs the second blood vessel segmentation result 76 .
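Putting the FIG. 7 data flow together, a skeleton of the overall model might look as follows; every sub-module here is a placeholder, since the patent does not fix the exact layers:

```python
import torch
import torch.nn as nn

class MultiViewVesselSegmenter(nn.Module):
    """Illustrative skeleton of the FIG. 7 data flow; all sub-modules are placeholders.

    Each viewing angle has its own segmentation sub-network; the per-view first
    segmentation results are channel-concatenated (the "feature joint operation")
    and passed through the fusion sub-network, whose weight determination and
    fusion output layers emit the second blood vessel segmentation result.
    """
    def __init__(self, seg_subnets: nn.ModuleList, fusion_subnet: nn.Module):
        super().__init__()
        self.seg_subnets = seg_subnets   # one sub-network per viewing angle
        self.fusion_subnet = fusion_subnet

    def forward(self, view_images):      # list of per-view view images
        first_results = [net(img) for net, img in zip(self.seg_subnets, view_images)]
        joint = torch.cat(first_results, dim=1)   # feature joint operation
        return self.fusion_subnet(joint), first_results
```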
  • FIG. 8 is a schematic flowchart of a first embodiment of an image segmentation method in the present application.
  • the image segmentation method may include step S31 and step S32.
  • Step S31 Obtain multiple target view images extracted from the target medical image from multiple viewing angles.
  • the target medical image contains blood vessels
  • the image segmentation model can perform blood vessel segmentation on the blood vessels contained in the target medical image.
  • the target medical image is a three-dimensional image obtained by scanning organs.
  • for the method of obtaining the target medical image, reference may be made to the method of obtaining the sample medical image in step S11 above, and details will not be repeated here.
  • the plurality of viewing angles include multiple of the transverse (axial), sagittal, and coronal viewing angles.
  • for each viewing angle, several sub-target images can be extracted from the target medical image along that viewing angle, and the several sub-target images can be spliced to obtain the target viewing angle image corresponding to the viewing angle.
  • for the method of obtaining the target perspective images, reference may be made to the method for obtaining the sample perspective images in step S11 above, which will not be repeated here.
  • Step S32 Using the image segmentation model to perform image segmentation on each target view image to obtain a blood vessel segmentation result related to the target medical image.
  • the blood vessel segmentation result may be classification information of the image points (voxels) of the target medical image, and the classification information includes artery, vein, and background. That is, the image segmentation model can be used to perform image segmentation on each target view image, so as to obtain the classification information of the image points of the target medical image.
  • the image segmentation model can use the image information of the target perspective images from multiple perspectives to perform blood vessel segmentation, which helps to improve the segmentation accuracy of the image segmentation model.
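As a small usage sketch, the fused per-class probabilities can be turned into the per-voxel classification information with an argmax; the class indices below are assumed, since the patent only names the categories:

```python
import torch

# Hypothetical class indices; the patent only states that the classification
# information includes artery, vein, and background.
CLASSES = {0: "background", 1: "artery", 2: "vein"}

def classify_voxels(fused_probs: torch.Tensor) -> torch.Tensor:
    """Turn fused per-class probabilities (N, C, D, H, W) into per-voxel
    classification information (N, D, H, W) for the target medical image."""
    return fused_probs.argmax(dim=1)
```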
  • the image segmentation model may be the image segmentation model described in the above-mentioned embodiment of the training method of the image segmentation model.
  • the image segmentation model is obtained through training using the image segmentation model training method described in the above embodiments; by obtaining the image segmentation model in this way, the accuracy of blood vessel segmentation is higher when the trained image segmentation model is used for blood vessel segmentation.
  • the image segmentation model described in this embodiment includes multiple segmentation sub-networks and fusion sub-networks respectively corresponding to multiple viewing angles.
  • the segmentation sub-network and the fusion sub-network are, for example, the segmentation sub-network and the fusion sub-network of the image segmentation model described in the embodiment of the training method for the image segmentation model above.
  • the "using the image segmentation model to perform image segmentation on each target viewing angle image to obtain a blood vessel segmentation result related to the target medical image" mentioned in the above steps specifically includes steps S321 and S322.
  • Step S321 For each viewing angle, use the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the target viewing angle image corresponding to the viewing angle, and obtain the first blood vessel segmentation results corresponding to each viewing angle.
  • the above-mentioned first blood vessel segmentation result corresponding to the viewing angle includes first prediction information indicating whether each first image point in the target viewing angle image corresponding to the viewing angle belongs to a preset category.
  • the preset categories include at least one vascular category and a non-vascular category.
  • Blood vessel classes are, for example, arteries and veins.
  • For a detailed description of step S321, please refer to the relevant description of step S121 in the embodiment of the above-mentioned image segmentation model training method, which will not be repeated here.
  • Step S322 Use the fusion sub-network to perform fusion processing on the first blood vessel segmentation results corresponding to each view angle to obtain a second blood vessel segmentation result of the target medical image.
  • the second blood vessel segmentation result includes second prediction information indicating whether each second image point in the target medical image belongs to a preset category, and the preset category includes at least one of a blood vessel category and a non-vascular category.
  • For a detailed description of step S322, please refer to the relevant description of step S122 in the embodiment of the above-mentioned image segmentation model training method, which will not be repeated here.
  • the fusion sub-network can use the prediction information of the first blood vessel segmentation results from multiple views, which helps to improve the segmentation of the image segmentation model. Accuracy.
  • the "use the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the target viewing angle image corresponding to the viewing angle to obtain the first blood vessel segmentation results corresponding to each viewing angle" mentioned in the above steps specifically includes steps S3211 to S3213.
  • Step S3211 Perform feature extraction on the sample view images corresponding to the view angles to obtain sample feature maps corresponding to the view angles.
  • For a detailed description of step S3211, please refer to the related description of step S1211 in the embodiment of the above-mentioned image segmentation model training method, which will not be repeated here.
  • Step S3212 Process the sample feature map corresponding to the viewing angle to obtain the region prediction result corresponding to the viewing angle.
  • the region prediction result corresponding to the viewpoint is used to represent the position of the preset region in the sample viewpoint image corresponding to the viewpoint.
  • the preset area is the centerline of the blood vessel.
  • the region prediction result includes probability information that each first image point in the target perspective image is a preset region.
  • the "processing the sample feature map corresponding to the view to obtain the region prediction result corresponding to the view" mentioned in step S3212 is performed by the attention layer of the segmentation sub-network.
  • For a detailed description of step S3212, please refer to the related description of step S1212 in the embodiment of the above-mentioned image segmentation model training method, which will not be repeated here.
  • Step S3213 Predict and obtain the first blood vessel segmentation results corresponding to each viewing angle based on the region prediction results corresponding to the viewing angles.
  • For a detailed description of step S3213, please refer to the relevant description of step S1213 in the above embodiment of the training method for an image segmentation model, which will not be repeated here.
  • because the first blood vessel segmentation result corresponding to each viewing angle is predicted based on the region prediction result corresponding to that viewing angle, the region prediction results can be used more fully in obtaining the first segmentation results, which helps improve the segmentation accuracy of the image segmentation model.
  • the "using the fusion sub-network to fuse the first blood vessel segmentation results corresponding to each view to obtain the second blood vessel segmentation result of the target medical image" mentioned in the above steps specifically includes steps S3221 and S3222.
  • Step S3221 Based on the first blood vessel segmentation results corresponding to multiple views, obtain fusion weight information corresponding to each view.
  • the fusion weight of each first image point corresponding to the viewing angle may be obtained based on the first blood vessel segmentation result of the viewing angle.
  • For a detailed description of step S3221, please refer to the related description of step S1221 in the embodiment of the above-mentioned image segmentation model training method, which will not be repeated here.
  • Step S3222 Based on the fusion weight information corresponding to each viewing angle, fuse the first blood vessel segmentation results corresponding to multiple viewing angles to obtain the second blood vessel segmentation result of the target medical image.
  • for each first image point, the prediction information corresponding to each viewing angle of the first image point can be weighted based on the fusion weights of the first image point corresponding to each viewing angle, to obtain the second prediction information of the second image point in the target medical image corresponding to the first image point.
  • For a detailed description of step S3222, please refer to the related description of step S1222 in the embodiment of the above-mentioned image segmentation model training method, which will not be repeated here.
  • in this way, the information of the first blood vessel segmentation results corresponding to multiple viewing angles can be more fully utilized, which helps improve the segmentation accuracy of the image segmentation model.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • FIG. 9 is a schematic diagram of an embodiment of a training device for an image segmentation model of the present application.
  • the image segmentation model training device 90 includes an acquisition module 91 , an image segmentation module 92 and a parameter adjustment module 93 .
  • the acquisition module 91 is configured to acquire a plurality of sample view images extracted from a sample medical image from a plurality of view angles, wherein the sample medical image includes blood vessels.
  • the image segmentation module 92 is configured to use the image segmentation model to perform image segmentation on each sample perspective image, so as to obtain a blood vessel segmentation result related to the sample medical image.
  • the parameter adjustment module 93 is used for adjusting the network parameters of the image segmentation model based on the blood vessel segmentation result.
  • the above-mentioned image segmentation model includes multiple segmentation sub-networks and fusion sub-networks respectively corresponding to multiple viewing angles.
  • the above-mentioned image segmentation module 92 is used to use the image segmentation model to perform image segmentation on each sample viewing angle image to obtain the blood vessel segmentation result related to the sample medical image, including: for each viewing angle, using the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the sample viewing angle image corresponding to the viewing angle, to obtain the first blood vessel segmentation results corresponding to each viewing angle; and using the fusion sub-network to perform fusion processing on the first blood vessel segmentation results corresponding to each viewing angle, to obtain the second blood vessel segmentation result of the sample medical image.
  • the above-mentioned parameter adjustment module 93 is used to adjust the network parameters of the image segmentation model based on the blood vessel segmentation result, including at least one of the following steps: for each viewing angle, adjusting the parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation results corresponding to each viewing angle and the local blood vessel segmentation labeling information corresponding to the viewing angle; and adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the second blood vessel segmentation result and the global blood vessel segmentation labeling information of the sample medical image.
  • the above-mentioned segmentation sub-network includes a feature processing layer, an attention layer, and a prediction layer connected in sequence, and the parameters of the segmentation sub-network corresponding to the perspective that the above-mentioned parameter adjustment module 93 adjusts include the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer.
  • the above-mentioned image segmentation module 92 is used to perform image segmentation on the sample view image corresponding to the viewing angle by using the segmentation sub-network corresponding to the viewing angle to obtain the first blood vessel segmentation results corresponding to each viewing angle, including: using the feature processing layer to perform feature extraction on the sample view image corresponding to the viewing angle, to obtain the sample feature map corresponding to the viewing angle; using the attention layer to process the sample feature map corresponding to the viewing angle, to obtain the region prediction result corresponding to the viewing angle, where the region prediction result corresponding to the viewing angle is used to represent the position of the preset region in the sample view image corresponding to the viewing angle; and using the prediction layer to predict the first blood vessel segmentation results corresponding to each viewing angle based on the region prediction results corresponding to each viewing angle.
  • the above-mentioned local blood vessel segmentation labeling information includes first labeling information indicating whether the first image point of the sample perspective image belongs to a preset category and second labeling information indicating whether the first image point belongs to a preset area, and the preset category includes at least one vascular category and a non-vascular category.
  • the above-mentioned parameter adjustment module 93 is used to adjust the parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation results corresponding to each viewing angle and the local blood vessel segmentation labeling information corresponding to the viewing angle, including at least one of the following steps: adjusting at least the parameters of the attention layer based on the region prediction result corresponding to the viewing angle and the second labeling information corresponding to the viewing angle; and adjusting the parameters of at least one of the feature processing layer, the attention layer, and the prediction layer based on the first blood vessel segmentation results corresponding to each viewing angle and the first labeling information corresponding to the viewing angle.
  • the above-mentioned segmentation sub-network includes at least one processing unit and a prediction layer connected in sequence; each processing unit includes a feature processing layer, at least part of the processing units further include an attention layer connected after the feature processing layer, the prediction layer obtains the first blood vessel segmentation result based on the region prediction result output by at least one attention layer, and the parameters of each attention layer are adjusted based on the region prediction results corresponding to all attention layers and the second annotation information corresponding to the viewing angle.
  • the above-mentioned parameter adjustment module 93 is used to adjust at least the parameters of the attention layer based on the region prediction result corresponding to the view and the second annotation information corresponding to the view, including: using the difference between the region prediction results output by each attention layer and the second annotation information corresponding to the viewing angle to correspondingly obtain the first loss value of each attention layer; fusing the first loss values of the attention layers to obtain the second loss value; and adjusting the parameters of each attention layer based on the second loss value.
  • the above-mentioned first loss value is determined by using a regularized loss function.
  • the above-mentioned parameter adjustment module 93 is used to use the first difference between the region prediction results output by each attention layer and the second annotation information corresponding to the viewing angle to correspondingly obtain the first loss value of each attention layer, including: using the difference corresponding to each attention layer and at least one structural weight to correspondingly obtain the first loss value of each attention layer, where the at least one structural weight is the weight of the attention layer and/or the weight of the segmentation sub-network where the attention layer is located.
  • the above-mentioned parameter adjustment module 93 is used to fuse the first loss values of the attention layers to obtain the second loss value, including: using the loss weight of each attention layer to weight the first loss value of each attention layer, to obtain the second loss value.
  • the above fusion sub-network includes a weight determination layer and a fusion output layer.
  • the parameters of the fusion sub-network that the above-mentioned parameter adjustment module 93 adjusts include parameters of the weight determination layer and/or the fusion output layer.
  • the above-mentioned image segmentation module 92 is used to use the fusion sub-network to perform fusion processing on the first blood vessel segmentation results corresponding to each viewing angle to obtain the second blood vessel segmentation result of the sample medical image, including: using the weight determination layer to process the first blood vessel segmentation results corresponding to the multiple viewing angles, to obtain the fusion weight information corresponding to each viewing angle; and using the fusion output layer to fuse the first blood vessel segmentation results corresponding to the multiple viewing angles based on the fusion weight information corresponding to each viewing angle, to obtain the second blood vessel segmentation result of the sample medical image.
  • the above-mentioned global blood vessel segmentation labeling information includes third labeling information indicating whether the second image points of the sample medical image belong to a preset category
  • the second blood vessel segmentation result includes prediction information indicating whether each second image point belongs to a preset category
  • the preset category includes at least one vascular category and a non-vascular category.
  • the above-mentioned parameter adjustment module 93 is used to adjust the parameters of each segmentation sub-network and/or the fusion sub-network based on the second blood vessel segmentation result and the global blood vessel segmentation labeling information of the sample medical image, including: determining the position weight of each second image point based on the positional relationship between each second image point and the preset area of the blood vessel in the sample medical image; obtaining the third loss value of each second image point based on the prediction information and the third labeling information corresponding to each second image point; weighting the third loss value of each second image point by using the position weight of each second image point, to obtain the fourth loss value; and adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the fourth loss value.
  • the above-mentioned parameter adjustment module 93 is used to determine the position weight of each second image point based on the positional relationship between each second image point and the preset area of the blood vessel in the sample medical image, including: determining the reference distance of each second image point, where the reference distance of a second image point belonging to the blood vessel category is the distance between the second image point and the preset area of the blood vessel in the sample medical image, and the reference distance of a second image point belonging to the non-vascular category is a preset distance value; and determining the position weight of each second image point based on the reference distance of each second image point.
  • the global blood vessel segmentation labeling information further includes fourth labeling information indicating whether the second image point belongs to a preset area of a blood vessel.
  • the parameter adjustment module 93 is also used to determine the position of the preset area in the sample medical image by using the fourth annotation information, and to determine, by using the second blood vessel segmentation result or the third labeling information, whether each second image point in the sample medical image belongs to the blood vessel category or the non-vascular category.
  • the above-mentioned preset area is the center line, and/or, at least one type of blood vessel includes at least one of arteries and veins.
  • the above-mentioned sample medical image is a three-dimensional image obtained by scanning an organ; and/or, the plurality of viewing angles include multiple of the transverse (axial), sagittal, and coronal viewing angles; wherein the above-mentioned acquisition module 91 is used to obtain multiple sample perspective images extracted from the sample medical image from multiple perspectives, including: for each perspective, extracting several sub-sample images from the sample medical image along the perspective, and splicing the several sub-sample images of the perspective to obtain the sample perspective image corresponding to the perspective.
  • in this way, the trained image segmentation model can use the image information of sample view images of different views for blood vessel segmentation in subsequent applications, which helps improve the accuracy of blood vessel segmentation.
  • FIG. 10 is a schematic frame diagram of an embodiment of an image segmentation device of the present application.
  • the image segmentation device 100 includes an acquisition module 101 and an image segmentation module 102 .
  • the acquiring module 101 is used to acquire a plurality of target view images extracted from a target medical image from multiple views, wherein the target medical image contains blood vessels;
  • the image segmentation module 102 is used to perform image segmentation on each target view image by using an image segmentation model, to obtain a blood vessel segmentation result related to the target medical image.
  • the above-mentioned image segmentation model includes multiple segmentation sub-networks and fusion sub-networks respectively corresponding to multiple viewing angles.
  • the above-mentioned image segmentation module 102 is used to use the image segmentation model to perform image segmentation on each target viewing angle image, so as to obtain the blood vessel segmentation result related to the target medical image, including: for each viewing angle, using the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the target viewing angle image corresponding to the viewing angle, to obtain the first blood vessel segmentation results corresponding to each viewing angle; and using the fusion sub-network to fuse the first blood vessel segmentation results corresponding to each viewing angle, to obtain the second blood vessel segmentation result of the target medical image.
  • the above-mentioned image segmentation module 102 is configured to use the segmentation sub-network corresponding to the viewing angle to perform image segmentation on the target viewing angle image corresponding to the viewing angle and obtain the first blood vessel segmentation results corresponding to each viewing angle, including: performing feature extraction on the sample viewing angle image corresponding to the viewing angle, to obtain the sample feature map corresponding to the viewing angle; processing the sample feature map corresponding to the viewing angle, to obtain the region prediction result corresponding to the viewing angle, where the region prediction result corresponding to the viewing angle is used to represent the position of the preset region in the sample viewing angle image corresponding to the viewing angle; and predicting the first blood vessel segmentation results corresponding to each viewing angle based on the region prediction results corresponding to the viewing angles.
  • the above-mentioned image segmentation module 102 is used to use the fusion sub-network to perform fusion processing on the first blood vessel segmentation results corresponding to each viewing angle to obtain the second blood vessel segmentation result of the target medical image, including: obtaining the fusion weight information corresponding to each viewing angle based on the first blood vessel segmentation results corresponding to the multiple viewing angles; and fusing the first blood vessel segmentation results corresponding to the multiple viewing angles based on the fusion weight information corresponding to each viewing angle, to obtain the second blood vessel segmentation result of the target medical image.
  • the above-mentioned processing of the sample feature map corresponding to the viewing angle to obtain the prediction result of the area corresponding to the viewing angle is performed by the attention layer of the segmentation sub-network; and/or, the preset area is the centerline of the blood vessel; and/or,
  • the area prediction result includes probability information that each first image point in the target perspective image is a preset area.
  • the above-mentioned first blood vessel segmentation result corresponding to a viewing angle includes first prediction information indicating whether each first image point in the target viewing angle image corresponding to the viewing angle belongs to a preset category; the second blood vessel segmentation result includes second prediction information indicating whether each second image point in the target medical image belongs to a preset category; and the preset category includes at least one blood vessel category and a non-vessel category.
  • the above-mentioned image segmentation module 102 is used to obtain the fusion weight information corresponding to each view based on the first blood vessel segmentation results corresponding to multiple views, including: for each view, obtaining the fusion weight of each first image point corresponding to the view based on the first blood vessel segmentation result of the view; and the above-mentioned image segmentation module 102 is used to fuse the first blood vessel segmentation results corresponding to multiple viewing angles based on the fusion weight information corresponding to each viewing angle, to obtain the second blood vessel segmentation result of the target medical image, including: for each first image point, weighting the prediction information of the first image point corresponding to each viewing angle based on the fusion weights of the first image point corresponding to each viewing angle, to obtain the second prediction information of the second image point in the target medical image corresponding to the first image point.
  • the above-mentioned image segmentation model is obtained through training using the above-mentioned image segmentation model training method.
  • the above-mentioned target medical image is a three-dimensional image obtained by scanning an organ;
  • the above-mentioned multiple viewing angles include multiple of the transverse, sagittal, and coronal viewing angles;
  • acquiring a plurality of target perspective images extracted from the target medical image from multiple perspectives includes: for each perspective, extracting several sub-target images from the target medical image along the perspective, and splicing the several sub-target images of the perspective to obtain the target perspective image corresponding to the perspective.
  • the image segmentation model can use the image information of the target perspective images from multiple perspectives to perform blood vessel segmentation, which helps to improve the segmentation accuracy of the image segmentation model.
  • FIG. 11 is a schematic frame diagram of an embodiment of an electronic device of the present application.
  • the electronic device 110 includes a memory 111 and a processor 112 coupled to each other, and the processor 112 is used to execute the program instructions stored in the memory 111, so as to realize the steps of any one of the above-mentioned image segmentation model training method embodiments, or to realize any of the above-mentioned Steps in the embodiment of the image segmentation method.
  • the electronic device 110 may include, but is not limited to: a microcomputer and a server.
  • the electronic device 110 may also include mobile devices such as notebook computers and tablet computers, which are not limited here.
  • the processor 112 is configured to control itself and the memory 111 to implement the steps in any of the above embodiments of the image segmentation model training method, or to implement the steps in any of the above embodiments of the image segmentation method.
  • the processor 112 may also be called a CPU (Central Processing Unit, central processing unit).
  • the processor 112 may be an integrated circuit chip with signal processing capabilities.
  • the processor 112 can also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the processor 112 may also be jointly implemented by multiple integrated circuit chips.
  • FIG. 12 is a schematic frame diagram of an embodiment of a computer-readable storage medium of the present application.
  • the computer-readable storage medium 120 stores program instructions 121 that can be executed by the processor, and the program instructions 121 are used to implement the steps of any of the above-mentioned image segmentation model training method embodiments, or to implement any of the above-mentioned image segmentation method embodiments. step.
  • the computer program product of the present application includes computer-readable codes, or a non-volatile computer-readable storage medium bearing computer-readable codes.
  • when the computer-readable codes run in a processor of an electronic device, the processor in the electronic device executes them to implement the above method.
  • the functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for specific implementation, reference may be made to the description of the method embodiments above, and for brevity, details are not repeated here.
  • the disclosed methods and devices may be implemented in other ways.
  • the device implementations described above are only illustrative.
  • the division of modules or units is only a logical functional division; in actual implementation, there may be other division methods.
  • units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

The present application discloses an image segmentation method and a training method and apparatus, and a device, for a related model. The training method for an image segmentation model includes: acquiring a plurality of sample view images respectively extracted from a sample medical image from a plurality of viewing angles, where the sample medical image contains blood vessels; performing image segmentation on each sample view image by using the image segmentation model, so as to obtain a blood vessel segmentation result related to the sample medical image; and adjusting network parameters of the image segmentation model based on the blood vessel segmentation result. The above solution can improve the accuracy of blood vessel segmentation.

Description

Image segmentation method, and training method and apparatus for related model, and device
This application claims priority to Chinese Patent Application No. 202111274342.9, filed on October 29, 2021 and entitled "Image segmentation method, and training method and apparatus for related model, and device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular to an image segmentation method, and a training method and apparatus for a related model, as well as a device, a storage medium, and a computer program product.
Background
Blood vessel segmentation in medical image processing is currently a hot topic. By segmenting blood vessels, doctors can quickly understand the relevant condition of the vessels and carry out corresponding simulated operations. For example, the segmentation results can assist doctors in preoperative planning and simulated surgery, which helps reduce risks during the operation and improve its success rate.
However, existing blood vessel segmentation techniques all perform segmentation based on medical images from a single viewing angle, so the accuracy of blood vessel segmentation is not high, which greatly limits the further application of blood vessel segmentation technology.
Therefore, how to improve the accuracy of blood vessel segmentation is of extremely important significance for promoting the further development and application of blood vessel segmentation technology.
Summary of the Invention
The present application provides at least an image segmentation method, and a training method and apparatus for a related model, and a device.
A first aspect of the present application provides a training method for an image segmentation model. The method includes: acquiring a plurality of sample view images respectively extracted from a sample medical image from a plurality of viewing angles, where the sample medical image contains blood vessels; performing image segmentation on each sample view image by using the image segmentation model, so as to obtain a blood vessel segmentation result related to the sample medical image; and adjusting network parameters of the image segmentation model based on the blood vessel segmentation result.
Therefore, by training the image segmentation model with sample view images of different viewing angles, the trained image segmentation model can, in subsequent applications, use the image information of sample view images of different viewing angles to perform blood vessel segmentation, which helps improve the accuracy of blood vessel segmentation.
The above image segmentation model includes a plurality of segmentation sub-networks respectively corresponding to the plurality of viewing angles and a fusion sub-network. The above performing image segmentation on each sample view image by using the image segmentation model to obtain a blood vessel segmentation result related to the sample medical image includes: for each viewing angle, performing image segmentation on the sample view image corresponding to the viewing angle by using the segmentation sub-network corresponding to the viewing angle, to obtain first blood vessel segmentation results corresponding to the respective viewing angles; and performing fusion processing on the first blood vessel segmentation results corresponding to the respective viewing angles by using the fusion sub-network, to obtain a second blood vessel segmentation result of the sample medical image. The above adjusting network parameters of the image segmentation model based on the blood vessel segmentation result includes at least one of the following steps: for each viewing angle, adjusting parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation results corresponding to the respective viewing angles and local blood vessel segmentation annotation information corresponding to the viewing angle; and adjusting parameters of each segmentation sub-network and/or the fusion sub-network based on the second blood vessel segmentation result and global blood vessel segmentation annotation information of the sample medical image.
Therefore, by providing a segmentation sub-network corresponding to each viewing angle to perform image segmentation on the sample view images, and using the fusion sub-network to fuse the first blood vessel segmentation results corresponding to the respective viewing angles, the image segmentation model can realize blood vessel segmentation based on the image information of different viewing angles. In addition, by using the first blood vessel segmentation results and the second blood vessel segmentation result together with the annotation information corresponding to each segmentation result, training of the segmentation sub-networks and the fusion sub-network can be realized.
The above segmentation sub-network includes a feature processing layer, an attention layer, and a prediction layer connected in sequence, and the adjusted parameters of the segmentation sub-network corresponding to the viewing angle include parameters of at least one of the feature processing layer, the attention layer, and the prediction layer. The above performing image segmentation on the sample view image corresponding to the viewing angle by using the segmentation sub-network corresponding to the viewing angle to obtain the first blood vessel segmentation results corresponding to the respective viewing angles includes: performing feature extraction on the sample view image corresponding to the viewing angle by using the feature processing layer, to obtain a sample feature map corresponding to the viewing angle; processing the sample feature map corresponding to the viewing angle by using the attention layer, to obtain a region prediction result corresponding to the viewing angle, where the region prediction result corresponding to the viewing angle is used to represent the position of a preset region in the sample view image corresponding to the viewing angle; and predicting, by using the prediction layer, the first blood vessel segmentation results corresponding to the respective viewing angles based on the region prediction results corresponding to the respective viewing angles.
Therefore, by using the attention layer to output the position of the preset region, the segmentation sub-network can pay more attention to the image information near the preset region in subsequent image segmentation, so as to improve the sensitivity of the segmentation sub-network to blood vessel feature information, which in turn helps improve the accuracy of blood vessel segmentation.
The above local blood vessel segmentation annotation information includes first annotation information indicating whether a first image point of the sample view image belongs to a preset category and second annotation information indicating whether the first image point belongs to the preset region, and the preset category includes at least one blood vessel category and a non-vessel category. The above adjusting the parameters of the segmentation sub-network corresponding to the viewing angle based on the first blood vessel segmentation results corresponding to the respective viewing angles and the local blood vessel segmentation annotation information corresponding to the viewing angle includes at least one of the following steps: adjusting at least parameters of the attention layer based on the region prediction result corresponding to the viewing angle and the second annotation information corresponding to the viewing angle; and adjusting parameters of at least one of the feature processing layer, the attention layer, and the prediction layer based on the first blood vessel segmentation results corresponding to the respective viewing angles and the first annotation information corresponding to the viewing angle.
Therefore, based on the first blood vessel segmentation results corresponding to the respective viewing angles and the first annotation information corresponding to the viewing angle, training of at least one of the feature processing layer, the attention layer, and the prediction layer can be realized.
The above segmentation sub-network includes at least one processing unit and a prediction layer connected in sequence; each processing unit includes a feature processing layer, and at least some of the processing units further include an attention layer connected after the feature processing layer; the prediction layer obtains the first blood vessel segmentation result based on the region prediction result output by at least one attention layer, and the parameters of each attention layer are adjusted based on the region prediction results corresponding to all the attention layers and the second annotation information corresponding to the viewing angle. The above adjusting at least the parameters of the attention layer based on the region prediction result corresponding to the viewing angle and the second annotation information corresponding to the viewing angle includes: using the differences between the region prediction results output by the attention layers and the second annotation information corresponding to the viewing angle to correspondingly obtain a first loss value of each attention layer; fusing the first loss values of the attention layers to obtain a second loss value; and adjusting the parameters of each attention layer based on the second loss value.
Therefore, by using the second loss value to adjust the parameters of each attention layer, training of the attention layers can be realized.
The above first loss value is determined by using a regularized loss function. The above using the first differences between the region prediction results output by the attention layers and the second annotation information corresponding to the viewing angle to correspondingly obtain the first loss value of each attention layer includes: using the difference corresponding to each attention layer and at least one structural weight to correspondingly obtain the first loss value of each attention layer, where the at least one structural weight is the weight of the attention layer and/or the weight of the segmentation sub-network where the attention layer is located. The above fusing the first loss values of the attention layers to obtain the second loss value includes: weighting the first loss values of the attention layers by using the loss weights of the attention layers, to obtain the second loss value.
Therefore, by using the regularized loss function to further constrain the first loss value, the feature extraction capability of the attention layer for the blood vessel region can be strengthened.
The loss weight of an attention layer closer to the prediction layer is larger.
Therefore, by setting larger loss weights for attention layers closer to the prediction layer, the obtained second loss value can be more reasonable.
The above fusion sub-network includes a weight determination layer and a fusion output layer, and the adjusted parameters of the fusion sub-network include parameters of the weight determination layer and/or the fusion output layer. The above performing fusion processing on the first blood vessel segmentation results corresponding to the respective viewing angles by using the fusion sub-network to obtain the second blood vessel segmentation result of the sample medical image includes: processing the first blood vessel segmentation results corresponding to the plurality of viewing angles by using the weight determination layer, to obtain fusion weight information corresponding to each viewing angle; and fusing the first blood vessel segmentation results corresponding to the plurality of viewing angles based on the fusion weight information corresponding to each viewing angle by using the fusion output layer, to obtain the second blood vessel segmentation result of the sample medical image.
Therefore, by using the weight determination layer to combine the first blood vessel segmentation result information corresponding to the plurality of viewing angles to obtain the fusion weight information, the fusion sub-network can output different fusion weight information according to different first blood vessel segmentation results, realizing a soft fusion of the first blood vessel segmentation result information corresponding to the plurality of viewing angles, which helps improve the accuracy of blood vessel segmentation. In addition, by using the fusion output layer to fuse the fusion weight information with the first blood vessel segmentation results corresponding to the plurality of viewing angles, the fusion sub-network can use the image information of sample view images of different viewing angles for blood vessel segmentation, which helps improve the accuracy of blood vessel segmentation. Furthermore, since the fusion weight information combines the first blood vessel segmentation result information corresponding to the plurality of viewing angles, misclassification of blood vessel branches can be reduced when the fusion weight information is subsequently used for blood vessel segmentation.
The above global blood vessel segmentation annotation information includes third annotation information indicating whether a second image point of the sample medical image belongs to a preset category, the second blood vessel segmentation result includes prediction information indicating whether each second image point belongs to the preset category, and the preset category includes at least one blood vessel category and a non-vessel category. The above adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the second blood vessel segmentation result and the global blood vessel segmentation annotation information of the sample medical image includes: determining a position weight of each second image point based on the positional relationship between each second image point and a preset region of the blood vessel in the sample medical image; obtaining a third loss value of each second image point based on the prediction information and the third annotation information corresponding to each second image point; weighting the third loss value of each second image point by using the position weight of each second image point, to obtain a fourth loss value; and adjusting the parameters of each segmentation sub-network and/or the fusion sub-network based on the fourth loss value.
Therefore, by weighting the third loss value of each second image point with the position weight of the second image point, the network can be made to pay more attention to second image points with large position weights during training, thereby improving the blood vessel segmentation accuracy of the network in regions where the second image points have large position weights.
The above determining the position weight of each second image point based on the positional relationship between each second image point and the preset region of the blood vessel in the sample medical image includes: determining a reference distance of each second image point, where the reference distance of a second image point belonging to the blood vessel category is the distance between the second image point and the preset region of the blood vessel in the sample medical image, and the reference distance of a second image point belonging to the non-vessel category is a preset distance value; and determining the position weight of each second image point based on the reference distance of each second image point.
Therefore, by determining the reference distance of each second image point, the position weight of each second image point can be determined based on the reference distance, so that the position weight can reflect the distance characteristics of the reference distance.
The larger the reference distance of a second image point belonging to the blood vessel category, the larger its position weight; the position weight of a second image point belonging to the non-vessel category is a preset weight value. The above global blood vessel segmentation annotation information further includes fourth annotation information indicating whether a second image point belongs to the preset region of the blood vessel. Before determining the reference distance of each second image point, the training method for the image segmentation model further includes: determining the position of the preset region in the sample medical image by using the fourth annotation information; and determining, by using the second blood vessel segmentation result or the third annotation information, whether each second image point in the sample medical image belongs to the blood vessel category or the non-vessel category.
Therefore, by using the fourth annotation information to determine the position of the preset region, and using the second blood vessel segmentation result or the third annotation information to determine whether a second image point belongs to the blood vessel category or the non-vessel category, the position weight of the second image point can subsequently be determined.
The above preset region is a centerline, and/or the at least one blood vessel category includes at least one of artery and vein.
Since the region near the blood vessel centerline is also a blood vessel region, setting the preset region as the blood vessel centerline allows the segmentation sub-network to pay more attention to the region near the blood vessel centerline when performing blood vessel segmentation on the sample view image, which helps improve the accuracy of blood vessel segmentation. In addition, by specifying that the at least one blood vessel category includes at least one of artery and vein, the image segmentation model can perform blood vessel segmentation for arteries and veins.
The above sample medical image is a three-dimensional image obtained by scanning an organ; the above plurality of viewing angles include multiple of the transverse, sagittal, and coronal viewing angles; and the above acquiring a plurality of sample view images respectively extracted from the sample medical image from a plurality of viewing angles includes: for each viewing angle, extracting several sub-sample images from the sample medical image along the viewing angle, and splicing the several sub-sample images of the viewing angle to obtain the sample view image corresponding to the viewing angle.
Therefore, by extracting a corresponding sample view image for each viewing angle, image information corresponding to different viewing angles can be obtained, and the subsequent image segmentation model can perform blood vessel segmentation based on the image information of different viewing angles, which helps improve the accuracy of blood vessel segmentation.
本申请第二方面提供了一种图像分割方法,方法包括:获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,其中,目标医学图像包含血管;利用图像分割模型对各目标视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果。
因此,通过利用图像分割模型对各目标视角图像进行图像分割,使得图像分割模型可以利用多个视角的目标视角图像的图像信息进行血管分割,有助于提高图像分割模型的分割准确度。
其中,上述的图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络;上述的利用图像分割模型对各样本视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果,包括:对于每个视角,利用视角对应的分割子网络对视角对应的目标视角图像进行图像分割,得到各视角对应的各第一血管分割结果;利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到目标医学图像的第二血管分割结果。
因此,通过利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,使得融合子网络能够利用多个视角的第一血管分割结果的预测信息,有助于提高图像分割模型的分割准确度。
其中,上述的利用视角对应的分割子网络对视角对应的目标视角图像进行图像分割,得到各视角对应的各第一血管分割结果,包括:对视角对应的样本视角图像进行特征提取,得到视角对应的样本特征图;对视角对应的样本特征图进行处理,得到视角对应的区域预测结果,其中,视角对应的区域预测结果用于表示视角对应的样本视角图像中的预设区域的位置;基于视角对应的区域预测结果预测得到各视角对应的各第一血管分割结果;上述的利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到目标医学图像的第二血管分割结果,包括:基于多个视角对应的第一血管分割结果,得到各视角对应的融合权重信息;基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到目标医学图像的第二血管分割结果。
因此,通过基于视角对应的区域预测结果预测得到各视角对应的各第一血管分割结果,使得可以更多的利用区域预测结果来得到第一分割结果时,有助于提高图像分割模型的分割准确度。此外,通过结合多个视角对应的第一血管分割结果信息来得到融合权重信息,使得融合子网络能够根据不同的第一血管分割结果输出不同的融合权重信息,实现了多个视角对应的第一血管分割结果信息的软融合,有助于提高血管分割的准确度。此外,通过将融合权重信息和多个视角对应的第一血管分割结果进行融合,使得融合子网络能够利用不同视角的样本视角图的图像信息进行血管分割,有助于提高血管分割的准确度。
其中,上述的对视角对应的样本特征图进行处理,得到视角对应的区域预测结果是由分割子网络的注意力层执行的;和/或,预设区域为血管的中心线;和/或,区域预测结果包括目标视角图像中各第一图像点为预设区域的概率信息。
因此,通过利用注意力层对视角对应的样本特征图进行处理,得到视角对应的区域预测结果,可以使得分割子网络更加关注预设区域的特征信息。此外,因为血管中心线附近的区域也是血管区域,因此通过将预设区域设置为血管中心线,可以使得分割子网络在对样本视角图像进行血管分割时,可以更加关注血管中心线附近的区域,以此有助于提高血管分割的准确度。
其中,上述的视角对应的第一血管分割结果包括表示视角对应的目标视角图像中各第一图像点是否属于预设类别的第一预测信息,第二血管分割结果包括表示目标医学图像中各第二图像点是否属于预设类别的第二预测信息,预设类别包括至少一种血管类别和非血管类别;上述的基于多个视角对应的第一血管分割结果,得到各视角对应的融合权重信息,包括:对于每个视角,基于视角的第一血管分割结果,得到视角对应的各第一图像点的融合权重;上述的基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到目标医学图像的第二血管分割结果,包括:对于各第一图像点,基于第一图像点对应各视角的融合权重对第一图像点对应各视角的预测信息进行加权处理,得到目标医学图像中与第一图像点对应的第二图像点的第二预测信息。
因此,通过结合多个视角对应的第一血管分割结果信息来得到融合权重信息,实现了根据不同的第一血管分割结果输出不同的融合权重信息,实现了多个视角对应的第一血管分割结果信息的软融合,有助于提高血管分割的准确度。此外,通过将融合权重信息和多个视角对应的第一血管分割结果进行融合,使得融合子网络能够利用不同视角的样本视角图的图像信息进行血管分割,有助于提高血管分割的准确度。此外,由于融合权重信息结合了多个视角对应的第一血管分割结果信息,使得后续利用融合权重信息进行血管分割时,可以减少血管分支错分的情况。
其中,上述的图像分割模型为利用上述第一方面描述的图像分割模型的训练方法训练得到的。
因此,通过限定图像分割模型是利用图像分割模型的训练方法的实施例得到的,使得在利用经过训练的图像分割模型进行血管分割时,血管分割的准确度更高。
其中,上述的目标医学图像为对器官扫描得到的三维图像;上述的多个视角包括横断位视角、矢状位视角、冠状位视角中的多种;上述的获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,包括:对于每个视角,从视角对目标医学图像提取得到视角的若干子目标图像,并将视角的若干子目标图像进行拼接,得到视角对应的目标视角图像。
因此,通过对每个视角都提取得到对应的目标视角图像,可以获得不同视角对应的图像信息,后续图像分割模型便能基于不同视角的图像信息进行血管分割,有助于提高血管分割的准确度。
本申请第三方面提供了一种图像分割模型的训练装置,训练装置包括获取模块、图像分割模块和参数调整模块。获取模块用于获取分别从多个视角对样本医学图像提取得到的多个样本视角图像,其中,样本医学图像包含血管;图像分割模块用于利用图像分割模型对各样本视角图像进行图像分割,以得到与样本医学图像相关的血管分割结果;参数调整模块用于基于血管分割结果,调整图像分割模型的网络参数。
本申请第四方面提供了一种图像分割装置,图像分割装置包括获取模块和图像分割模块。获取模块用于获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,其中,目标医学图像包含血管;图像分割模块用于利用图像分割模型对各目标视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果。
本申请第五方面提供了一种电子设备,包括相互耦接的存储器和处理器,处理器用于执行存储器中存储的程序指令,以实现上述第一方面中的图像分割模型的训练方法,或实现上述第二方面中的图像分割方法。
本申请第六方面提供了一种计算机可读存储介质,其上存储有程序指令,程序指令被处理器执行时实现上述第一方面中的图像分割模型的训练方法,或实现上述第二方面中的图像分割方法。
本申请第七方面提供了一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备的处理器中运行时,所述电子设备中的处理器用于实现上述第一方面中的图像分割模型的训练方法,或实现第二方面中的图像分割方法。
上述方案,通过使用不同视角的样本视角图对图像分割模型进行训练,使得训练后的图像分割模型在后续应用中,能够利用不同视角的样本视角图的图像信息进行血管分割,有助于提高血管分割的准确度。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本申请的实施例,并与说明书一起用于说明本申请的技术方案。
图1是本申请图像分割模型的训练方法一实施例的第一流程示意图;
图2是本申请图像分割模型的训练方法一实施例的第二流程示意图;
图3是本申请图像分割模型的训练方法一实施例的第三流程示意图;
图4是本申请图像分割模型的训练方法中分割子网络的一结构示意图;
图5是本申请图像分割模型的训练方法一实施例的第四流程示意图;
图6是本申请图像分割模型的训练方法一实施例的第五流程示意图;
图7是本申请图像分割模型的训练方法中图像分割模型的一结构示意图;
图8是本申请图像分割方法实施例的第一流程示意图;
图9是本申请图像分割模型的训练装置一实施例的框架示意图;
图10是本申请图像分割装置一实施例的框架示意图;
图11是本申请电子设备一实施例的框架示意图;
图12是本申请计算机可读存储介质一实施例的框架示意图。
具体实施方式
下面结合说明书附图,对本申请实施例的方案进行详细说明。
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、接口、技术之类的具体细节,以便透彻理解本申请。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。此外,本文中的“多”表示两个或者多于两个。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
本申请公开实施例中方法步骤的执行主体可以为硬件执行,或者通过处理器运行计算机可执行代码的方式执行。
请参阅图1,图1是本申请图像分割模型的训练方法一实施例的第一流程示意图。具体而言,可以包括如下步骤:
步骤S11:获取分别从多个视角对样本医学图像提取得到的多个样本视角图像。
在本申请中,样本医学图像可以是三维图像,具体可以是器官扫描得到的三维图像。例如,可以通过电子计算机断层扫描(Computed Tomography,CT)成像技术,进行三维成像,以此得到样本医学图像。在样本医学图像中包括血管,后续便可以对血管进行分割。样本医学图像例如是肺部的三维图像,或者是心脏的三维图像等。在本申请中,样本医学图像、样本视角图像等三维图像的构成单位为体素。
在本申请中,多个视角指至少包括两个视角。在一个实施方式中,多个视角包括横断位视角、矢状位视角、冠状位视角中的多种。从多个视角对样本医学图像提取得到的多个样本视角图像,即是在视角的方向上,对样本医学图像进行裁剪,以此得到多个样本视角图像。
在一个实施方式中,对于每个视角,可以从视角对样本医学图像提取得到视角的若干子样本图像,并将视角的若干子样本图像进行拼接,以此得到视角对应的样本视角图像。在一个具体实施方式中,从视角对样本医学图像提取得到视角的若干子样本图像,可以是以滑窗的形式进行图像提取,以此获得若干子样本图像。例如,对于横断位视角而言,可以从横断位方向,提取一定大小的若干子样本图像,然后再将这些子样本图像进行拼接,以此便能得到样本视角图像。在一个例子中,滑窗的大小为128*128*128,从横断位方向提取了4个128*128*128大小的子样本图像,通过将这4个子样本图像拼接为128*128*512大小的图像,便可得到样本视角图像。因此,通过对每个视角都提取得到对应的样本视角图像,可以获得不同视角对应的图像信息,后续图像分割模型便能基于不同视角的图像信息进行血管分割,有助于提高血管分割的准确度。
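为便于理解上述滑窗提取与拼接过程,下面给出一个示意性的NumPy代码草图。其中的函数名 extract_view_image、滑窗大小128、子样本图像数量4等均为结合上文例子所作的假设,并非本申请限定的实现:

```python
import numpy as np

def extract_view_image(volume, axis, patch_size=128, num_patches=4):
    """沿指定视角方向以滑窗形式提取若干子样本图像并拼接(示意实现)。"""
    patches = []
    for i in range(num_patches):
        slicer = [slice(0, patch_size)] * 3          # 其余两个维度取固定窗口,仅作示意
        slicer[axis] = slice(i * patch_size, (i + 1) * patch_size)  # 沿视角方向滑窗
        patch = volume[tuple(slicer)]
        pad = [(0, patch_size - s) for s in patch.shape]             # 越界部分补零
        patches.append(np.pad(patch, pad))
    # 将 num_patches 个 128*128*128 的子样本图像沿视角方向拼接,例如得到 128*128*512
    return np.concatenate(patches, axis=axis)

volume = np.zeros((512, 512, 512), dtype=np.float32)
view_image = extract_view_image(volume, axis=0)      # axis 对应的视角方向仅为示例
print(view_image.shape)                              # (512, 128, 128)
```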
在一个实施方式中,样本医学图像可以是对初始样本医学图像进行重采样得到的。通过对初始样本医学图像进行重采样,可以使得样本医学图像的分辨率符合要求,有助于提高血管分割的准确度。进一步地,还可以对样本医学图像中的像素值进行归一化操作,方便于后续图像分割模型的训练。
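下面给出一个对初始样本医学图像进行重采样并归一化的示意性代码草图,其中目标分辨率 target_spacing、线性插值阶数等均为示例性假设:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_and_normalize(image, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """对初始样本医学图像重采样到目标分辨率,并将像素值归一化(示意实现)。"""
    factors = [s / t for s, t in zip(spacing, target_spacing)]   # 各维度缩放系数
    resampled = zoom(image.astype(np.float32), factors, order=1) # 线性插值重采样
    lo, hi = float(resampled.min()), float(resampled.max())
    return (resampled - lo) / (hi - lo + 1e-8)                   # 归一化到 [0, 1]
```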
在一个实施方式中,在得到多个样本视角图像以后,还可以对样本视角图像进行旋转、平移、镜像、缩放等操作,以此实现数据增强,并且能够平衡样本视角图像中的正负样本,达到扩增数据量的目的,有助于提高图像分割模型的泛化性、降低过拟合的可能。
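数据增强部分可以参考如下示意性代码草图,其中各增强操作的触发概率与幅度均为示例性假设:

```python
import numpy as np

def augment(view_image, rng):
    """对样本视角图像做镜像、旋转、平移、缩放等随机数据增强(示意实现)。"""
    if rng.random() < 0.5:
        view_image = np.flip(view_image, axis=int(rng.integers(0, 3)))          # 随机镜像
    view_image = np.rot90(view_image, k=int(rng.integers(0, 4)), axes=(1, 2))   # 随机旋转
    view_image = np.roll(view_image, int(rng.integers(-8, 9)), axis=0)          # 随机平移(示意)
    scale = 1.0 + 0.1 * (rng.random() - 0.5)                                    # 随机强度缩放
    return np.ascontiguousarray(view_image) * scale

augmented = augment(np.zeros((512, 128, 128), dtype=np.float32), np.random.default_rng(0))
```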
步骤S12:利用图像分割模型对各样本视角图像进行图像分割,以得到与样本医学图像相关的血管分割结果。
通过将得到的样本视角图像输入到图像分割模型,并利用图像分割模型对各样本视角图像进行图像分割,使得图像分割模型可以利用不同视角的样本视角图的图像信息,以此获得更多关于血管的特征信息,最终输出与样本医学图像相关的血管分割结果。
在一个实施方式中,血管分割结果可以包括样本医学图像中的动脉和静脉的分割结果。具体的,血管分割结果可以是样本医学图像中的图像点属于动脉、静脉或背景的结果。
步骤S13:基于血管分割结果,调整图像分割模型的网络参数。
在一个具体实施方式中,血管标签信息可以认为是样本医学图像中每一个像素点为血管或者是背景的分类结果。此外,对于分类为血管的像素点,血管标签信息还可以包括血管是动脉或者静脉的分类信息。
在得到血管分割结果以后,就可以根据血管分割结果和对应的血管标签信息的差异,调整图像分割模型的网络参数,实现了利用不同视角的样本视角图对图像分割模型的训练。
因此,通过使用不同视角的样本视角图对图像分割模型进行训练,使得训练后的图像分割模型在后续应用中,能够利用不同视角的样本视角图的图像信息进行血管分割,有助于提高血管分割的准确度。
请参阅图2,图2是本申请图像分割模型的训练方法一实施例的第二流程示意图。在本实施例中,上述提及的图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络,即分割子网络的数量与视角的相同,全部分割子网络的输出结果可以输入到融合子网络中。在此情况下,上述步骤提及的“利用图像分割模型对各样本视角图像进行图像分割,以得到与样本医学图像相关的血管分割结果”,具体包括步骤S121和步骤S122。
步骤S121:对于每个视角,利用视角对应的分割子网络对视角对应的样本视角图像进行图像分割,得到各视角对应的各第一血管分割结果。
在本实施例中,每个视角对应的分割子网络都会对与视角对应的样本视角图像进行图像分割的操作,以此得到每个视角对应的第一血管分割结果。例如,某一分割子网络与横断位视角对应,则可以将从横断位视角对样本医学图像提取得到的样本视角图像输入到该分割子网络中,以此得到横断位视角的样本视角图像的第一血管分割结果。第一血管分割结果可以是样本视角图像的第一图像点是否属于预设类别的预测结果。预设类别包括至少一种血管类别和非血管类别,血管类别是动脉和静脉。
步骤S122:利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到样本医学图像的第二血管分割结果。
通过将全部分割子网络的输出结果输入到融合子网络,可以利用融合子网络对每一个视角对应的各第一血管分割结果进行融合处理,使得融合子网络能够基于不同视角的图像信息,实现对血管的分割,以得到样本医学图像的第二血管分割结果。第二血管分割结果可以是样本医学图像的第二图像点是否属于预设类别的预测结果。预设类别包括至少一种血管类别和非血管类别,血管类别是动脉和静脉。
在一个具体实施方式中,融合子网络可以是编码-解码结构的网络。具体地,在融合子网络中,编码器和解码器中的每一层的卷积层都可以是带孔卷积层,以此可以获得不同大小感受野的信息。在卷积层后可以连接批标准化层和一个激活层。在编码器的每一层之间可以连接池化层,编码器与解码器之间,以及解码器各层之间,都可以进行上采样。
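下面以PyTorch为例,给出编码-解码结构中单层“带孔卷积+批标准化+激活”的示意性代码草图,其中通道数、膨胀率 dilation、输入为三个视角三类预测的通道拼接等均为示例性假设:

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """融合子网络编码器/解码器中的一层:带孔卷积 + 批标准化 + 激活(示意实现)。"""

    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            # 带孔(空洞)卷积,通过 dilation 扩大感受野;padding 保持空间尺寸不变
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm3d(out_ch),   # 批标准化层
            nn.ReLU(inplace=True),    # 激活层
        )

    def forward(self, x):
        return self.block(x)

pool = nn.MaxPool3d(kernel_size=2)                                  # 编码器各层之间的池化
up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)  # 解码侧上采样
x = torch.zeros(1, 9, 32, 32, 32)   # 假设输入为 3 个视角的 3 类预测按通道拼接(共 9 通道)
y = DilatedConvBlock(9, 16)(x)
```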
因此,通过设置视角对应的分割子网络对样本视角图像进行图像分割,并利用融合子网络对每一个视角对应的各第一血管分割结果进行融合处理,使得图像分割模型能够基于不同视角的图像信息,实现对血管的分割。
对应于图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络的情况,上述步骤提及的“基于血管分割结果,调整图像分割模型的网络参数”,具体可以包括以下至少一个步骤:
步骤S131:对于每个视角,基于各视角对应的各第一血管分割结果与视角对应的局部血管分割标注信息,调整视角对应的分割子网络的参数。
局部血管分割标注信息是该视角对应的样本视角图像中的血管的标签信息。对于每个视角而言,可以利用与各视角对应的各第一血管分割结果和与该视角对应的局部血管分割标注信息,来对分割子网络进行训练。训练分割子网络的方式可以是监督学习的方式,也可以是半监督学习的方式。例如,可以基于第一血管分割结果与对应的局部血管分割标注信息的差异,确定二者的损失值,并根据损失值来调整视角对应的分割子网络的参数。
步骤S132:基于第二血管分割结果与样本医学图像的全局血管分割标注信息,调整各分割子网络和/或融合子网络的参数。
全局血管分割标注信息是样本医学图像中的血管的标签信息。因为第二血管分割结果是基于第一血管分割结果得到的,所以在基于第二血管分割结果与样本医学图像的全局血管分割标注信息来训练图像分割模型时,可以基于二者来调整各分割子网络和/或融合子网络的参数。
在一个实施方式中,可以基于第二血管分割结果与样本医学图像的全局血管分割标注信息二者的差异,同时调整分割子网络和融合子网络的参数。在一个实施方式中,可以基于第二血管分割结果与样本医学图像的全局血管分割标注信息,仅调整融合子网络的参数。在一个实施方式中,可以先基于第一血管分割结果与对应的局部血管分割标注信息,调整分割子网络的参数,后续再基于第二血管分割结果与样本医学图像的全局血管分割标注信息,调整融合子网络的参数。
因此,通过利用第一血管分割结果和第二血管分割结果以及与各分割结果对应的标注信息,可以实现对分割子网络和融合子网络的训练。
在本实施例中,分割子网络包括依序连接的特征处理层、注意力层和预测层。分割子网络例如是3D-Unet,注意力层可以设置在特征处理层之后。
请参阅图3,图3是本申请图像分割模型的训练方法一实施例的第三流程示意图。对应于分割子网络包括依序连接的特征处理层、注意力层和预测层的结构,上述步骤提及的“利用视角对应的分割子网络对视角对应的样本视角图像进行图像分割,得到各视角对应的各第一血管分割结果”,具体可以包括步骤S1211至步骤S1213。
步骤S1211:利用特征处理层对视角对应的样本视角图像进行特征提取,得到视角对应的样本特征图。
特征处理层用于提取样本视角图的特征信息,以此可以得到与样本视角图像对应的样本特征图。可以理解的,每一个分割子网络的特征处理层都可以输出样本特征图。
步骤S1212:利用注意力层对视角对应的样本特征图进行处理,得到视角对应的区域预测结果。
注意力层例如是基于注意力机制的注意力模块。注意力模块可以是深度学习领域通用的注意力模块,此处不再赘述。
在本实施例中,视角对应的区域预测结果用于表示视角对应的样本视角图像中的预设区域的位置。具体的,视角对应的区域预测结果可以是表示样本特征图中每个体素为预设区域的概率。
在一个实施方式中,样本视角图像中的预设区域可以是血管的中心线位置。通过利用注意力层输出预设区域的位置,可以使得分割子网络在后续进行图像分割时,能够更加关注预设区域附近的图像信息,以提升分割子网络对血管特征信息的敏感性,进而有助于提高血管分割的准确度。可以理解的,血管中心线附近的区域也是血管区域,因此通过将预设区域设置为血管中心线,可以使得分割子网络在对样本视角图像进行血管分割时,可以更加关注血管中心线附近的区域,以此有助于提高血管分割的准确度。
步骤S1213:利用预测层基于各视角对应的区域预测结果预测得到各视角对应的各第一血管分割结果。
预测层可以根据视角对应的区域预测结果,进行进一步的预测,以此得到各视角对应的各第一血管分割结果。具体而言,可以是基于视角对应的区域预测结果对样本特征图进行处理,使得样本特征图中关于预设区域的特征信息的权重更大,以此使得预测层在得到第一血管分割结果时,可以更多地参考预设区域附近的特征信息,以使得第一血管分割结果的准确度更高。第一血管分割结果可以包括样本视角图像的第一图像点是否属于预设类别的预测结果,例如,第一血管分割结果可以是第一图像点属于动脉或者静脉,或者是属于背景。
因为分割子网络包括依序连接的特征处理层、注意力层和预测层,上述步骤中提及的调整视角对应的分割子网络的参数具体可以是调整特征处理层、注意力层和预测层中至少一者的参数。
在一个实施方式中,上述步骤提及的局部血管分割标注信息可以包括表示样本视角图像的第一图像点是否属于预设类别的第一标注信息和表示第一图像点是否属于预设区域的第二标注信息,预设类别包括至少一种血管类别和非血管类别。样本视角图像的第一图像点例如是样本视角图像的体素。血管类别包括动脉和静脉,非血管类别既不属于动脉也不属于静脉,为背景。在此情况下,上述步骤提及的“基于各视角对应的各第一血管分割结果与视角对应的局部血管分割标注信息,调整视角对应的分割子网络的参数”,可以包括以下至少一个步骤:
步骤S1311:基于视角对应的区域预测结果和视角对应的第二标注信息,至少调整注意力层的参数。
因为视角对应的区域预测结果是与第二标注信息(第一图像点是否属于预设区域)相互对应的,因此基于二者的差异,至少可以调整注意力层的参数。在一个实施方式中,还可以基于视角对应的区域预测结果和视角对应的第二标注信息,调整注意力层和特征处理层的参数。
步骤S1312:基于各视角对应的各第一血管分割结果、视角对应的第一标注信息,调整特征处理层、注意力层和预测层中至少一者的参数。
因为每一个视角对应的第一血管分割结果是与第一标注信息(第一图像点是否属于预设类别)相互对应的,因此基于二者的差异,可以调整特征处理层、注意力层和预测层中至少一者的参数。在一个实施方式中,可以是调整特征处理层、注意力层和预测层三者的参数。在一个实施方式中,可以是调整特征处理层和预测层的参数。在一个实施方式中,还可以是仅调整预测层的参数。
因此,通过基于各视角对应的各第一血管分割结果、视角对应的第一标注信息,可以实现对特征处理层、注意力层和预测层中至少一者的训练。
在一个具体实施方式中,可以利用各视角对应的各第一血管分割结果与第一标注信息的差异,利用损失函数确定损失值,进而调整特征处理层、注意力层和预测层中至少一者的参数。在一个具体实施方式中,在确定损失值时,在血管区域内,可以设置离血管中心线越远的第一图像点的权重越大,然后将第一图像点的权重和损失值进行加权,以此使得在训练分割子网络时,能够给予血管边缘区域更高的权重,使得血管边缘区域能够成为网络训练的重点,提升分割子网络对血管边缘区域分割的准确度。
在一个实施方式中,分割子网络包括依序连接的至少一个处理单元和预测层,每个处理单元包括特征处理层,至少部分处理单元还包括连接于特征处理层之后的注意力层,预测层基于至少一注意力层输出的区域预测结果得到第一血管分割结果。在本实施方式中,处理单元的数量至少为一个。每个处理单元可以依序连接,最后一个处理单元可以与预测层连接。具体的,每一个处理单元都包括特征处理层,特征处理层可以是特征提取层,也可以是特征解码层。在至少部分的处理单元中,特征处理层后还连接有注意力层。
请参阅图4,图4是本申请图像分割模型的训练方法中分割子网络的一结构示意图。在图4中,分割子网络40的网络结构为3D-Unet结构。处理单元41的数量为9个,分别是处理单元S1至S9,预测层42为S10。其中,处理单元S1-S5中的特征处理层411为特征提取层,处理单元S6-S9中的特征处理层411为特征解码层。在处理单元S1至S9中,每一个特征处理层411之后都连接有注意力层412。对于每一个处理单元41的特征处理层411,特征处理层411可以包含两层的子处理层,每个子处理层可以包括卷积层(Conv)、批量归一化(BN)和激活函数(Relu)。预测层S10包括卷积层(Conv)和归一化指数函数(softmax)。图4中每一层旁边的数字表示该层的通道数,如处理单元S1的特征处理层411的第一个子处理层的通道数即为16。Maxpooling表示最大值池化操作,upsample为上采样,Conv为卷积操作,⊕为特征联合操作。样本视角图像可以输入至处理单元S1,最终由预测层42输出第一血管分割结果。
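结合图4的结构描述,下面给出特征处理层、注意力层和预测层的一个示意性PyTorch代码草图。其中注意力层以“1x1x1卷积+Sigmoid输出区域概率,并以该概率对特征加权”的方式示意,FeatureLayer、AttentionLayer等名称、具体注意力结构与通道数均为本文为说明所作的假设,并非本申请限定的实现:

```python
import torch
import torch.nn as nn

class FeatureLayer(nn.Module):
    """特征处理层:包含两层子处理层,每层为 Conv+BN+ReLU(示意实现)。"""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.sub = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.sub(x)

class AttentionLayer(nn.Module):
    """注意力层:输出各体素属于预设区域(如血管中心线)的概率,并对特征加权(示意实现)。"""
    def __init__(self, ch):
        super().__init__()
        self.to_prob = nn.Sequential(nn.Conv3d(ch, 1, 1), nn.Sigmoid())
    def forward(self, feat):
        region_prob = self.to_prob(feat)               # 区域预测结果
        return feat * (1 + region_prob), region_prob   # 预设区域附近的特征权重更大

class PredictionLayer(nn.Module):
    """预测层:卷积 + softmax,输出背景/动脉/静脉三类的第一血管分割结果(示意实现)。"""
    def __init__(self, ch, num_classes=3):
        super().__init__()
        self.conv = nn.Conv3d(ch, num_classes, 1)
    def forward(self, feat):
        return torch.softmax(self.conv(feat), dim=1)

feat = FeatureLayer(1, 16)(torch.zeros(1, 1, 16, 16, 16))
feat, region_prob = AttentionLayer(16)(feat)
pred = PredictionLayer(16)(feat)   # 形状为 (1, 3, 16, 16, 16) 的第一血管分割结果
```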
在一个实施方式中,分割子网络中的每层注意力层的参数是基于对应所有注意力层的区域预测结果和视角对应的第二标注信息调整的。具体而言,上述步骤提及的“基于视角对应的区域预测结果和视角对应的第二标注信息,至少调整注意力层的参数”,具体包括步骤S13111至步骤S13113。
步骤S13111:利用各注意力层输出的区域预测结果和视角对应的第二标注信息之间的差异,对应得到各注意力层的第一损失值。
每一层注意力层都可以输出区域预测结果,因此可以利用每个注意力层输出的区域预测结果和视角对应的第二标注信息之间的差异,对应得到各注意力层的第一损失值。
在一个实施方式中,在计算第一损失值时,具体可以是利用各注意力层对应的差异、至少一个结构权重,对应得到各注意力层的第一损失值。各注意力层对应的差异为各注意力层输出的区域预测结果与第二标注信息之间的差异,至少一个结构权重为注意力层的权重和/或注意力层所在的分割子网络的权重。注意力层的权重可以是该注意层的损失值的权重。注意力层所在的分割子网络的权重,则表示该注意力层所在的分割子网络整体的权重。
在一个实施方式中,第一损失值是利用正则化损失函数确定得到的,即在计算第一损失值的过程中,会利用正则化损失函数得到的损失值进行进一步地约束。通过利用正则化损失函数对第一损失值进行进一步的约束,可以加强注意力层对血管区域的特征提取能力。
步骤S13112:对各注意力层的第一损失值进行融合,得到第二损失值。
在得到各注意力层的第一损失值以后,可以将每一层注意力层的第一损失值进行融合,以此得到用于表征全部注意力层的综合损失值,即第二损失值。
在一个实施方式中,第二损失值具体可以是利用各注意力层的损失权重对各注意力层的第一损失值进行加权处理得到的。
在一个具体实施方式中,计算第二损失值的公式(1)如下:

$$L_{attention}(X,Y,w)=\sum_{s=1}^{S}\theta_{s}\,\ell^{(s)}(X,Y;w_{s})\qquad(1)$$

其中,L_attention(X,Y,w)表示一个分割子网络中基于全部注意力层的第一损失值融合得到的第二损失值,ℓ^(s)为第s层注意力层的第一损失值,X是第一图像点,Y为对应的第二标注信息,w=(w_1;w_2;…;w_S)表示每一个注意力层所在的分割子网络的权重,θ=(θ_1;θ_2;…;θ_S)表示各注意力层的损失权重。
在一个具体实施方式中,因为越靠近预测层的注意力层能获得更高层的特征信息,对应的区域预测结果也会更加准确,因此可以设置越靠近预测层的注意力层的损失权重越大。例如,对于公式(1)而言,可以设置θ_1~θ_9=0.2,0.2,0.4,0.4,0.6,0.6,0.6,0.8,0.8。以此,通过设置越靠近预测层的注意力层的损失权重越大,可以使得得到的第二损失值更加合理。
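下面给出按上述思路对各注意力层的第一损失值进行加权融合得到第二损失值的示意性代码草图。其中以二值交叉熵作为第一损失值的一种示例性度量,attention_loss、subnet_weight 等名称及θ的取值均为本文为说明所作的假设:

```python
import torch
import torch.nn.functional as F

def attention_loss(region_preds, region_label, thetas, subnet_weight=1.0):
    """对各注意力层的第一损失值按损失权重θ加权融合,得到第二损失值(示意实现)。

    region_preds: 各注意力层输出的区域预测结果(概率图)列表;
    region_label: 第二标注信息(第一图像点是否属于预设区域的 0/1 标注);
    thetas: 各注意力层的损失权重,越靠近预测层的权重越大;
    subnet_weight: 注意力层所在分割子网络的权重 W_l,为示例性参数。
    """
    total = 0.0
    for pred, theta in zip(region_preds, thetas):
        # 各注意力层的第一损失值:区域预测结果与第二标注信息之间的差异
        first_loss = F.binary_cross_entropy(pred, region_label)
        total = total + theta * first_loss
    return subnet_weight * total  # 第二损失值

preds = [torch.rand(1, 1, 8, 8, 8) for _ in range(9)]
label = (torch.rand(1, 1, 8, 8, 8) > 0.5).float()
thetas = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.6, 0.8, 0.8]
loss = attention_loss(preds, label, thetas)
```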
在一个具体实施方式中,也可以计算全部分割子网络的注意力层的第二损失值,此时计算第二损失值的公式(2)如下:

$$L_{attention}(X,Y,w)=\sum_{l=1}^{L}W_{l}\sum_{s=1}^{S}\theta_{s}\,\ell^{(l,s)}(X,Y;w_{l,s})\qquad(2)$$

其中,对比于公式(1),新增加了表示各个分割子网络的权重系数W_l。
正则化损失函数例如是L2正则化损失函数。此时,计算第二损失值的公式(3)如下:

$$L_{attention}(X,Y,w)=\sum_{s=1}^{S}\theta_{s}\,\ell^{(s)}(X,Y;w_{s})+\sum_{s=1}^{S}L_{reg}^{(s)}(X,Y;w_{s})\qquad(3)$$

其中,对比于公式(1),新增加了表示各个注意力层的L2正则化损失函数的损失值$L_{reg}^{(s)}(X,Y;w_{s})$。
在一个实施方式中,L2正则化损失函数$L_{reg}$的计算公式(4)如下:

$$L_{reg}(X,Y;w)=\sum_{i}\bigl(P(y_{i}\mid X;w)-Y_{i}\bigr)^{2}\qquad(4)$$

其中,$L_{reg}$为一个分割子网络中基于各注意力层的L2正则化损失函数的损失值,w=(w_1;w_2;…;w_S)表示每一个处理单元的注意力层的权重,X是第一图像点,Y为对应的第二标注信息,y_i为第一图像点对应的区域预测结果,P表示第一图像点的区域预测结果的概率值。
步骤S13113:基于第二损失值,调整各注意力层的参数。
在得到用于表征全部注意力层的综合损失值的第二损失值以后,便能够根据第二损失值,调整各注意力层的参数,以此实现对注意力层的训练。
在一个实施方式中,上述的融合子网络包括权重确定层和融合输出层。此外,在一个具体实施方式中,融合子网络还可以包括若干特征提取层,以及若干解码层。
请参阅图5,图5是本申请图像分割模型的训练方法一实施例的第四流程示意图。在本实施例中,对应于融合子网络包括权重确定层和融合输出层,上述步骤提及的“利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到样本医学图像的第二血管分割结果”具体包括步骤S1221和步骤S1222。
步骤S1221:利用权重确定层对多个视角对应的第一血管分割结果进行处理,得到各视角对应的融合权重信息。
在利用权重确定层对多个视角对应的第一血管分割结果进行处理之前,可以将多个视角对应的第一血管分割结果进行通道拼接,以此得到表征多个视角对应的第一血管分割结果的信息。
权重确定层对多个视角对应的第一血管分割结果进行处理,可以是直接基于多个视角对应的第一血管分割结果进行处理,也可以是基于融合子网络中的其他网络层对第一血管分割结果处理后的结果再进行处理。各视角对应的融合权重信息可以是样本医学图像中的第二图像点属于哪一个类别的概率的权重。因此,通过利用权重确定层结合多个视角对应的第一血管分割结果信息来得到融合权重信息,使得融合子网络能够根据不同的第一血管分割结果输出不同的融合权重信息,实现了多个视角对应的第一血管分割结果信息的软融合,有助于提高血管分割的准确度。此外,由于融合权重信息结合了多个视角对应的第一血管分割结果信息,使得后续利用融合权重信息进行血管分割时,可以减少血管分支错分的情况。
在一个具体实施方式中,融合权重信息的融合公式(5)如下:

$$F(W_{g})=G\bigl([P_{1},P_{2},\dots,P_{V}];W_{g}\bigr)\qquad(5)$$

其中,$F(W_{g})$为融合权重信息,$G$为融合子网络,$W_{g}$是融合子网络的权重,$[P_{1},P_{2},\dots,P_{V}]$为多个视角对应的第一血管分割结果信息(按通道拼接)。
步骤S1222:利用融合输出层基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到样本医学图像的第二血管分割结果。
在得到融合权重信息以后,就可以利用融合输出层将融合权重信息和多个视角对应的第一血管分割结果进行融合,以充分利用多个视角对应的第一血管分割结果,以此得到样本医学图像的第二血管分割结果。
因此,通过利用融合输出层将融合权重信息和多个视角对应的第一血管分割结果进行融合,使得融合子网络能够利用不同视角的样本视角图的图像信息进行血管分割,有助于提高血管分割的准确度。
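下面给出“权重确定层输出融合权重信息、融合输出层按该权重对各视角的第一血管分割结果加权求和”的示意性代码草图。其中以1x1x1卷积示意权重确定层、以softmax示意权重在视角维度上的归一化,soft_fuse 等名称与形状约定均为本文为说明所作的假设:

```python
import torch
import torch.nn as nn

def soft_fuse(view_results, weight_layer):
    """利用融合权重信息对多个视角的第一血管分割结果做软融合(示意实现)。

    view_results: 形状为 (B, V, C, D, H, W) 的张量,V 为视角数,C 为类别数;
    weight_layer: 权重确定层(此处以 1x1x1 卷积示意,输出 V 个通道)。
    """
    b, v, c, d, h, w = view_results.shape
    stacked = view_results.reshape(b, v * c, d, h, w)      # 通道拼接多个视角的结果
    weights = weight_layer(stacked).reshape(b, v, 1, d, h, w)
    weights = torch.softmax(weights, dim=1)                # 各视角的融合权重信息
    # 按融合权重对各视角的第一血管分割结果加权求和,得到第二血管分割结果
    return (weights * view_results).sum(dim=1)

weight_layer = nn.Conv3d(3 * 3, 3, kernel_size=1)          # 假设 3 个视角、3 个类别
fused = soft_fuse(torch.rand(1, 3, 3, 8, 8, 8), weight_layer)
```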
在一个实施方式中,上述步骤提及的全局血管分割标注信息包括表示样本医学图像的第二图像点是否属于预设类别的第三标注信息,第二血管分割结果包括表示各第二图像点是否属于预设类别的预测信息。预设类别包括至少一种血管类别和非血管类别。在一个具体实施方式中,至少一种血管类别包括动脉和静脉中的至少一种。
对应于融合子网络包括权重确定层和融合输出层,上述步骤提及的调整融合子网络的参数,具体可以是调整包括权重确定层和/或融合输出层的参数。
请参阅图6,图6是本申请图像分割模型的训练方法一实施例的第五流程示意图。在本实施例中,上述步骤提及的“基于第二血管分割结果与样本医学图像的全局血管分割标注信息,调整各分割子网络和/或融合子网络的参数”,具体包括步骤S1321至步骤S1324。
步骤S1321:基于各第二图像点与样本医学图像中血管的预设区域之间的位置关系,确定各第二图像点的位置权重。
在一个实施方式中,预设区域为中心线,具体可以是血管中心线,即动脉血管的中心线和静脉血管的中心线。在一个具体实施方式中,各第二图像点与样本医学图像中血管的预设区域之间的位置关系可以是第二图像点与预设区域之间的距离。
在一个实施方式中,步骤S1321具体可以包括步骤S13211和步骤S13212。
步骤S13211:确定各第二图像点的参考距离。
在本实施方式中,属于血管类别的第二图像点的参考距离为第二图像点与样本医学图像中血管的预设区域之间的距离,也即,在血管区域内的第二图像点的参考距离为第二图像点与样本医学图像中血管的预设区域之间的距离。在一个具体实施方式中,血管的预设区域为血管的中心线,则血管中心线上的点到预设区域的距离可以认为是0。属于非血管类别的第二图像点的参考距离为预设距离值,也即,属于背景的第二图像点的参考距离为预设距离值,预设距离值例如是0。
步骤S13212:基于各第二图像点的参考距离,确定各第二图像点的位置权重。
在确定参考距离以后,即获得了各第二图像点与样本医学图像中血管的预设区域之间的位置关系,以此可以根据网络训练的需要,基于参考距离,确定各第二图像点的位置权重。
因此,通过确定各第二图像点的参考距离,便可基于各第二图像点的参考距离,确定各第二图像点的位置权重,使得位置权重可以反映参考距离的距离特征。
在一个具体实施方式中,属于血管类别的第二图像点的参考距离越大,对应的位置权重越大,属于非血管类别的第二图像点的位置权重为预设权重值。通过这样的设置方法,可以使血管区域的边缘区域的第二图像点的权重更大,使得融合子网络在训练时能够更加关注血管边缘区域,有助于提高血管分割中对血管边缘区域的分割的准确度。
在一个具体实施方式中,计算属于血管类别的第二图像点的参考距离的公式(6)如下:

$$d_{i}=\min_{c_{j}}\lVert y_{i}-c_{j}\rVert_{2}\qquad(6)$$

其中,d_i为属于血管类别的第二图像点的参考距离,点c_j为预设区域上的点,y_i为属于血管类别的第二图像点。
在一个具体实施方式中,第二图像点的位置权重的计算公式(7)如下:

$$D_{i}=\frac{d_{i}}{\max(d_{i})}\qquad(7)$$

其中,max(d_i)为血管最边缘的第二图像点到预设区域的参考距离,d_i为属于血管类别的第二图像点的参考距离,D_i为第二图像点的位置权重。
在另一个具体实施方式中,也可以采用公式(8)所示的另一种基于参考距离的映射来计算第二图像点的位置权重。
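下面给出按公式(6)、(7)的思路计算各第二图像点位置权重的示意性代码草图。其中利用 scipy 的欧氏距离变换近似“到最近中心线点的距离”,position_weight、background_weight 等名称与取值均为本文为说明所作的假设:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def position_weight(vessel_mask, centerline_mask, background_weight=1.0):
    """基于参考距离计算各第二图像点的位置权重(示意实现)。

    vessel_mask / centerline_mask 为与样本医学图像同尺寸的 0/1 数组,
    background_weight 为非血管类别第二图像点的预设权重值。
    """
    # 公式(6):每个体素到最近的血管中心线点的欧氏距离(中心线上的点距离为 0)
    dist = distance_transform_edt(centerline_mask == 0)
    weight = np.full(vessel_mask.shape, background_weight, dtype=np.float32)
    vessel = vessel_mask.astype(bool)
    if vessel.any():
        d = dist[vessel]
        # 公式(7):参考距离越大(越靠近血管边缘),位置权重越大
        weight[vessel] = d / (d.max() + 1e-8)
    return weight
```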
步骤S1322:基于各第二图像点对应的预测信息和第三标注信息,得到各第二图像点的第三损失值。
因为第二图像点对应的预测信息(第二图像点是否属于预设类别的预测信息)是与第三标注信息(第二图像点是否属于预设类别)相互对应的,因此基于二者的差异,利用相关的损失函数进行计算,可以得到第二图像点的第三损失值。
在一个具体实施方式中,第三损失值是基于多个不同的损失函数对应的损失值得到的。例如,可以基于两个不同的损失函数对应的损失值来得到第三损失值。两个不同的损失函数分别为交叉熵损失函数(Cross Entropy Loss,CEL)和Dice Loss。
在一个具体实施方式中,计算第三损失值的公式(9)如下:

$$L_{total}=L_{dl}+\theta\cdot L_{cel}\qquad(9)$$

其中,$L_{total}$为第三损失值,$L_{dl}$为Dice Loss损失函数对应的损失值,$L_{cel}$为CEL损失函数对应的损失值,$\theta$为CEL损失函数对应的损失值的权重。
在一个具体实施方式中,计算CEL损失函数对应的损失值的公式(10)如下:

$$L_{cel}=-\alpha\sum_{i\in Y_{+1}}\log P(y_{i}=1\mid X;W)-\beta\sum_{i\in Y_{+2}}\log P(y_{i}=2\mid X;W)-\sum_{i\in Y_{-}}\log P(y_{i}=0\mid X;W)\qquad(10)$$

其中,$Y_{+1}$、$Y_{+2}$、$Y_{-}$表示第三标注信息中的动脉、静脉和背景,$y_{i}\in\{0,1,2\}$分别表示第三标注信息中的第二图像点$i$属于背景、动脉和静脉,$P(y_{i}=n\mid X;W)$表示第二图像点对应的预测信息,其中,$W$是各分割子网络和融合子网络的权重,$\alpha$、$\beta$为调节系数。
在一个具体实施方式中,计算Dice Loss损失函数对应的损失值的公式(11)如下:

$$L_{dl}=1-\frac{1}{3}\sum_{n=0}^{2}\frac{2\sum_{i}P(y_{i}=n\mid X;W)\,Y_{i,n}}{\sum_{i}P(y_{i}=n\mid X;W)+\sum_{i}Y_{i,n}}\qquad(11)$$

其中,$n\in\{0,1,2\}$分别表示第二图像点属于背景、动脉和静脉,$P(y_{i}=n\mid X;W)$表示第二图像点对应的预测信息,$W$表示各分割子网络和融合子网络的权重,$Y_{i,n}$为第三标注信息的独热(one-hot)表示。
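下面给出按公式(9)至(11)的思路计算第三损失值的示意性代码草图。其中θ、α、β的取值以及损失的归约方式均为本文为说明所作的假设;若进一步按公式(12)引入位置权重D,可在归约前对逐体素的损失乘以D:

```python
import torch
import torch.nn.functional as F

def third_loss(prob, label, theta=0.5, alpha=1.0, beta=1.0):
    """计算 L_total = L_dl + θ·L_cel(示意实现)。

    prob: (B, 3, D, H, W) 的预测概率,三个通道对应背景/动脉/静脉;
    label: (B, D, H, W) 的第三标注信息,取值 0/1/2。
    """
    eps = 1e-6
    one_hot = F.one_hot(label, num_classes=3).permute(0, 4, 1, 2, 3).float()
    # 公式(10):带类别调节系数的交叉熵损失(动脉、静脉分别以 α、β 加权)
    log_p = torch.log(prob.clamp_min(eps))
    class_w = torch.tensor([1.0, alpha, beta]).view(1, 3, 1, 1, 1)
    l_cel = -(class_w * one_hot * log_p).sum() / one_hot.sum()
    # 公式(11):多类别 Dice Loss,对背景/动脉/静脉三类取平均
    inter = (prob * one_hot).sum(dim=(0, 2, 3, 4))
    denom = prob.sum(dim=(0, 2, 3, 4)) + one_hot.sum(dim=(0, 2, 3, 4))
    l_dl = 1.0 - (2.0 * inter / (denom + eps)).mean()
    return l_dl + theta * l_cel   # 公式(9)

loss = third_loss(torch.softmax(torch.randn(1, 3, 8, 8, 8), dim=1),
                  torch.randint(0, 3, (1, 8, 8, 8)))
```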
在本申请中,步骤S1321和步骤S1322的执行顺序不做限定。
步骤S1323:利用各第二图像点的位置权重对各第二图像点的第三损失值进行加权处理,得到第四损失值。
利用各第二图像点的位置权重对各第二图像点的第三损失值进行加权处理,可以使得得到的第四损失值可以反映出不同第二图像点的重要程度的差别,使得位置权重大的第二图像点对第四损失值的影响更大,以此使得在训练融合子网络时,可以更加关注位置权重大的第二图像点。
在一个具体实施方式中,计算第四损失值的公式(12)如下:

$$L_{total}=D\cdot\bigl(L_{dl}+\theta\cdot L_{cel}\bigr)\qquad(12)$$

其中,$L_{total}$为第四损失值,$L_{dl}$为Dice Loss损失函数对应的损失值,$L_{cel}$为CEL损失函数对应的损失值,$\theta$为CEL损失函数对应的损失值的权重,$D$为第二图像点的位置权重。
步骤S1324:基于第四损失值,调整各分割子网络和/或融合子网络的参数。
在得到第四损失值后,便可根据第四损失值,调整各分割子网络和/或融合子网络的参数。在一个实施方式中,可以仅调整融合子网络的参数。在另一个实施方式中,可以调整分割子网络和融合子网络的参数。
因此,通过利用第二图像点的位置权重对各第二图像点的第三损失值进行加权处理,可以在训练时使得网络更加关注位置权重大的第二图像点,以此提高网络对位置权重大的第二图像点所在区域的血管分割准确度。
在一个实施例中,全局血管分割标注信息还包括表示第二图像点是否属于血管的预设区域的第四标注信息。
在一个实施方式中,在上述提及的步骤“确定各第二图像点的参考距离”之前,本申请的图像分割模型的训练方法还包括步骤S21和步骤S22。
步骤S21:利用第四标注信息,确定预设区域在样本医学图像中的位置。
通过利用第四标注信息,可以确定第二图像点属于血管的预设区域的第二图像点,进而可以确定预设区域在样本医学图像中的位置。
步骤S22:利用第二血管分割结果或第三标注信息,确定样本医学图像中各第二图像点为属于血管类别或属于非血管类别。
在一个实施方式中,因为第二血管分割结果包括表示各第二图像点是否属于预设类别的预测信息,因此可以利用第二血管分割结果,确定样本医学图像中各第二图像点为属于血管类别或属于非血管类别。
在另一个实施方式中,因为第三标注信息也包括样本医学图像的第二图像点是否属于预设类别的信息,因此可以基于第三标注信息,确定样本医学图像中各第二图像点为属于血管类别或属于非血管类别。
因此,通过利用第四标注信息来确定预设区域的位置,以及利用第二血管分割结果或第三标注信息来确定第二图像点为属于血管类别或属于非血管类别,后续便可以确定第二图像点的位置权重。
请参阅图7,图7是本申请图像分割模型的训练方法中图像分割模型的一结构示意图。在图7中,图像分割模型70包括多个分割子网络73和融合子网络75。在融合子网络75中,包括编码器751和解码器752。在解码器752中,还包括权重确定层7521和融合输出层7522。
以下结合图7中的图像分割网络的结构,示例性地简要描述图像分割模型70的预测过程。样本医学图像71被提取成三个视角对应的样本视角图像72,将每个样本视角图像72输入到对应的分割子网络73中,可以得到与视角数量对应的第一分割结果74。在将全部第一分割结果74进行特征联合操作后输入到融合子网络75的编码器751中。此时,编码器751中的第一个特征处理层的最后一个子处理层输出的特征信息经过点积能够得到需要输入到权重确定层7521的第一特征信息,权重确定层7521的上一层输出的特征信息经过上采样操作后能够得到需要输入到权重确定层7521的第二特征信息。然后,权重确定层7521可以基于第一特征信息和第二特征信息,得到融合权重信息。融合权重信息继续被解码器752的其他网络层解码,最后由融合输出层7522输出第二血管分割结果76。
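下面给出与图7流程相对应的整体预测过程的示意性代码草图。其中三个视角的提取以维度置换近似表示,分割子网络与融合子网络均以1x1x1卷积占位,仅用于说明数据流向,segment 等名称及各形状均为本文为说明所作的假设,并非本申请限定的网络结构:

```python
import torch
import torch.nn as nn

def segment(volume, subnets, fusion_net):
    """多视角分割 + 软融合的整体预测流程(示意实现)。"""
    perms = [(0, 1, 2, 3, 4), (0, 1, 3, 2, 4), (0, 1, 4, 2, 3)]  # 以维度置换近似三个视角
    first_results = []
    for subnet, p in zip(subnets, perms):
        view = volume.permute(*p)                # 从对应视角提取视角图像
        pred = subnet(view)                      # 该视角的第一血管分割结果 (B, 3, ...)
        inv = torch.argsort(torch.tensor(p))     # 还原到原图像坐标系
        first_results.append(pred.permute(*inv.tolist()))
    fused_in = torch.cat(first_results, dim=1)   # 特征联合(通道拼接)
    return fusion_net(fused_in)                  # 第二血管分割结果

subnets = [nn.Conv3d(1, 3, 1) for _ in range(3)]   # 假设:以 1x1x1 卷积代替分割子网络
fusion_net = nn.Conv3d(9, 3, 1)                    # 假设:以 1x1x1 卷积代替融合子网络
out = segment(torch.rand(1, 1, 16, 16, 16), subnets, fusion_net)
```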
请参阅图8,图8是本申请图像分割方法实施例的第一流程示意图。具体的,图像分割方法可以包括步骤S31和步骤S32。
步骤S31:获取分别从多个视角对目标医学图像提取得到的多个目标视角图像。
在本实施例中,目标医学图像包含血管,因此图像分割模型可以针对目标医学图像包含的血管进行血管分割。在一个实施方式中,目标医学图像为对器官扫描得到的三维图像。获得目标医学图像的方法可以参阅上述步骤S11中获得样本医学图像的方法,此处不再赘述。
在一个实施方式中,多个视角包括横断位视角、矢状位视角、冠状位视角中的多种。在一个具体实施方式中,对于每个视角,可以从视角对目标医学图像提取得到视角的若干子目标图像,并将视角的若干子目标图像进行拼接,得到视角对应的目标视角图像。获得目标视角图像的方法可以参阅上述步骤S11中获得样本视角图像的方法,此处不再赘述。
步骤S32:利用图像分割模型对各目标视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果。
血管分割结果可以是目标医学图像的图像点(体素)的分类信息,类别信息包括动脉、静脉和背景。也即,可以利用图像分割模型对各目标视角图像进行图像分割,以此得到目标医学图像的图像点的分类信息。
因此,通过利用图像分割模型对各目标视角图像进行图像分割,使得图像分割模型可以利用多个视角的目标视角图像的图像信息进行血管分割,有助于提高图像分割模型的分割准确度。
在一个实施方式中,图像分割模型可以是上述图像分割模型的训练方法的实施例所描述的图像分割模型。在一个具体实施方式中,图像分割模型是利用图像分割模型的训练方法的实施例描述的图像分割模型的训练方法训练得到的。因此,通过限定图像分割模型是利用图像分割模型的训练方法的实施例得到的,使得在利用经过训练的图像分割模型进行血管分割时,血管分割的准确度更高。
在一个实施方式中,本实施例描述的图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络。分割子网络和融合子网络例如是上述图像分割模型的训练方法的实施例描述的图像分割模型的分割子网络和融合子网络。
对应于图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络,上述步骤提及的“利用图像分割模型对各目标视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果”具体包括步骤S321和步骤S322。
步骤S321:对于每个视角,利用视角对应的分割子网络对视角对应的目标视角图像进行图像分割,得到各视角对应的各第一血管分割结果。
在一个实施方式中,上述的视角对应的第一血管分割结果包括表示视角对应的目标视角图像中各第一图像点是否属于预设类别的第一预测信息。预设类别包括至少一种血管类别和非血管类别。血管类别例如是动脉和静脉。
关于步骤S321的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S121的相关描述,此处不再赘述。
步骤S322:利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到目标医学图像的第二血管分割结果。
在一个实施方式中,第二血管分割结果包括表示目标医学图像中各第二图像点是否属于预设类别的第二预测信息,预设类别包括至少一种血管类别和非血管类别。
关于步骤S322的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S122的相关描述,此处不再赘述。
因此,通过利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,使得融合子网络能够利用多个视角的第一血管分割结果的预测信息,有助于提高图像分割模型的分割准确度。
在一个实施方式中,上述步骤提及的“利用视角对应的分割子网络对视角对应的目标视角图像进行图像分割,得到各视角对应的各第一血管分割结果”具体包括步骤S3211至步骤S3213。
步骤S3211:对视角对应的目标视角图像进行特征提取,得到视角对应的目标特征图。
关于步骤S3211的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S1211的相关描述,此处不再赘述。
步骤S3212:对视角对应的目标特征图进行处理,得到视角对应的区域预测结果。
在本实施例中,视角对应的区域预测结果用于表示视角对应的目标视角图像中的预设区域的位置。在一个实施方式中,预设区域为血管的中心线。在一个具体实施方式中,区域预测结果包括目标视角图像中各第一图像点为预设区域的概率信息。
在一个实施方式中,步骤S3212提及的“对视角对应的目标特征图进行处理,得到视角对应的区域预测结果”是由分割子网络的注意力层执行的。
关于步骤S3212的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S1212的相关描述,此处不再赘述。
步骤S3213:基于视角对应的区域预测结果预测得到各视角对应的各第一血管分割结果。
关于步骤S3213的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S1213的相关描述,此处不再赘述。
因此,通过基于视角对应的区域预测结果预测得到视角对应的第一血管分割结果,使得可以更多地利用区域预测结果来得到第一血管分割结果,有助于提高图像分割模型的分割准确度。
在一个实施方式中,上述步骤提及的“利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到目标医学图像的第二血管分割结果”具体包括步骤S3221至步骤S3222。
步骤S3221:基于多个视角对应的第一血管分割结果,得到各视角对应的融合权重信息。
在一个具体实施方式中,对于每个视角,可以基于视角的第一血管分割结果,得到视角对应的各第一图像点的融合权重。
关于步骤S3221的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S1221的相关描述,此处不再赘述。
步骤S3222:基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到目标医学图像的第二血管分割结果。
在一个具体实施方式中,对于各第一图像点,可以基于第一图像点对应各视角的融合权重对第一图像点对应各视角的预测信息进行加权处理,得到目标医学图像中与第一图像点对应的第二图像点的第二预测信息。
关于步骤S3222的详细描述请参阅上述图像分割模型的训练方法的实施例中步骤S1222的相关描述,此处不再赘述。
因此,通过基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,以此可以更加充分的利用多个视角对应的第一血管分割结果的信息,有助于提高图像分割模型的分割准确度。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
请参阅图9,图9是本申请图像分割模型的训练装置一实施例的框架示意图。图像分割模型的训练装置90包括获取模块91、图像分割模块92和参数调整模块93。获取模块91用于获取分别从多个视角对样本医学图像提取得到的多个样本视角图像,其中,样本医学图像包含血管。图像分割模块92用于利用图像分割模型对各样本视角图像进行图像分割,以得到与样本医学图像相关的血管分割结果。参数调整模块93用于基于血管分割结果,调整图像分割模型的网络参数。
其中,上述的图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络。上述的图像分割模块92利用图像分割模型对各样本视角图像进行图像分割,以得到与样本医学图像相关的血管分割结果,包括:对于每个视角,利用视角对应的分割子网络对视角对应的样本视角图像进行图像分割,得到各视角对应的各第一血管分割结果;利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到样本医学图像的第二血管分割结果。上述的参数调整模块93用于基于血管分割结果,调整图像分割模型的网络参数,包括以下至少一个步骤:对于每个视角,基于各视角对应的各第一血管分割结果与视角对应的局部血管分割标注信息,调整视角对应的分割子网络的参数;基于第二血管分割结果与样本医学图像的全局血管分割标注信息,调整各分割子网络和/或融合子网络的参数。
其中,上述的分割子网络包括依序连接的特征处理层、注意力层和预测层,上述的参数调整模块93用于调整视角对应的分割子网络的参数包括特征处理层、注意力层和预测层中至少一者的参数。上述的图像分割模块92用于利用视角对应的分割子网络对视角对应的样本视角图像进行图像分割,得到各视角对应的各第一血管分割结果,包括:利用特征处理层对视角对应的样本视角图像进行特征提取,得到视角对应的样本特征图;利用注意力层对视角对应的样本特征图进行处理,得到视角对应的区域预测结果,其中,视角对应的区域预测结果用于表示视角对应的样本视角图像中的预设区域的位置;利用预测层基于各视角对应的区域预测结果预测得到视角对应的各第一血管分割结果。
其中,上述的局部血管分割标注信息包括表示样本视角图像的第一图像点是否属于预设类别的第一标注信息和表示第一图像点是否属于预设区域的第二标注信息,预设类别包括至少一种血管类别和非血管类别。
其中,上述的参数调整模块93用于基于各视角对应的各第一血管分割结果与视角对应的局部血管分割标注信息,调整视角对应的分割子网络的参数,包括以下至少一个步骤:基于视角对应的区域预测结果和视角对应的第二标注信息,至少调整注意力层的参数;基于各视角对应的各第一血管分割结果、视角对应的第一标注信息,调整特征处理层、注意力层和预测层中至少一者的参数。
其中,上述的分割子网络包括依序连接的至少一个处理单元和预测层,每个处理单元包括特征处理层,至少部分处理单元还包括连接于特征处理层之后的注意力层,预测层基于至少一注意力层输出的区域预测结果得到第一血管分割结果,每层注意力层的参数是基于对应所有注意力层的区域预测结果和视角对应的第二标注信息调整的。上述的参数调整模块93用于基于视角对应的区域预测结果和视角对应的第二标注信息,至少调整注意力层的参数,包括:利用各注意力层输出的区域预测结果和视角对应的第二标注信息之间的差异,对应得到各注意力层的第一损失值;对各注意力层的第一损失值进行融合,得到第二损失值;基于第二损失值,调整各注意力层的参数。
其中,上述的第一损失值是利用正则化损失函数确定得到的。上述的参数调整模块93用于利用各注意力层输出的区域预测结果和视角对应的第二标注信息之间的差异,对应得到各注意力层的第一损失值,包括:利用各注意力层对应的差异、至少一个结构权重,对应得到各注意力层的第一损失值,其中,至少一个结构权重为注意力层的权重和/或注意力层所在的分割子网络的权重。上述的参数调整模块93用于对各注意力层的第一损失值进行融合,得到第二损失值,包括:利用各注意力层的损失权重对各注意力层的第一损失值进行加权处理,得到第二损失值。
其中,越靠近预测层的注意力层的损失权重越大。
其中,上述的融合子网络包括权重确定层和融合输出层。上述的参数调整模块93用于调整的融合子网络的参数包括权重确定层和/或融合输出层的参数。
其中,上述的图像分割模块92用于利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到样本医学图像的第二血管分割结果,包括:利用权重确定层对多个视角对应的第一血管分割结果进行处理,得到各视角对应的融合权重信息;利用融合输出层基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到样本医学图像的第二血管分割结果。
其中,上述的全局血管分割标注信息包括表示样本医学图像的第二图像点是否属于预设类别的第三标注信息,第二血管分割结果包括表示各第二图像点是否属于预设类别的预测信息,预设类别包括至少一种血管类别和非血管类别。其中,上述的参数调整模块93用于基于第二血管分割结果与样本医学图像的全局血管分割标注信息,调整各分割子网络和/或融合子网络的参数,包括:基于各第二图像点与样本医学图像中血管的预设区域之间的位置关系,确定各第二图像点的位置权重;以及基于各第二图像点对应的预测信息和第三标注信息,得到各第二图像点的第三损失值;利用各第二图像点的位置权重对各第二图像点的第三损失值进行加权处理,得到第四损失值;基于第四损失值,调整各分割子网络和/或融合子网络的参数。
其中,上述的参数调整模块93用于基于各第二图像点与样本医学图像中血管的预设区域之间的位置关系,确定各第二图像点的位置权重,包括:确定各第二图像点的参考距离,其中,属于血管类别的第二图像点的参考距离为第二图像点与样本医学图像中血管的预设区域之间的距离,属于非血管类别的第二图像点的参考距离为预设距离值;基于各第二图像点的参考距离,确定各第二图像点的位置权重。
其中,属于血管类别的第二图像点的参考距离越大,对应的位置权重越大,属于非血管类别的第二图像点的位置权重为预设权重值。全局血管分割标注信息还包括表示第二图像点是否属于血管的预设区域的第四标注信息。在参数调整模块93用于确定各第二图像点的参考距离之前,参数调整模块93还用于利用第四标注信息,确定预设区域在样本医学图像中的位置;利用第二血管分割结果或第三标注信息,确定样本医学图像中各第二图像点为属于血管类别或属于非血管类别。
其中,上述的预设区域为中心线,和/或,至少一种血管类别包括动脉和静脉中的至少一种。
其中,上述的样本医学图像为对器官扫描得到的三维图像;和/或,多个视角包括横断位视角、矢状位视角、冠状位视角中的多种;其中,上述的获取模块91用于获取分别从多个视角对样本医学图像提取得到的多个样本视角图像,包括:对于每个视角,从视角对样本医学图像提取得到视角的若干子样本图像,并将视角的若干子样本图像进行拼接,得到视角对应的样本视角图像。
因此,通过使用不同视角的样本视角图对图像分割模型进行训练,使得训练后的图像分割模型在后续应用中,能够利用不同视角的样本视角图的图像信息进行血管分割,有助于提高血管分割的准确度。
请参阅图10,图10是本申请图像分割装置一实施例的框架示意图。图像分割装置100包括获取模块101和图像分割模块102。获取模块101用于获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,其中,目标医学图像包含血管;图像分割模块102用于利用图像分割模型对各目标视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果。
其中,上述的图像分割模型包括分别与多个视角对应的多个分割子网络和融合子网络。上述的图像分割模块102用于利用图像分割模型对各目标视角图像进行图像分割,以得到与目标医学图像相关的血管分割结果,包括:对于每个视角,利用视角对应的分割子网络对视角对应的目标视角图像进行图像分割,得到各视角对应的各第一血管分割结果;利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到目标医学图像的第二血管分割结果。
其中,上述的图像分割模块102用于利用视角对应的分割子网络对视角对应的目标视角图像进行图像分割,得到各视角对应的各第一血管分割结果,包括:对视角对应的目标视角图像进行特征提取,得到视角对应的目标特征图;对视角对应的目标特征图进行处理,得到视角对应的区域预测结果,其中,视角对应的区域预测结果用于表示视角对应的目标视角图像中的预设区域的位置;基于视角对应的区域预测结果预测得到各视角对应的各第一血管分割结果。上述的图像分割模块102用于利用融合子网络对各视角对应的各第一血管分割结果进行融合处理,得到目标医学图像的第二血管分割结果,包括:基于多个视角对应的第一血管分割结果,得到各视角对应的融合权重信息;基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到目标医学图像的第二血管分割结果。
其中,上述的对视角对应的目标特征图进行处理,得到视角对应的区域预测结果是由分割子网络的注意力层执行的;和/或,预设区域为血管的中心线;和/或,区域预测结果包括目标视角图像中各第一图像点为预设区域的概率信息。
其中,上述的视角对应的第一血管分割结果包括表示视角对应的目标视角图像中各第一图像点是否属于预设类别的第一预测信息,第二血管分割结果包括表示目标医学图像中各第二图像点是否属于预设类别的第二预测信息,预设类别包括至少一种血管类别和非血管类别。上述的图像分割模块102用于基于多个视角对应的第一血管分割结果,得到各视角对应的融合权重信息,包括:对于每个视角,基于视角的第一血管分割结果,得到视角对应的各第一图像点的融合权重;上述的图像分割模块102用于基于各视角对应的融合权重信息对多个视角对应的第一血管分割结果进行融合,得到目标医学图像的第二血管分割结果,包括:对于各第一图像点,基于第一图像点对应各视角的融合权重对第一图像点对应各视角的预测信息进行加权处理,得到目标医学图像中与第一图像点对应的第二图像点的第二预测信息。
其中,上述的图像分割模型为利用上述的图像分割模型的训练方法训练得到的。
其中,上述的目标医学图像为对器官扫描得到的三维图像;上述的多个视角包括横断位视角、矢状位视角、冠状位视角中的多种;上述的获取模块101用于获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,包括:对于每个视角,从视角对目标医学图像提取得到视角的若干子目标图像,并将视角的若干子目标图像进行拼接,得到视角对应的目标视角图像。
因此,通过利用图像分割模型对各目标视角图像进行图像分割,使得图像分割模型可以利用多个视角的目标视角图像的图像信息进行血管分割,有助于提高图像分割模型的分割准确度。
请参阅图11,图11是本申请电子设备一实施例的框架示意图。电子设备110包括相互耦接的存储器111和处理器112,处理器112用于执行存储器111中存储的程序指令,以实现上述任一图像分割模型的训练方法实施例的步骤,或实现上述任一图像分割方法实施例中的步骤。在一个具体的实施场景中,电子设备110可以包括但不限于:微型计算机、服务器,此外,电子设备110还可以包括笔记本电脑、平板电脑等移动设备,在此不做限定。
具体而言,处理器112用于控制其自身以及存储器111以实现上述任一图像分割模型的训练方法实施例的步骤,或实现上述任一图像分割方法实施例中的步骤。处理器112还可以称为CPU(Central Processing Unit,中央处理单元)。处理器112可能是一种集成电路芯片,具有信号的处理能力。处理器112还可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。另外,处理器112可以由多个集成电路芯片共同实现。
请参阅图12,图12为本申请计算机可读存储介质一实施例的框架示意图。计算机可读存储介质120存储有能够被处理器运行的程序指令121,程序指令121用于实现上述任一图像分割模型的训练方法实施例的步骤,或实现上述任一图像分割方法实施例中的步骤。
本申请计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备的处理器中运行时,所述电子设备中的处理器执行用于实现上述方法。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的模块可以用于执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述,为了简洁,这里不再赘述。
上文对各个实施例的描述倾向于强调各个实施例之间的不同之处,其相同或相似之处可以互相参考,为了简洁,本文不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的方法和装置,可以通过其它的方式实现。例如,以上所描述的装置实施方式仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性、机械或其它的形式。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施方式方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。

Claims (25)

  1. 一种图像分割模型的训练方法,其特征在于,包括:
    获取分别从多个视角对样本医学图像提取得到的多个样本视角图像,其中,所述样本医学图像包含血管;
    利用图像分割模型对至少一个所述样本视角图像进行图像分割,以得到与所述样本医学图像相关的血管分割结果;
    基于所述血管分割结果,调整所述图像分割模型的网络参数。
  2. 根据权利要求1所述的方法,其特征在于,所述图像分割模型包括分别与所述多个视角对应的多个分割子网络和融合子网络;所述利用图像分割模型对至少一个所述样本视角图像进行图像分割,以得到与所述样本医学图像相关的血管分割结果,包括:
    对于至少一个视角,利用所述视角对应的分割子网络对所述视角对应的样本视角图像进行图像分割,得到至少一个所述视角对应的第一血管分割结果;
    利用所述融合子网络对至少一个所述视角对应的第一血管分割结果进行融合处理,得到所述样本医学图像的第二血管分割结果;以及
    所述基于所述血管分割结果,调整所述图像分割模型的网络参数,包括以下至少一个步骤:
    对于至少一个视角,基于至少一个所述视角对应的所述第一血管分割结果与至少一个所述视角对应的局部血管分割标注信息,调整所述视角对应的分割子网络的参数;
    基于所述第二血管分割结果与所述样本医学图像的全局血管分割标注信息,调整至少一个所述分割子网络和/或融合子网络的参数。
  3. 根据权利要求2所述的方法,其特征在于,所述分割子网络包括依序连接的特征处理层、注意力层和预测层,所述调整所述视角对应的分割子网络的参数包括所述特征处理层、注意力层和预测层中至少一者的参数;
    所述利用所述视角对应的分割子网络对所述视角对应的样本视角图像进行图像分割,得到至少一个所述视角对应的第一血管分割结果,包括:
    利用所述特征处理层对所述视角对应的样本视角图像进行特征提取,得到所述视角对应的样本特征图;
    利用所述注意力层对所述视角对应的样本特征图进行处理,得到所述视角对应的区域预测结果,其中,所述视角对应的区域预测结果用于表示所述视角对应的样本视角图像中的预设区域的位置;
    利用所述预测层基于至少一个所述视角对应的区域预测结果预测得到至少一个所述视角对应的第一血管分割结果。
  4. 根据权利要求3所述的方法,其特征在于,所述局部血管分割标注信息包括表示所述样本视角图像的第一图像点是否属于预设类别的第一标注信息和表示所述第一图像点是否属于所述预设区域的第二标注信息,所述预设类别包括至少一种血管类别和非血管类别;
    所述基于所述视角对应的所述第一血管分割结果与所述视角对应的局部血管分割标注信息,调整所述视角对应的分割子网络的参数,包括以下至少一个步骤:
    基于所述视角对应的区域预测结果和所述视角对应的所述第二标注信息,至少调整所述注意力层的参数;
    基于至少一个所述视角对应的第一血管分割结果、所述视角对应的所述第一标注信息,调整所述特征处理层、注意力层和预测层中至少一者的参数。
  5. 根据权利要求4所述的方法,其特征在于,所述分割子网络包括依序连接的至少一个处理单元和所述预测层,所述处理单元包括特征处理层,至少部分所述处理单元还包括连接于特征处理层之后的注意力层,所述预测层基于至少一所述注意力层输出的区域预测结果得到所述第一血管分割结果,所述注意力层的参数是基于对应至少一个所述注意力层的区域预测结果和所述视角对应的所述第二标注信息调整的;
    和/或,所述基于所述视角对应的区域预测结果和所述视角对应的所述第二标注信息,至少调整所述注意力层的参数,包括:
    利用至少一个所述注意力层输出的区域预测结果和所述视角对应的所述第二标注信息之间的差异,对应得到至少一个所述注意力层的第一损失值;
    对至少一个所述注意力层的第一损失值进行融合,得到第二损失值;
    基于所述第二损失值,调整至少一个所述注意力层的参数。
  6. 根据权利要求5所述的方法,其特征在于,所述第一损失值是利用正则化损失函数确定得到的;
    和/或,所述利用至少一个所述注意力层输出的区域预测结果和所述视角对应的所述第二标注信息之间的第一差异,对应得到至少一个所述注意力层的第一损失值,包括:
    利用至少一个所述注意力层对应的所述差异、至少一个结构权重,对应得到至少一个所述注意力层的第一损失值,其中,所述至少一个结构权重为所述注意力层的权重和/或所述注意力层所在的分割子网络的权重;
    和/或,所述对至少一个所述注意力层的第一损失值进行融合,得到第二损失值,包括:
    利用至少一个所述注意力层的损失权重对至少一个所述注意力层的第一损失值进行加权处理,得到所述第二损失值。
  7. 根据权利要求6所述的方法,其特征在于,越靠近所述预测层的所述注意力层的所述损失权重越大。
  8. 根据权利要求2所述的方法,其特征在于,所述融合子网络包括权重确定层和融合输出层,调整的所述融合子网络的参数包括所述权重确定层和/或融合输出层的参数;所述利用所述融合子网络对至少一个所述视角对应的第一血管分割结果进行融合处理,得到所述样本医学图像的第二血管分割结果,包括:
    利用所述权重确定层对所述多个视角对应的第一血管分割结果进行处理,得到至少一个所述视角对应的融合权重信息;
    利用所述融合输出层基于至少一个所述视角对应的融合权重信息对所述多个视角对应的第一血管分割结果进行融合,得到所述样本医学图像的第二血管分割结果。
  9. 根据权利要求2所述的方法,其特征在于,所述全局血管分割标注信息包括表示所述样本医学图像的第二图像点是否属于预设类别的第三标注信息,所述第二血管分割结果包括表示至少一个所述第二图像点是否属于预设类别的预测信息,所述预设类别包括至少一种血管类别和非血管类别;所述基于所述第二血管分割结果与所述样本医学图像的全局血管分割标注信息,调整至少一个所述分割子网络和/或融合子网络的参数,包括:
    基于至少一个所述第二图像点与样本医学图像中血管的预设区域之间的位置关系,确定至少一个所述第二图像点的位置权重;以及
    基于至少一个所述第二图像点对应的预测信息和所述第三标注信息,得到至少一个所述第二图像点的第三损失值;
    利用至少一个所述第二图像点的位置权重对至少一个所述第二图像点的第三损失值进行加权处理,得到第四损失值;
    基于所述第四损失值,调整至少一个所述分割子网络和/或融合子网络的参数。
  10. 根据权利要求9所述的方法,其特征在于,所述基于至少一个所述第二图像点与样本医学图像中血管的预设区域之间的位置关系,确定至少一个所述第二图像点的位置权重,包括:
    确定至少一个所述第二图像点的参考距离,其中,属于所述血管类别的第二图像点的参考距离为所述第二图像点与样本医学图像中所述血管的预设区域之间的距离,属于所述非血管类别的第二图像点的参考距离为预设距离值;
    基于至少一个所述第二图像点的参考距离,确定至少一个所述第二图像点的位置权重。
  11. 根据权利要求10所述的方法,其特征在于,属于所述血管类别的第二图像点的参考距离越大,对应的位置权重越大,属于所述非血管类别的第二图像点的位置权重为预设权重值;
    和/或,所述全局血管分割标注信息还包括表示所述第二图像点是否属于血管的预设区域的第四标注信息;在所述确定至少一个所述第二图像点的参考距离之前,所述方法还包括:
    利用所述第四标注信息,确定所述预设区域在所述样本医学图像中的位置;
    利用所述第二血管分割结果或所述第三标注信息,确定所述样本医学图像中至少一个所述第二图像点为属于所述血管类别或属于所述非血管类别。
  12. 根据权利要求4或9所述的方法,其特征在于,所述预设区域为中心线,和/或,所述至少一种血管类别包括动脉和静脉中的至少一种。
  13. 根据权利要求1所述的方法,其特征在于,所述样本医学图像为对器官扫描得到的三维图像;
    和/或,所述多个视角包括横断位视角、矢状位视角、冠状位视角中的多种;
    和/或,所述获取分别从多个视角对样本医学图像提取得到的多个样本视角图像,包括:
    对于至少一个所述视角,从所述视角对所述样本医学图像提取得到所述视角的若干子样本图像,并将所述视角的若干子样本图像进行拼接,得到所述视角对应的样本视角图像。
  14. 一种图像分割方法,其特征在于,包括:
    获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,其中,所述目标医学图像包含血管;
    利用图像分割模型对至少一个所述目标视角图像进行图像分割,以得到与所述目标医学图像相关的血管分割结果。
  15. 根据权利要求14所述的方法,其特征在于,所述图像分割模型包括分别与所述多个视角对应的多个分割子网络和融合子网络;所述利用图像分割模型对至少一个所述目标视角图像进行图像分割,以得到与所述目标医学图像相关的血管分割结果,包括:
    对于至少一个视角,利用所述视角对应的分割子网络对所述视角对应的目标视角图像进行图像分割,得到至少一个所述视角对应的第一血管分割结果;
    利用所述融合子网络对至少一个所述视角对应的第一血管分割结果进行融合处理,得到所述目标医学图像的第二血管分割结果。
  16. 根据权利要求15所述的方法,其特征在于,所述利用所述视角对应的分割子网络对所述视角对应的目标视角图像进行图像分割,得到至少一个所述视角对应的第一血管分割结果,包括:
    对所述视角对应的目标视角图像进行特征提取,得到所述视角对应的目标特征图;
    对所述视角对应的目标特征图进行处理,得到所述视角对应的区域预测结果,其中,所述视角对应的区域预测结果用于表示所述视角对应的目标视角图像中的预设区域的位置;
    基于所述视角对应的区域预测结果预测得到至少一个所述视角对应的第一血管分割结果;
    和/或,所述利用所述融合子网络对至少一个所述视角对应的第一血管分割结果进行融合处理,得到所述目标医学图像的第二血管分割结果,包括:
    基于所述多个视角对应的第一血管分割结果,得到至少一个所述视角对应的融合权重信息;
    基于至少一个所述视角对应的融合权重信息对所述多个视角对应的第一血管分割结果进行融合,得到所述目标医学图像的第二血管分割结果。
  17. 根据权利要求16所述的方法,其特征在于,所述对所述视角对应的目标特征图进行处理,得到所述视角对应的区域预测结果是由所述分割子网络的注意力层执行的;
    和/或,所述预设区域为血管的中心线;
    和/或,所述区域预测结果包括所述目标视角图像中至少一个第一图像点为所述预设区域的概率信息。
  18. 根据权利要求16所述的方法,其特征在于,所述视角对应的第一血管分割结果包括表示所述视角对应的目标视角图像中至少一个第一图像点是否属于预设类别的第一预测信息,所述第二血管分割结果包括表示所述目标医学图像中至少一个第二图像点是否属于预设类别的第二预测信息,所述预设类别包括至少一种血管类别和非血管类别;
    所述基于所述多个视角对应的第一血管分割结果,得到至少一个所述视角对应的融合权重信息,包括:
    对于至少一个所述视角,基于所述视角的第一血管分割结果,得到所述视角对应的至少一个所述第一图像点的融合权重;
    所述基于至少一个所述视角对应的融合权重信息对所述多个视角对应的第一血管分割结果进行融合,得到所述目标医学图像的第二血管分割结果,包括:
    对于至少一个所述第一图像点,基于所述第一图像点对应至少一个视角的融合权重对所述第一图像点对应至少一个视角的预测信息进行加权处理,得到所述目标医学图像中与所述第一图像点对应的第二图像点的第二预测信息。
  19. 根据权利要求16所述的方法,其特征在于,所述图像分割模型为利用上述权利要求1-13任一项所述的图像分割模型的训练方法训练得到的。
  20. 根据权利要求16所述的方法,其特征在于,所述目标医学图像为对器官扫描得到的三维图像;
    和/或,所述多个视角包括横断位视角、矢状位视角、冠状位视角中的多种;
    和/或,所述获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,包括:
    对于至少一个所述视角,从所述视角对所述目标医学图像提取得到所述视角的若干子目标图像,并将所述视角的若干子目标图像进行拼接,得到所述视角对应的目标视角图像。
  21. 一种图像分割模型的训练装置,其特征在于,包括:
    获取模块,用于获取分别从多个视角对样本医学图像提取得到的多个样本视角图像,其中,所述样本医学图像包含血管;
    图像分割模块,用于利用图像分割模型对至少一个所述样本视角图像进行图像分割,以得到与所述样本医学图像相关的血管分割结果;
    参数调整模块,用于基于所述血管分割结果,调整所述图像分割模型的网络参数。
  22. 一种图像分割装置,其特征在于,包括:
    获取模块,用于获取分别从多个视角对目标医学图像提取得到的多个目标视角图像,其中,所述目标医学图像包含血管;
    图像分割模块,用于利用图像分割模型对至少一个所述目标视角图像进行图像分割,以得到与所述目标医学图像相关的血管分割结果。
  23. 一种电子设备,其特征在于,包括相互耦接的存储器和处理器,所述处理器用于执行所述存储器中存储的程序指令,以实现权利要求1至13任一项所述的图像分割模型的训练方法,或实现权利要求14-20任一项所述的图像分割方法。
  24. 一种计算机可读存储介质,其上存储有程序指令,其特征在于,所述程序指令被处理器执行时实现权利要求1至13任一项所述的图像分割模型的训练方法,或实现权利要求14-20任一项所述的图像分割方法。
  25. 一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备的处理器中运行时,所述电子设备中的处理器用于实现权利要求1-13任一项所述的图像分割模型的训练方法,或实现权利要求14-20任一项所述的图像分割方法。
PCT/CN2022/093458 2021-10-29 2022-05-18 图像分割方法及相关模型的训练方法和装置、设备 WO2023071154A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111274342.9A CN113989293A (zh) 2021-10-29 2021-10-29 图像分割方法及相关模型的训练方法和装置、设备
CN202111274342.9 2021-10-29

Publications (1)

Publication Number Publication Date
WO2023071154A1 (zh)

Family

ID=79744610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093458 WO2023071154A1 (zh) 2021-10-29 2022-05-18 图像分割方法及相关模型的训练方法和装置、设备

Country Status (2)

Country Link
CN (1) CN113989293A (zh)
WO (1) WO2023071154A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989293A (zh) * 2021-10-29 2022-01-28 上海商汤智能科技有限公司 图像分割方法及相关模型的训练方法和装置、设备
CN114494668B (zh) * 2022-04-13 2022-07-15 腾讯科技(深圳)有限公司 三维模型的展开方法、装置、设备及存储介质
CN115170912B (zh) * 2022-09-08 2023-01-17 北京鹰瞳科技发展股份有限公司 图像处理模型训练的方法、生成图像的方法及相关产品
CN115908457B (zh) * 2023-01-06 2023-05-23 脑玺(苏州)智能科技有限公司 低密度梗死区分割方法、分析方法、装置、系统、设备及介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10430946B1 (en) * 2019-03-14 2019-10-01 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation and severity grading using neural network architectures with semi-supervised learning techniques
CN111768418A (zh) * 2020-06-30 2020-10-13 北京推想科技有限公司 图像分割方法及装置、图像分割模型的训练方法
CN112037186A (zh) * 2020-08-24 2020-12-04 杭州深睿博联科技有限公司 一种基于多视图模型融合的冠脉血管提取方法及装置
CN112561868A (zh) * 2020-12-09 2021-03-26 深圳大学 一种基于多视角级联深度学习网络的脑血管分割方法
CN113409320A (zh) * 2021-05-18 2021-09-17 珠海横乐医学科技有限公司 基于多注意力的肝脏血管分割方法及系统
CN113989293A (zh) * 2021-10-29 2022-01-28 上海商汤智能科技有限公司 图像分割方法及相关模型的训练方法和装置、设备
CN114445376A (zh) * 2022-01-27 2022-05-06 上海商汤智能科技有限公司 图像分割方法及其模型训练方法和相关装置、设备、介质

Also Published As

Publication number Publication date
CN113989293A (zh) 2022-01-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22885067

Country of ref document: EP

Kind code of ref document: A1