CN116128895A - Medical image segmentation method, apparatus and computer readable storage medium - Google Patents

Medical image segmentation method, apparatus and computer readable storage medium

Info

Publication number
CN116128895A
Authority
CN
China
Prior art keywords
image
feature map
sample
target object
segmentation
Prior art date
Legal status
Pending
Application number
CN202310086039.9A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Shanghai Jiehang Robot Co ltd
Original Assignee
Shanghai Jiehang Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jiehang Robot Co ltd filed Critical Shanghai Jiehang Robot Co ltd
Priority to CN202310086039.9A
Publication of CN116128895A
Legal status: Pending

Classifications

    • G06T 7/11 Region-based segmentation (under G06T 7/00 Image analysis; G06T 7/10 Segmentation; Edge detection)
    • G06N 3/08 Learning methods (under G06N 3/02 Neural networks; G06N 3/00 Computing arrangements based on biological models)
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/0012 Biomedical image inspection (under G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 2207/10088 Magnetic resonance imaging [MRI] (under G06T 2207/10072 Tomographic images; G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081 Training; Learning (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; Image merging (under G06T 2207/20212 Image combination)
    • G06T 2207/30096 Tumor; Lesion (under G06T 2207/30004 Biomedical image processing; G06T 2207/30 Subject of image)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application relates to a medical image segmentation method, a medical image segmentation device and a storage medium. The method comprises the following steps: acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes; and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and fusing the first feature map, the second feature map and the third feature map to obtain a prediction segmentation image of the target object. By adopting the method, the accuracy with which the medical image segmentation model segments the focus area of the target object can be improved.

Description

Medical image segmentation method, apparatus and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a medical image segmentation method, apparatus, and computer readable storage medium.
Background
With the development of artificial intelligence (AI) and AI-based deep learning image processing technologies, deep learning algorithms have been widely used in the development and application of medical image aided diagnosis systems. Deep learning-based prostate organ segmentation can help radiologists improve image-reading efficiency and reduce misdiagnosis and missed-diagnosis rates.
In the prior art, the prostate organ region and prostate lesion regions are automatically segmented from the T2-weighted (T2W) sequence image of a magnetic resonance examination. A prostate organ may contain multiple focus areas, but the traditional technique segments prostate focus areas with low precision and cannot find all of them, which affects the doctor's judgment of the actual focus areas in the prostate organ and in turn affects the planning of surgery.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a medical image segmentation method, an apparatus, a computer device, a computer readable storage medium and a computer program product.
In a first aspect, the present application provides a medical image segmentation method. The method comprises the following steps:
acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain corresponding first feature images, second feature images and third feature images, and carrying out fusion processing on the first feature images, the second feature images and the third feature images to obtain a prediction segmentation image of the target object.
In one embodiment, the first image comprises a sequence image obtained by weighted imaging during a magnetic resonance examination, the second image comprises a sequence image obtained by diffusion weighted imaging during a magnetic resonance examination, and the third image comprises a sequence image based on apparent diffusion coefficients during a magnetic resonance examination.
In one embodiment, the first image, the second image and the third image are captured at the same viewing angle of the target object.
In one embodiment, a training process for a medical image segmentation model includes:
acquiring a training image sample of a target object and a corresponding label image, wherein the training image sample comprises a first image sample, a second image sample and a third image sample, the first image sample, the second image sample and the third image sample are obtained by imaging the target object in different nuclear magnetic imaging modes, and the label image comprises a focus sketching area of the target object and a sketching area of the target object;
extracting features of the first image sample, the second image sample and the third image sample through a medical image segmentation model to be trained to obtain a corresponding first sample feature map, a corresponding second sample feature map and a corresponding third sample feature map; carrying out fusion processing on the first sample feature map, the second sample feature map and the third sample feature map to obtain a predicted sample segmentation image, wherein the predicted sample segmentation image comprises a first predicted sample segmentation image and a second predicted sample segmentation image, and the predicted sample segmentation image comprises a predicted focus sketching area of a target object and a predicted sketching area of the target object;
and updating parameters of the medical image segmentation model based on the differences between the label image and the first prediction sample segmentation image and the second prediction sample segmentation image, respectively, to obtain a trained medical image segmentation model.
In one embodiment, updating parameters of the medical image segmentation model based on differences between the label image and the first and second prediction sample segmentation images, respectively, comprises:
determining a first loss value based on a difference between the label image and the first prediction sample segmentation image;
determining a second loss value based on a difference between the label image and the second prediction sample segmentation image;
based on the first loss value and the second loss value, parameters of the medical image segmentation model are updated.
In one embodiment, the fusing processing is performed on the first feature map, the second feature map and the third feature map to obtain a predicted segmented image of the target object, including:
multiplying the first feature map, the second feature map and the third feature map with respective preset weights to obtain a first weight feature map corresponding to the first feature map, a second weight feature map corresponding to the second feature map and a third weight feature map corresponding to the third feature map;
and carrying out fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a prediction segmentation image of the target object.
In one embodiment, the prediction segmentation image comprises a first prediction segmentation image, and the medical image segmentation model comprises a 3D neural network sub-model; performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain the prediction segmentation image of the target object comprises the following steps:
carrying out global average pooling operation on the first feature map to obtain an important coefficient matrix of the first feature map;
performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a first fusion feature map; multiplying the important coefficient matrix with the first fusion feature map to obtain a reference fusion feature map;
and inputting the reference fusion feature map into a 3D neural network sub-model to obtain a first prediction segmentation image of the target object.
In one embodiment, the prediction segmentation image comprises a second prediction segmentation image, and the medical image segmentation model further comprises a classification network sub-model; obtaining the prediction segmentation image of the target object comprises the following steps:
performing fusion processing on the second weight feature map and the third weight feature map to obtain a second fusion feature map;
according to the second fusion feature map, a first probability matrix and a second probability matrix are obtained through the classification network submodel, wherein the first probability matrix is used for describing the probability of the area where the focus of the target object is located, and the second probability matrix is used for describing the probability of the area where the target object is located;
and weighting the first weight feature map according to the first probability matrix and the second probability matrix to obtain a second prediction segmentation image of the target object.
In a second aspect, the present application also provides a medical image segmentation apparatus. The device comprises:
the image acquisition module is used for acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
the feature extraction module is used for extracting features of the first image, the second image and the third image through the medical image segmentation model to obtain corresponding first feature images, second feature images and third feature images, and carrying out fusion processing on the first feature images, the second feature images and the third feature images to obtain a prediction segmentation image of the target object.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain corresponding first feature images, second feature images and third feature images, and carrying out fusion processing on the first feature images, the second feature images and the third feature images to obtain a prediction segmentation image of the target object.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain corresponding first feature images, second feature images and third feature images, and carrying out fusion processing on the first feature images, the second feature images and the third feature images to obtain a prediction segmentation image of the target object.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain corresponding first feature images, second feature images and third feature images, and carrying out fusion processing on the first feature images, the second feature images and the third feature images to obtain a prediction segmentation image of the target object.
The medical image segmentation method, apparatus, computer device, storage medium and computer program product acquire a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image obtained by imaging the target object in different nuclear magnetic imaging modes; features of the first image, the second image and the third image are extracted through a medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and the feature maps are fused to obtain a prediction segmentation image of the target object. Because the second image and the third image differ from the first image in how clearly the focus is displayed, fusing their feature maps onto the feature map of the first image incorporates the focus area features extracted from the second and third images into the feature map of the first image and enhances the characterization of the focus area there, thereby improving the medical image segmentation model's ability to identify and sketch the focus of the target object and the precision with which it segments the focus area.
Drawings
FIG. 1 is a flow chart of a method of segmentation of medical images in one embodiment;
FIG. 2 is a schematic representation of MRI in one embodiment;
FIG. 3 is a schematic diagram of an extracted feature map in one embodiment;
FIG. 4 is a schematic diagram of a 3D U-Net neural network in one embodiment;
FIG. 5 is a schematic diagram of image transformation processing of a medical image in one embodiment;
FIG. 6 is a flow diagram of calculating a second loss value according to one embodiment;
FIG. 7 is a schematic diagram of a medical image segmentation model in one embodiment;
FIG. 8 is a schematic diagram of a predicted sample segmentation image in one embodiment;
FIG. 9 is a flow chart of the weighted fusion of the feature maps of each sequence image in one embodiment;
FIG. 10 is a schematic diagram illustrating calculation of the weighted fusion matrix for the feature maps of each sequence image in one embodiment;
FIG. 11 is a schematic diagram of a probability matrix in one embodiment;
FIG. 12 is a flow chart of a method of segmentation of medical images in one embodiment;
FIG. 13 is a block diagram showing the structure of a medical image segmentation apparatus according to an embodiment;
FIG. 14 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a medical image segmentation method is provided. The method is described here as applied to a terminal by way of illustration; it can be understood that the method may also be applied to a server, or to a system including the terminal and the server, implemented through interaction between the terminal and the server. In this embodiment, the method includes the steps of:
101. acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
102. and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain corresponding first feature images, second feature images and third feature images, and carrying out fusion processing on the first feature images, the second feature images and the third feature images to obtain a prediction segmentation image of the target object.
Wherein the target object may be a tissue or organ of an animal or human body. For example, the target object may be a prostate organ. The first image, the second image and the third image being obtained by imaging the target object using different nuclear magnetic imaging modes means that the three images are obtained by different modes of magnetic resonance imaging (MRI). The second image and the third image are used to assist in determining the focus area of the target object.
Specifically, the first image, the second image and the third image of the target object are respectively input into different convolutional neural network channels of the medical image segmentation model, and focus area features of the target object are respectively extracted to obtain a first feature map corresponding to the first image, a second feature map corresponding to the second image and a third feature map corresponding to the third image. The first feature map, the second feature map and the third feature map are then weighted and fused to obtain a fused feature map; the fused feature map is input into a neural network, which further extracts the focus features of the target object and outputs a prediction segmentation image comprising the predicted focus sketching area and the predicted sketching area of the target object.
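In one example, the inference flow above can be sketched as follows. This is a minimal illustration assuming PyTorch; the single-convolution "encoders", channel counts and tensor shapes are placeholders standing in for the actual network of this application, with the fusion weights borrowed from a later embodiment (0.5 for T2W, 0.25 each for DWI and ADC).

```python
# Minimal sketch of the three-branch inference flow (PyTorch assumed).
# Encoders, channel counts and shapes are illustrative placeholders.
import torch
import torch.nn as nn

class MultiSequenceSegmenter(nn.Module):
    def __init__(self, in_ch=1, feat_ch=32, num_classes=2):
        super().__init__()
        # One convolutional channel (branch) per nuclear magnetic sequence.
        self.enc_t2w = nn.Conv3d(in_ch, feat_ch, 3, padding=1)
        self.enc_dwi = nn.Conv3d(in_ch, feat_ch, 3, padding=1)
        self.enc_adc = nn.Conv3d(in_ch, feat_ch, 3, padding=1)
        # Preset fusion weights; T2W dominates (values from a later embodiment).
        self.w = (0.5, 0.25, 0.25)
        self.head = nn.Conv3d(feat_ch, num_classes, 1)

    def forward(self, t2w, dwi, adc):
        f1 = torch.relu(self.enc_t2w(t2w))   # first feature map
        f2 = torch.relu(self.enc_dwi(dwi))   # second feature map
        f3 = torch.relu(self.enc_adc(adc))   # third feature map
        fused = self.w[0] * f1 + self.w[1] * f2 + self.w[2] * f3
        return self.head(fused)              # per-voxel logits for organ/focus

vol = torch.randn(1, 1, 16, 64, 64)               # (batch, channel, D, H, W)
logits = MultiSequenceSegmenter()(vol, vol, vol)  # same-view T2W/DWI/ADC volumes
```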
Performing fusion processing on the first feature map, the second feature map and the third feature map to obtain the prediction segmentation image of the target object comprises the following steps:
in the medical image segmentation model, carrying out fusion processing on the first feature map, the second feature map and the third feature map to obtain a fused feature map, and then processing the fused feature map to obtain the prediction segmentation image of the target object.
Taking the target object as a prostate organ as an example, processing the fused feature map to obtain the prediction segmentation image of the target object includes: in the probability matrix corresponding to the focus area of the fused feature map, setting the pixel value of each pixel whose probability value is greater than a first preset value to 1 and the pixel values of the remaining pixels to 0, to obtain a first region feature map corresponding to the focus area; and in the probability matrix corresponding to the prostate organ area of the fused feature map, setting the pixel value of each pixel whose probability value is greater than a second preset value to 1 and the pixel values of the remaining pixels to 0, to obtain a second region feature map corresponding to the prostate organ area. The first region feature map is loaded into the second region feature map and the pixel values of corresponding pixels are summed to obtain a third region feature map; image processing is then performed on the third region feature map to obtain the focus area of the prostate organ and the prostate organ area. An image including the focus area of the prostate organ and the prostate organ area is taken as the prediction segmentation image of the target object.
In one example, after the segmented image including the focus area of the prostate organ and the prostate organ area is obtained, the segmented image is treated as a binary image and its outer boundary coordinate points are searched with a boundary algorithm. When searching for the contour coordinate points of the prostate organ area, the prostate organ area a and the focus area B are taken together as an integral region T to determine the boundary point coordinates of the prostate organ area a; when determining the boundary points of the focus area B, the region other than the focus area B is taken as an integral region U, so as to determine the coordinates of the boundary points of the focus area B.
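In one example, the thresholding, mask combination and boundary search above can be sketched as follows, assuming 2D NumPy probability maps; the threshold values and the use of OpenCV's findContours as the boundary algorithm are assumptions, not the application's stated implementation.

```python
# Sketch of thresholding and boundary tracing; thresholds t1/t2 and the use
# of OpenCV for contour search are assumptions.
import numpy as np
import cv2

def extract_regions(p_focus, p_organ, t1=0.5, t2=0.5):
    focus_mask = (p_focus > t1).astype(np.uint8)   # first region feature map
    organ_mask = (p_organ > t2).astype(np.uint8)   # second region feature map
    combined = focus_mask + organ_mask             # third region feature map

    # Organ contour: organ area a plus focus area B as one integral region T.
    whole_T = ((organ_mask | focus_mask) * 255).astype(np.uint8)
    organ_contours, _ = cv2.findContours(whole_T, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
    # Focus contour: trace the focus region B by itself.
    focus_img = (focus_mask * 255).astype(np.uint8)
    focus_contours, _ = cv2.findContours(focus_img, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
    return combined, organ_contours, focus_contours

p = np.random.rand(128, 128)
combined, organ_c, focus_c = extract_regions(p, p)
```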
According to the method provided by the embodiment of the invention, the focus area in the prostate organ can be completely segmented through the medical image segmentation model, and the accuracy of determining the focus area in the prostate organ is improved, so that the accuracy of judging the focus area in the prostate organ by a doctor is improved, and the safety of operation planning is further improved.
In combination with the foregoing embodiments, in one embodiment, the first image includes a sequence image based on a weighted imaging method in a magnetic resonance examination, the second image includes a sequence image based on a diffusion weighted imaging method in a magnetic resonance examination, and the third image includes a sequence image based on apparent diffusion coefficients in a magnetic resonance examination.
The first image comprises a T2-weighted (T2W) sequence image. In the T2-weighted imaging mode, both the sequence repetition time and the echo time are long, and the gray level of the image is mainly determined by the T2 relaxation speed of the tissue; such images display tissue lesions well and are also cross-sectional images of the tissue.
The second image comprises a diffusion weighted imaging (DWI) sequence image. Diffusion weighted imaging differentiates tissues using the degree and direction of water diffusion between normal and pathological tissue; because malignant tumor cells grow densely, the diffusion of free water in malignant tumors is restricted and produces a high signal, so DWI is often used for diagnosing tumors.
The third image comprises an apparent diffusion coefficient (ADC) sequence image, which is an image computed on the basis of diffusion weighted imaging (DWI) and mainly describes the speed of water molecule diffusion in the tissue structure. On ADC imaging, a tumor area appears dark, opposite to its appearance in the DWI sequence image, and the ADC sequence image is commonly used together with the DWI sequence image to assess tumors.
According to the method provided by the embodiment of the invention, the shooting view angles of the first image, the second image and the third image are unified, so that the shooting contents of the first image, the second image and the third image are kept consistent, and the fusion efficiency and the accuracy of the first feature map, the second feature map and the third feature map can be improved.
Of course, the first, second and third images may also be other types of medical images. It should be noted that, in the embodiment of the present invention, the first image is segmented by the medical image segmentation model; that is, the prediction segmentation image is obtained based on segmentation of the first image. The second image and the third image are used in the medical image segmentation model to assist the segmentation of the first image.
According to the method provided by the embodiment of the invention, the first image is segmented through the medical image segmentation model, and the second image and the third image are subjected to auxiliary segmentation, so that the segmentation accuracy of the predictive segmented image can be improved.
In combination with the foregoing embodiments, in one embodiment, the first image, the second image, and the third image are taken from the same perspective of the target object.
The first image, the second image and the third image are nuclear magnetic resonance imaging obtained by nuclear magnetic resonance examination shooting of the same part of the target object at the same shooting view angle. For example, the first image is a T2W sequence image of a cross section of a prostate organ, the second image is a DWI sequence image of a cross section of a prostate organ, and the third image is an ADC sequence image of a cross section of a prostate organ.
The medical image to be segmented may be magnetic resonance imaging of the sagittal plane (SAG), the coronal plane (COR) or the cross section (TRA). Fig. 2 is a schematic diagram of the sagittal, coronal and transverse planes of a T2W sequence image, a DWI sequence image and an ADC sequence image, where SAG denotes sagittal-plane imaging, COR coronal-plane imaging and TRA transverse-plane imaging; T1W and T2W denote T1- and T2-weighted magnetic resonance imaging, respectively; and b denotes the diffusion gradient intensity (the b-value).
In combination with the foregoing embodiments, in one embodiment, before performing feature extraction on the first image, the second image, and the third image, the method includes:
and performing image transformation processing on the medical image to be segmented to obtain a tensor-form medical image matrix, wherein the tensor-form medical image matrix contains image pixel values.
Taking the target object as a prostate organ as an example, image transformation processing is performed on the medical image to be segmented to obtain a medical image matrix that contains the image pixel values and is in tensor form; that is, the image to be segmented becomes a feature matrix in tensor form.
According to the method provided by the embodiment of the invention, the image information of the medical image to be segmented is obtained by carrying out image transformation processing on the medical image to be segmented, so that the complexity of the medical image to be segmented can be reduced, and the segmentation efficiency of the medical image segmentation model is improved.
In combination with the foregoing embodiments, in one embodiment, a training process for a medical image segmentation model includes:
acquiring a training image sample of a target object and a corresponding label image, wherein the training image sample comprises a first image sample, a second image sample and a third image sample, the first image sample, the second image sample and the third image sample are obtained by imaging the target object in different nuclear magnetic imaging modes, and the label image comprises a focus sketching area of the target object and a sketching area of the target object;
wherein the target object may be a tissue or organ of an animal or human body. For example, the target object may be a prostate organ, and the first image sample, the second image sample and the third image sample are obtained by imaging the target object by using different nuclear magnetic resonance imaging modes, which means that the first image sample, the second image sample and the third image sample are all obtained by different nuclear magnetic resonance imaging modes. The second image sample and the third image sample are lesion areas for assisting in determining the target object. In the present invention, the image samples are all medical images.
The label image refers to an area sketching image obtained by a doctor sketching the focus area and the target object area of the target object in the first image sample; correspondingly, the label image comprises a focus sketching area and a target object sketching area. For example, if the target object is a prostate organ and the first image sample is a T2W sequence image sample, the label image is the image obtained after a doctor delineates the focus region and the target region in the T2W sequence image sample.
In the present invention, the nuclear magnetic resonance image sequence, the magnetic resonance imaging, and the magnetic resonance image refer to images obtained by nuclear magnetic resonance examination.
Extracting features of the first image sample, the second image sample and the third image sample through a medical image segmentation model to be trained to obtain a corresponding first sample feature map, a corresponding second sample feature map and a corresponding third sample feature map; and carrying out fusion processing on the first sample feature map, the second sample feature map and the third sample feature map to obtain a predicted sample segmentation image, wherein the predicted sample segmentation image comprises a first predicted sample segmentation image and a second predicted sample segmentation image, and the predicted sample segmentation image comprises a predicted focus sketching area of a target object and a predicted sketching area of the target object.
Specifically, the medical image segmentation model comprises convolutional neural network sub-models with three channels; after the first image sample, the second image sample and the third image sample are respectively input into the convolutional neural network channels, a first sample feature map corresponding to the first image sample, a second sample feature map corresponding to the second image sample and a third sample feature map corresponding to the third image sample are respectively obtained.
It should be noted that, before the training image samples are input into the medical image segmentation model, the training image samples and the label images need to undergo image transformation processing, and the medical image segmentation model to be trained is then trained on the transformed training image samples and label images. The image transformation processing includes image size normalization and Gaussian filtering; after normalization and Gaussian filtering, the magnetic resonance image is converted into tensor form to obtain a numerical matrix of the image.
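A minimal sketch of this image transformation step is given below, assuming NumPy/SciPy/PyTorch; the target matrix size, filter sigma and min-max intensity normalization are illustrative assumptions.

```python
# Sketch of the image transformation: size normalization, Gaussian filtering,
# then conversion to a tensor-form numerical matrix. Sizes/sigma are assumed.
import numpy as np
import torch
from scipy.ndimage import gaussian_filter, zoom

def image_transform(volume: np.ndarray, target=(16, 128, 128), sigma=1.0):
    factors = [t / s for t, s in zip(target, volume.shape)]
    resized = zoom(volume, factors, order=1)          # size normalization
    smoothed = gaussian_filter(resized, sigma=sigma)  # Gaussian filtering
    # Min-max intensity normalization (assumed) before tensor conversion.
    smoothed = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-8)
    return torch.from_numpy(smoothed).float().unsqueeze(0)  # add channel dim

t = image_transform(np.random.rand(20, 256, 256))
```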
Inputting a first image sample, a second image sample and a third image sample of a target object into different convolution neural network channels of a medical image segmentation model to be trained, respectively extracting focus region features in the target object to obtain a first sample feature map, a second sample feature map and a third sample feature map, then carrying out weighted fusion on the first sample feature map, the second sample feature map and the third sample feature map to obtain a fused sample feature map, and continuously extracting focus features of the target object through a neural network on the fused sample feature map to output a predicted sample segmentation image comprising a predicted focus sketching region and a predicted sketching region of the target object.
For example, three nuclear magnetic image samples (T2W, DWI and ADC) of the prostate organ are input into different convolutional neural network channels of the medical image segmentation model to be trained, and the focus area features in the prostate organ are respectively extracted to obtain the sample feature maps output by the three channels; these are weighted and fused to obtain a fused sample feature map. Feature extraction is then performed on the fused sample feature map through a neural network to obtain the focus features of the prostate organ and the features of the prostate organ, and finally a predicted sample segmentation image containing the prostate organ region and the focus regions in the prostate organ is output.
Based on the difference between the label image and the first prediction sample segmentation image and the second prediction sample segmentation image respectively, updating parameters of the medical image segmentation model to obtain a trained medical image segmentation model.
Specifically, the difference values between the label image and the first prediction sample segmentation image and between the label image and the second prediction sample segmentation image are computed, the medical image segmentation model is trained in the next iteration according to these difference values, and the medical image segmentation model corresponding to the smallest of all the difference values is taken as the trained medical image segmentation model.
According to the method provided by the embodiment of the invention, because focus areas have different signal intensities in images of different imaging modes, training the medical image segmentation model with image samples of multiple imaging modes can improve the model's ability to identify and sketch focus areas, thereby improving its segmentation precision.
In combination with the above embodiments, in one embodiment, the first image sample comprises an image sample based on a magnetic resonance examination T2W sequence image, the second image sample comprises an image sample based on a magnetic resonance examination DWI sequence image, and the third image sample comprises an image sample based on a magnetic resonance examination ADC sequence image.
Specifically, the first image sample is a T2W sequence image, the second image sample is a diffusion-weighted imaging (DWI) sequence image, and the third image sample is an apparent diffusion coefficient (Apparent Diffusion Coefficient, ADC) sequence image.
In addition, the first image sample, the second image sample and the third image sample are all nuclear magnetic resonance imaging obtained by carrying out nuclear magnetic resonance examination shooting on the same part of the target object at the same angle. For example, the first image sample is a T2W sequence image of a cross section of a prostate organ, the second image sample is a DWI sequence image of a cross section of a prostate organ, and the third image sample is an ADC sequence image of a cross section of a prostate organ. The second image sample and the third image sample are lesion areas for assisting in determining the target object.
Fig. 3 is a schematic diagram of feature extraction of a first image sample, a second image sample, and a third image sample, in one example, and in fig. 3, the convolutional neural network is a 3D U-Net neural network.
FIG. 4 is a schematic diagram of the structure of a 3D U-Net neural network. U-Net is a convolutional neural network structure used in the medical image segmentation model, which comprises an input layer, convolutional layers, upsampling layers, downsampling layers and a final output layer; the input layer performs channel-number transformation on the input features. Downsampling reduces the image size (and the number of channels is increased after each downsampling); upsampling does the opposite. The scale transformations of upsampling and downsampling increase the number of channels to achieve a clear characterization of the features while also making use of shallow feature information. Feature map splicing concatenates the output of the first layer with that of the penultimate layer after upsampling, so that features are reused and the whole network presents a U-shaped structure. Fig. 5 is a schematic diagram of image transformation processing of a medical image in one example.
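The following is a minimal one-level sketch of the U-shaped structure just described (downsampling doubles the channels, upsampling restores the size, and the skip connection is spliced back in); the channel counts and depth are illustrative and do not reproduce the actual 3D U-Net configuration used in this application.

```python
# One-level 3D U-Net sketch: downsampling doubles the channels, upsampling
# halves them, and the skip connection is concatenated ("spliced") back in.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.inc = nn.Conv3d(1, ch, 3, padding=1)             # input layer
        self.down = nn.Conv3d(ch, ch * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(ch * 2, ch, 2, stride=2)
        self.out = nn.Conv3d(ch * 2, 2, 1)                    # output layer

    def forward(self, x):
        skip = torch.relu(self.inc(x))
        bottom = torch.relu(self.down(skip))            # smaller, more channels
        up = self.up(bottom)                            # back to input size
        return self.out(torch.cat([up, skip], dim=1))   # feature map splicing

y = TinyUNet3D()(torch.randn(1, 1, 16, 64, 64))
```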
According to the method provided by the embodiment of the invention, the features of the T2W, DWI and ADC sequence images are respectively extracted through three-dimensional convolutional neural network channels. Compared with extracting the feature maps of only one type of sequence image, the medical image segmentation model can extract multiple types of feature maps, which improves the accuracy of extracting the focus area of the target object from the feature maps.
In combination with the foregoing embodiments, in one embodiment, updating parameters of the medical image segmentation model based on differences between the label image and the first and second prediction sample segmentation images, respectively, includes:
determining a first loss value based on a difference between the label image and the first prediction sample segmentation image;
determining a second loss value based on a difference between the label image and the second prediction sample segmentation image;
based on the first loss value and the second loss value, parameters of the medical image segmentation model are updated.
Wherein the loss function used to calculate the first loss value and the second loss value may be the same or different. For example, the first loss value (or the second loss value) may be calculated by determining a similarity between the label image and the first prediction sample segmentation image (or the second prediction sample segmentation image).
It should be noted that, in the embodiment of the present invention, the first loss value and the second loss value refer to the difference values in the foregoing.
The similarity calculation method of the label image and the first prediction segmentation image comprises the following steps:
Dice = 2·∑(T·P) / ∑(T + P)    (1);
DiceLoss1 = 1 - Dice    (2);
in the formulas (1) and (2), T represents the label image, P represents the first prediction sample segmentation image, Dice represents the similarity between the label image and the first prediction sample segmentation image, and DiceLoss1 is the first loss value. The second loss value may be calculated by the same method.
In one example, a schematic flow chart of calculating the second loss value DiceLoss2 is shown in fig. 6.
Specifically, the first Loss value DiceLoss1 and the second Loss value DiceLoss2 jointly act on the training of the medical image segmentation model, where the weights of DiceLoss1 and DiceLoss2 for the medical image segmentation model to be trained are the same, i.e., the Loss value of the medical image segmentation model is Loss = DiceLoss1 + DiceLoss2. During training, the weights of DiceLoss1 and DiceLoss2 can be adjusted as needed. The Loss value is returned to the medical image segmentation model to update its parameters; after multiple rounds of training, the minimum Loss value is determined, and the medical image segmentation model corresponding to the minimum Loss value is selected as the trained medical image segmentation model. FIG. 7 is a schematic structural diagram of a medical image segmentation model in one example.
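In one example, formulas (1) and (2) and the joint parameter update can be coded as follows. The smoothing epsilon, the optimizer choice and learning rate, and the placeholder two-output model are assumptions not specified in the text; only the equal weighting Loss = DiceLoss1 + DiceLoss2 follows the description above.

```python
# Direct coding of formulas (1)-(2) plus the joint update; epsilon, Adam and
# the placeholder model are assumptions.
import torch
import torch.nn as nn

def dice_loss(label, pred, eps=1e-6):
    dice = 2 * (label * pred).sum() / ((label + pred).sum() + eps)  # (1)
    return 1 - dice                                                 # (2)

# Placeholder network: one output channel per prediction branch
# (first / second prediction sample segmentation image).
model = nn.Conv3d(1, 2, kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(image, label):
    out = torch.sigmoid(model(image))
    pred1, pred2 = out[:, 0], out[:, 1]
    loss = dice_loss(label, pred1) + dice_loss(label, pred2)  # equal weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

img = torch.randn(1, 1, 8, 32, 32)
lbl = (torch.rand(1, 8, 32, 32) > 0.5).float()
print(train_step(img, lbl))
```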
Fig. 8 is a schematic diagram of a predicted sample segmentation image obtained by the medical image segmentation model during training. In fig. 8, a is the first image sample among the training image samples, b is the label image, and c is the predicted sample segmentation image, where region A in b and c is the prostate organ region in which no lesion appears and region B is the focus region in the prostate organ.
According to the method provided by the embodiment of the invention, the first Loss value DiceLoss1 is calculated from the output first prediction sample segmentation image and the second Loss value DiceLoss2 is calculated from the output second prediction sample segmentation image; by adjusting the weights of DiceLoss1 and DiceLoss2 in the Loss value Loss, the Loss value of the model is determined, which can accelerate the convergence of the medical image segmentation model and improve its training efficiency.
In combination with the foregoing embodiments, in one embodiment, performing fusion processing on the first feature map, the second feature map, and the third feature map to obtain a predicted segmented image of the target object includes:
multiplying the first feature map, the second feature map and the third feature map with respective preset weights to obtain a first weight feature map corresponding to the first feature map, a second weight feature map corresponding to the second feature map and a third weight feature map corresponding to the third feature map;
and carrying out fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a prediction segmentation image of the target object.
The preset weights of the first feature map, the second feature map and the third feature map are set in advance, the preset weight corresponding to the first feature map is larger than the preset weight corresponding to the second feature map and the preset weight corresponding to the third feature map, and in addition, the preset weight corresponding to the second feature map is equal to the preset weight corresponding to the third feature map. For example, the preset weight corresponding to the first feature map is 0.5, and the preset weights corresponding to the second feature map and the third feature map are both 0.25.
According to the method provided by the embodiment of the invention, by fusing the first feature map, the second feature map and the third feature map, the feature maps of different types of sequence images can be fused into one feature map, which increases the diversity of feature types in the fused feature map and further strengthens the representation of the focus area of the target object. In addition, because the preset weight corresponding to the first feature map is larger than those corresponding to the second and third feature maps, the model's judgment of the focus area on the first feature map, which corresponds to the first image, is strengthened, improving the segmentation precision of the medical image segmentation model.
In combination with the above embodiments, in one embodiment, the prediction segmentation image comprises a first prediction segmentation image, and the medical image segmentation model comprises a 3D neural network sub-model; performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain the prediction segmentation image of the target object comprises:
carrying out global average pooling operation on the first feature map to obtain an important coefficient matrix of the first feature map;
performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a first fusion feature map; multiplying the important coefficient matrix with the first fusion feature map to obtain a reference fusion feature map;
and inputting the reference fusion feature map into a 3D neural network sub-model to obtain a first prediction segmentation image of the target object.
Wherein the first, second and third feature maps each comprise a plurality of feature maps.
Performing the global average pooling operation on the first feature map to obtain the important coefficient matrix of the first feature map is as follows: the first feature map F is input into a global average pooling network and processed to obtain a single value m corresponding to each feature map in the first feature map; 1×1 convolution learning is performed on m to obtain an important coefficient M for each feature map in the first feature map; and the important coefficient matrix of the first feature map is constructed from the important coefficients M of the feature maps.
Taking the first image as a T2W sequence image of a cross section of the prostate organ, the second image as a DWI sequence image of the cross section, and the third image as an ADC sequence image of the cross section as an example, the first weight feature map, the second weight feature map and the third weight feature map are weighted and fused. Because of its high specific gravity, the T2W sequence image serves as the main body when observing the prostate organ or its focus, while the ADC and DWI sequence images serve as auxiliaries; the feature maps corresponding to the T2W sequence image are therefore given a high weight, and the feature maps corresponding to the ADC and DWI sequence images are given low weights. The weighted first weight feature map, second weight feature map and third weight feature map are additively fused to obtain a first fusion feature map, which is then multiplied by the important coefficient matrix to obtain a reference fusion feature map; the reference fusion feature map is input into the 3D neural network sub-model to obtain the first prediction segmentation image.
FIG. 9 is a flow chart of the weighted fusion of the feature maps of each sequence image, where ω0, ω1 and ω2 are the preset weights of the feature maps corresponding to the T2W sequence image, the DWI sequence image and the ADC sequence image, respectively.
FIG. 10 is a schematic diagram showing the calculation of the weighted fusion matrix for the feature maps of each sequence image: the fused first fusion feature map is F_fused = ω0·F_T2W + ω1·F_DWI + ω2·F_ADC, where ω0, ω1 and ω2 are the respective preset weights of the feature maps corresponding to the T2W sequence image, the DWI sequence image and the ADC sequence image. Each element A in the feature maps of each sequence image is weighted by dot multiplication with the corresponding preset weight, and the elements at corresponding positions of the three weighted weight feature maps are added to obtain the fused T2W feature map matrix.
After the fused T2W feature map matrix is obtained, 1×1 convolution learning is performed to obtain the important coefficient matrix [[M1], [M2], …, [Mn]] of the first fusion feature map, which is multiplied with the T2W feature map matrix to obtain the reference fusion feature map. In fig. 10, m1, m2 and mn refer to the single values of the first, 2nd and nth feature maps among the feature maps corresponding to the T2W sequence image, and M1, M2 and Mn refer to the important coefficients of the first, 2nd and nth feature maps, respectively.
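A sketch of this importance-coefficient fusion is given below, assuming a squeeze-and-excitation-style layout in PyTorch: global average pooling yields one value per T2W feature map, a 1×1 convolution learns the important coefficients, and the resulting matrix scales the weighted fusion. The sigmoid on the coefficients and the module sizes are assumptions.

```python
# Importance-coefficient fusion sketch: GAP -> 1x1 conv -> channel-wise scale.
# The sigmoid and channel count are assumptions, not the patent's exact design.
import torch
import torch.nn as nn

class ImportanceFusion(nn.Module):
    def __init__(self, ch=32, w=(0.5, 0.25, 0.25)):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool3d(1)   # one single value m_i per map
        self.coef = nn.Conv3d(ch, ch, 1)     # 1x1 convolution learning M_i
        self.w = w

    def forward(self, f_t2w, f_dwi, f_adc):
        fused = self.w[0] * f_t2w + self.w[1] * f_dwi + self.w[2] * f_adc
        m = torch.sigmoid(self.coef(self.gap(f_t2w)))  # important coefficients
        return m * fused                               # reference fusion map

f = torch.randn(1, 32, 8, 32, 32)
ref = ImportanceFusion()(f, f, f)
```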
According to the method provided by the embodiment of the invention, the feature images of different types of sequence images can be fused to obtain the first fused feature image by fusion processing of the first feature image, the second feature image and the third feature image, and the reference fused feature image is obtained by multiplying the first fused feature image by the important coefficient matrix of the first feature image with dominant position, so that the features in the first feature image can be highlighted in the reference fused feature image, the representation of the focus area of the target object in the first feature image is enhanced, and the precision of acquiring the focus area of the target object is improved. In addition, the reference fusion feature map changes along with the change of the first feature map, so that the medical image segmentation model can be adaptively adjusted when the feature maps are subjected to weighted fusion, and the application range of the medical image segmentation model is expanded.
In combination with the above embodiments, in one embodiment, the prediction segmentation image comprises a second prediction segmentation image, and the medical image segmentation model further comprises a classification network sub-model; obtaining the prediction segmentation image of the target object comprises the following steps:
performing fusion processing on the second weight feature map and the third weight feature map to obtain a second fusion feature map;
according to the second fusion feature map, a first probability matrix and a second probability matrix are obtained through the classification network submodel, wherein the first probability matrix is used for describing the probability of the area where the focus of the target object is located, and the second probability matrix is used for describing the probability of the area where the target object is located;
and weighting the first weight feature map according to the first probability matrix and the second probability matrix to obtain a second prediction segmentation image of the target object.
Wherein the classification network sub-model may be a sigmoid activation function. Specifically, after the second weight feature map and the third weight feature map are added and fused to obtain the second fusion feature map, the second fusion feature map is processed through the sigmoid activation function to obtain a first probability matrix describing the area where the focus is located and a second probability matrix describing the area where the target object is located. The first probability matrix and the second probability matrix are each dot-multiplied with the first weight feature map to obtain the second prediction segmentation image.
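In one example, this classification branch can be sketched as follows, assuming PyTorch; the 1×1 convolution that projects the second fusion feature map to two probability channels (focus and organ) before the sigmoid is an assumption, as the text only specifies the sigmoid activation and the dot multiplications.

```python
# Classification-branch sketch: DWI + ADC weight feature maps are added,
# a sigmoid yields the two probability matrices, and each is dot-multiplied
# with the T2W weight feature map. The 1x1 projection is an assumption.
import torch
import torch.nn as nn

class ClassifyBranch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.proj = nn.Conv3d(ch, 2, 1)  # focus / organ probability channels

    def forward(self, f_t2w_w, f_dwi_w, f_adc_w):
        fused2 = f_dwi_w + f_adc_w                 # second fusion feature map
        probs = torch.sigmoid(self.proj(fused2))   # classification sub-model
        p_focus, p_organ = probs[:, :1], probs[:, 1:]
        # Weight the T2W weight feature map by both probability matrices.
        return p_focus * f_t2w_w, p_organ * f_t2w_w

f = torch.randn(1, 32, 8, 32, 32)
focus_feat, organ_feat = ClassifyBranch()(f, f, f)
```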
FIG. 11 is a schematic diagram of a probability matrix of a label image and any predictive segmented image in one example.
According to the method provided by the embodiment of the invention, a second fusion feature map can be obtained by fusing the second feature map and the third feature map, and the probability matrix of the area where the target object is located and the probability matrix of the focus derived from the second fusion feature map can enhance the characterization of the target object and its focus area in the first feature map, improving the accuracy of determining the area of the target object and its focus area from the first feature map.
In one embodiment, a medical image segmentation method, as shown in fig. 12, further comprises:
1201. acquiring T2W, DWI and ADC sequence images of a prostate organ;
1202. respectively inputting T2W, DWI and ADC sequence images of the prostate organ into different convolutional neural network channels in the trained medical image segmentation model;
1203. respectively extracting the features of the focus area in the prostate organ based on the trained medical image segmentation model;
1204. carrying out weighted fusion of the feature maps output by the convolutional neural network channels, inputting the fused feature map into the convolutional neural network to extract the focus features of the prostate organ, and finally outputting the focus area of the prostate organ on the T2W sequence image; a rough sketch of this pipeline follows.
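As a rough illustration of steps 1201 to 1204, the PyTorch sketch below wires three convolutional encoder channels together; the encoder definition, the layer sizes, the learnable fusion weights and the output head are illustrative assumptions, not the architecture claimed by the invention.

```python
import torch
import torch.nn as nn

class ThreeChannelSegmenter(nn.Module):
    """Illustrative three-channel model for steps 1201-1204.

    One encoder per MRI sequence (T2W, DWI, ADC); all layer sizes and
    the learnable fusion weights are assumptions.
    """
    def __init__(self, features: int = 16):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv3d(1, features, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
        self.t2w_branch = encoder()   # channel for T2W sequence images
        self.dwi_branch = encoder()   # channel for DWI sequence images
        self.adc_branch = encoder()   # channel for ADC sequence images
        # Weights for the weighted fusion of step 1204.
        self.fusion_weights = nn.Parameter(torch.ones(3))
        # Head mapping fused features to a focus mask on the T2W grid.
        self.head = nn.Sequential(
            nn.Conv3d(features, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, t2w, dwi, adc):
        f1 = self.t2w_branch(t2w)
        f2 = self.dwi_branch(dwi)
        f3 = self.adc_branch(adc)
        w = self.fusion_weights
        fused = w[0] * f1 + w[1] * f2 + w[2] * f3  # weighted fusion
        return self.head(fused)  # focus area on the T2W sequence image
```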
According to the method provided by this embodiment of the invention, identifying the focus area in the prostate organ through the trained medical image segmentation model can reduce the risk that a doctor misjudges the focus area in the prostate organ, thereby improving the cure rate of prostate surgery and reducing the harm to the patient caused by inaccurate localization of the focus.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times, and which need not be executed sequentially but may be executed in turn or alternately with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application also provides a medical image segmentation apparatus for implementing the medical image segmentation method described above. Since the solution implemented by the apparatus is similar to that of the method described above, for the specific limitations of the one or more medical image segmentation apparatus embodiments provided below, reference may be made to the limitations of the medical image segmentation method above; they are not repeated here.
In one embodiment, as shown in fig. 13, there is provided a medical image segmentation apparatus including: an image acquisition module 1301 and a feature extraction module 1302, wherein:
the image acquisition module 1301 is configured to acquire a medical image to be segmented of a target object, where the medical image to be segmented includes a first image, a second image, and a third image, and the first image, the second image, and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
the feature extraction module 1302 is configured to perform feature extraction on the first image, the second image and the third image through the medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and to perform fusion processing on the first feature map, the second feature map and the third feature map to obtain a prediction segmentation image of the target object.
In one embodiment, in the image acquisition module 1301, the first image comprises a sequence image obtained based on a weighted imaging mode in magnetic resonance examination, the second image comprises a sequence image obtained based on a diffusion weighted imaging mode in magnetic resonance examination, and the third image comprises a sequence image based on apparent diffusion coefficients in magnetic resonance examination.
In one embodiment, in the image acquisition module 1301, the first image, the second image and the third image are obtained by imaging the target object from the same viewing angle.
In one embodiment, the feature extraction module 1302 includes: the image transformation sub-module is used for carrying out image transformation processing on the medical image to be segmented to obtain a tensor-form medical image matrix, wherein the tensor-form medical image matrix contains image pixel values.
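A minimal sketch of the conversion performed by the image transformation sub-module might look as follows; the (D, H, W) input layout and the min-max normalization of the pixel values are assumptions.

```python
import numpy as np
import torch

def to_tensor(image_array: np.ndarray) -> torch.Tensor:
    """Convert one MRI sequence image, given as a (D, H, W) array of
    pixel values, into a tensor-form medical image matrix."""
    arr = image_array.astype(np.float32)
    # Min-max normalization of the pixel values (one plausible choice).
    arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)
    # Add batch and channel dimensions: (1, 1, D, H, W).
    return torch.from_numpy(arr)[None, None]
```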
In one embodiment, the feature extraction module 1302 further comprises:
the image acquisition sub-module is used for acquiring a training image sample of the target object and a corresponding label image, wherein the training image sample comprises a first image sample, a second image sample and a third image sample, the first image sample, the second image sample and the third image sample are obtained by imaging the target object in different nuclear magnetic imaging modes, and the label image comprises a focus sketching area of the target object and a sketching area of the target object;
The feature extraction submodule is used for extracting features of the first image sample, the second image sample and the third image sample through a medical image segmentation model to be trained to obtain a corresponding first sample feature map, a corresponding second sample feature map and a corresponding third sample feature map; carrying out fusion processing on the first sample feature map, the second sample feature map and the third sample feature map to obtain a predicted sample segmentation image, wherein the predicted sample segmentation image comprises a first predicted sample segmentation image and a second predicted sample segmentation image, and the predicted sample segmentation image comprises a predicted focus sketching area of a target object and a predicted sketching area of the target object;
and the parameter updating and extracting sub-module is used for updating the parameters of the medical image segmentation model based on the difference between the label image and the first prediction sample segmentation image and the second prediction sample segmentation image respectively to obtain a trained medical image segmentation model.
In one embodiment, the parameter updating and extracting sub-module includes:
a first loss determination unit configured to determine a first loss value based on a difference between the label image and the first prediction sample divided image;
a second loss determination unit configured to determine a second loss value based on a difference between the label image and the second prediction sample divided image;
And a parameter updating unit for updating parameters of the medical image segmentation model based on the first loss value and the second loss value.
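The two loss terms and the parameter update could be wired together as in the sketch below. The use of binary cross entropy for both losses and their equal weighting are assumptions, as is the premise that the model returns both prediction sample segmentation images as probabilities.

```python
import torch.nn.functional as F

def training_step(model, optimizer, samples, label_image):
    """One parameter update from the first and second loss values.

    `samples` is the (first, second, third) image sample triple; the
    model is assumed to return both prediction sample segmentation
    images with values in [0, 1].
    """
    first_pred, second_pred = model(*samples)
    # First loss value: difference between label image and first prediction.
    first_loss = F.binary_cross_entropy(first_pred, label_image)
    # Second loss value: difference between label image and second prediction.
    second_loss = F.binary_cross_entropy(second_pred, label_image)
    loss = first_loss + second_loss  # equal weighting is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```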
In one embodiment, the feature extraction module 1302 further comprises:
the multiplication sub-module is used for multiplying the first feature map, the second feature map and the third feature map by their respective preset weights to obtain a first weight feature map corresponding to the first feature map, a second weight feature map corresponding to the second feature map and a third weight feature map corresponding to the third feature map;
and the fusion processing sub-module is used for carrying out fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a prediction segmentation image of the target object.
In one embodiment, the fusion processing sub-module further comprises:
the first fusion processing unit is used for performing a global average pooling operation on the first feature map to obtain an importance coefficient matrix of the first feature map, performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a first fusion feature map, and multiplying the importance coefficient matrix by the first fusion feature map to obtain a reference fusion feature map;
and the input unit is used for inputting the reference fusion feature map into the 3D neural network sub-model to obtain a first prediction segmentation image of the target object.
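A compact sketch of the first fusion processing unit, assuming (batch, channels, D, H, W) tensors; the additive fusion of the three weight feature maps and the channel-wise broadcasting of the coefficients are plausible readings rather than mandated operations.

```python
import torch
import torch.nn.functional as F

def reference_fusion(first_w: torch.Tensor,
                     second_w: torch.Tensor,
                     third_w: torch.Tensor,
                     first_feature_map: torch.Tensor) -> torch.Tensor:
    # Global average pooling over the spatial dimensions gives one
    # importance coefficient per channel of the first feature map.
    importance = F.adaptive_avg_pool3d(first_feature_map, 1)  # (B, C, 1, 1, 1)

    # Fuse the three weight feature maps into the first fusion feature map.
    first_fusion = first_w + second_w + third_w

    # Multiplying by the importance coefficients highlights the features
    # of the dominant first feature map in the reference fusion feature map.
    return importance * first_fusion
```

The reference fusion feature map returned here would then be fed to the 3D neural network sub-model to produce the first prediction segmentation image.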
In one embodiment, the fusion processing sub-module further comprises:
the second fusion processing unit is used for performing fusion processing on the second weight feature map and the third weight feature map to obtain a second fusion feature map; obtaining a first probability matrix and a second probability matrix through the classification network sub-model according to the second fusion feature map, wherein the first probability matrix is used for describing the probability of the area where the focus of the target object is located, and the second probability matrix is used for describing the probability of the area where the target object is located; and weighting the first weight feature map according to the first probability matrix and the second probability matrix to obtain a second prediction segmentation image of the target object.
the respective modules in the above-described medical image segmentation apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 14. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements a medical image segmentation method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device. The display screen may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad provided on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 14 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and performing feature extraction on the first image, the second image and the third image through a medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and performing fusion processing on the first feature map, the second feature map and the third feature map to obtain a prediction segmentation image of the target object.
In one embodiment, the processor when executing the computer program further performs the steps of:
the first image comprises a sequence image obtained based on a weighted imaging mode in magnetic resonance examination, the second image comprises a sequence image obtained based on a diffusion weighted imaging mode in magnetic resonance examination, and the third image comprises a sequence image based on apparent diffusion coefficients in magnetic resonance examination.
In one embodiment, the processor when executing the computer program further performs the steps of:
the first image, the second image and the third image are obtained by imaging the target object from the same viewing angle.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a training image sample of a target object and a corresponding label image, wherein the training image sample comprises a first image sample, a second image sample and a third image sample, the first image sample, the second image sample and the third image sample are obtained by imaging the target object in different nuclear magnetic imaging modes, and the label image comprises a focus sketching area of the target object and a sketching area of the target object;
extracting features of the first image sample, the second image sample and the third image sample through a medical image segmentation model to be trained to obtain a corresponding first sample feature map, a corresponding second sample feature map and a corresponding third sample feature map; carrying out fusion processing on the first sample feature map, the second sample feature map and the third sample feature map to obtain a predicted sample segmentation image, wherein the predicted sample segmentation image comprises a first predicted sample segmentation image and a second predicted sample segmentation image, and the predicted sample segmentation image comprises a predicted focus sketching area of a target object and a predicted sketching area of the target object;
Based on the difference between the label image and the first prediction sample segmentation image and the second prediction sample segmentation image respectively, updating parameters of the medical image segmentation model to obtain a trained medical image segmentation model.
In one embodiment, the processor when executing the computer program further performs the steps of: determining a first loss value based on a difference between the label image and the first prediction sample segmentation image; determining a second loss value based on a difference between the label image and the second prediction sample segmentation image; based on the first loss value and the second loss value, parameters of the medical image segmentation model are updated.
In one embodiment, the processor when executing the computer program further performs the steps of:
multiplying the first feature map, the second feature map and the third feature map by their respective preset weights to obtain a first weight feature map corresponding to the first feature map, a second weight feature map corresponding to the second feature map and a third weight feature map corresponding to the third feature map;
and carrying out fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a prediction segmentation image of the target object.
In one embodiment, the processor when executing the computer program further performs the steps of:
performing a global average pooling operation on the first feature map to obtain an importance coefficient matrix of the first feature map;
performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a first fusion feature map; and multiplying the importance coefficient matrix by the first fusion feature map to obtain a reference fusion feature map;
and inputting the reference fusion feature map into a 3D neural network sub-model to obtain a first prediction segmentation image of the target object.
In one embodiment, the processor when executing the computer program further performs the steps of:
performing fusion processing on the second weight feature map and the third weight feature map to obtain a second fusion feature map;
according to the second fusion feature map, a first probability matrix and a second probability matrix are obtained through the classification network submodel, wherein the first probability matrix is used for describing the probability of the area where the focus of the target object is located, and the second probability matrix is used for describing the probability of the area where the target object is located;
and weighting the first weight feature map according to the first probability matrix and the second probability matrix to obtain a second prediction segmentation image of the target object.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
Acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and performing feature extraction on the first image, the second image and the third image through a medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and performing fusion processing on the first feature map, the second feature map and the third feature map to obtain a prediction segmentation image of the target object.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the first image comprises a sequence image obtained based on a weighted imaging mode in magnetic resonance examination, the second image comprises a sequence image obtained based on a diffusion weighted imaging mode in magnetic resonance examination, and the third image comprises a sequence image based on apparent diffusion coefficients in magnetic resonance examination.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the first image, the second image and the third image are obtained by imaging the target object from the same viewing angle.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a training image sample of a target object and a corresponding label image, wherein the training image sample comprises a first image sample, a second image sample and a third image sample, the first image sample, the second image sample and the third image sample are obtained by imaging the target object in different nuclear magnetic imaging modes, and the label image comprises a focus sketching area of the target object and a sketching area of the target object;
extracting features of the first image sample, the second image sample and the third image sample through a medical image segmentation model to be trained to obtain a corresponding first sample feature map, a corresponding second sample feature map and a corresponding third sample feature map; carrying out fusion processing on the first sample feature map, the second sample feature map and the third sample feature map to obtain a predicted sample segmentation image, wherein the predicted sample segmentation image comprises a first predicted sample segmentation image and a second predicted sample segmentation image, and the predicted sample segmentation image comprises a predicted focus sketching area of a target object and a predicted sketching area of the target object;
based on the difference between the label image and the first prediction sample segmentation image and the second prediction sample segmentation image respectively, updating parameters of the medical image segmentation model to obtain a trained medical image segmentation model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a first loss value based on a difference between the label image and the first prediction sample segmentation image; determining a second loss value based on a difference between the label image and the second prediction sample segmentation image; based on the first loss value and the second loss value, parameters of the medical image segmentation model are updated.
In one embodiment, the computer program when executed by the processor further performs the steps of:
multiplying the first feature map, the second feature map and the third feature map by their respective preset weights to obtain a first weight feature map corresponding to the first feature map, a second weight feature map corresponding to the second feature map and a third weight feature map corresponding to the third feature map;
and carrying out fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a prediction segmentation image of the target object.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing a global average pooling operation on the first feature map to obtain an importance coefficient matrix of the first feature map; performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a first fusion feature map; multiplying the importance coefficient matrix by the first fusion feature map to obtain a reference fusion feature map; and inputting the reference fusion feature map into a 3D neural network sub-model to obtain a first prediction segmentation image of the target object.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing fusion processing on the second weight feature map and the third weight feature map to obtain a second fusion feature map;
according to the second fusion feature map, a first probability matrix and a second probability matrix are obtained through the classification network submodel, wherein the first probability matrix is used for describing the probability of the area where the focus of the target object is located, and the second probability matrix is used for describing the probability of the area where the target object is located;
and weighting the first weight feature map according to the first probability matrix and the second probability matrix to obtain a second prediction segmentation image of the target object.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored in a non-transitory computer readable storage medium; when executed, the program may include the flows of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above examples merely express several embodiments of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of medical image segmentation, the method comprising:
acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and carrying out fusion processing on the first feature map, the second feature map and the third feature map to obtain a prediction segmentation image of the target object.
2. The method of claim 1, wherein the first image comprises a sequence image based on weighted imaging in a magnetic resonance examination, the second image comprises a sequence image based on diffusion weighted imaging in a magnetic resonance examination, and the third image comprises a sequence image based on apparent diffusion coefficients in a magnetic resonance examination.
3. The method of claim 1, wherein the first image, the second image, and the third image are captured from the same viewing angle of the target object.
4. The method of claim 1, wherein the training process of the medical image segmentation model comprises:
acquiring a training image sample of a target object and a corresponding label image, wherein the training image sample comprises a first image sample, a second image sample and a third image sample, the first image sample, the second image sample and the third image sample are obtained by imaging the target object in different nuclear magnetic imaging modes, and the label image comprises a focus sketching area of the target object and a sketching area of the target object;
Extracting features of the first image sample, the second image sample and the third image sample through a medical image segmentation model to be trained to obtain a corresponding first sample feature map, a corresponding second sample feature map and a corresponding third sample feature map; performing fusion processing on the first sample feature map, the second sample feature map and the third sample feature map to obtain a predicted sample segmentation image, wherein the predicted sample segmentation image comprises a first predicted sample segmentation image and a second predicted sample segmentation image, and the predicted sample segmentation image comprises a predicted focus sketching area of the target object and a predicted sketching area of the target object;
and updating parameters of the medical image segmentation model based on differences between the label image and the first prediction sample segmentation image and the second prediction sample segmentation image respectively to obtain a trained medical image segmentation model.
5. The method of claim 4, wherein updating parameters of the medical image segmentation model based on differences between the label image and the first and second prediction sample segmentation images, respectively, comprises:
Determining a first loss value based on a difference between the label image and a first prediction sample segmentation image;
determining a second loss value based on a difference between the label image and a second prediction sample segmentation image;
based on the first loss value and the second loss value, parameters of the medical image segmentation model are updated.
6. The method according to claim 1, wherein the fusing the first feature map, the second feature map, and the third feature map to obtain the predicted segmented image of the target object includes:
multiplying the first feature map, the second feature map and the third feature map by their respective preset weights to obtain a first weight feature map corresponding to the first feature map, a second weight feature map corresponding to the second feature map and a third weight feature map corresponding to the third feature map;
and carrying out fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a prediction segmentation image of the target object.
7. The method of claim 6, wherein the predictive segmented image comprises a first predictive segmented image; the medical image segmentation model comprises a 3D neural network sub-model; the fusing processing is performed on the first weight feature map, the second weight feature map and the third weight feature map to obtain a predicted segmented image of the target object, including:
performing a global average pooling operation on the first feature map to obtain an importance coefficient matrix of the first feature map;
performing fusion processing on the first weight feature map, the second weight feature map and the third weight feature map to obtain a first fusion feature map; and multiplying the importance coefficient matrix by the first fusion feature map to obtain a reference fusion feature map;
and inputting the reference fusion feature map into the 3D neural network sub-model to obtain a first prediction segmentation image of the target object.
8. The method of claim 6, wherein the predictive segmented image comprises a second predictive segmented image; the medical image segmentation model further comprises a classification network sub-model; the fusing processing is performed on the first weight feature map, the second weight feature map and the third weight feature map to obtain a predicted segmented image of the target object, including:
performing fusion processing on the second weight feature map and the third weight feature map to obtain a second fusion feature map;
according to the second fusion feature map, a first probability matrix and a second probability matrix are obtained through the classification network sub-model, wherein the first probability matrix is used for describing the probability of the area where the focus of the target object is located, and the second probability matrix is used for describing the probability of the area where the target object is located;
And weighting the first weight feature map according to the first probability matrix and the second probability matrix to obtain a second prediction segmentation image of the target object.
9. A medical image segmentation apparatus, the apparatus comprising:
the image acquisition module is used for acquiring a medical image to be segmented of a target object, wherein the medical image to be segmented comprises a first image, a second image and a third image, and the first image, the second image and the third image are obtained by imaging the target object in different nuclear magnetic imaging modes;
and the feature extraction module is used for extracting features of the first image, the second image and the third image through a medical image segmentation model to obtain a corresponding first feature map, second feature map and third feature map, and carrying out fusion processing on the first feature map, the second feature map and the third feature map to obtain a prediction segmentation image of the target object.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
CN202310086039.9A 2023-01-18 2023-01-18 Medical image segmentation method, apparatus and computer readable storage medium Pending CN116128895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310086039.9A CN116128895A (en) 2023-01-18 2023-01-18 Medical image segmentation method, apparatus and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310086039.9A CN116128895A (en) 2023-01-18 2023-01-18 Medical image segmentation method, apparatus and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116128895A true CN116128895A (en) 2023-05-16

Family

ID=86302551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310086039.9A Pending CN116128895A (en) 2023-01-18 2023-01-18 Medical image segmentation method, apparatus and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116128895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993762A (en) * 2023-09-26 2023-11-03 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium
CN116993762B (en) * 2023-09-26 2024-01-19 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20210225027A1 (en) Image region localization method, image region localization apparatus, and medical image processing device
CN111161275B (en) Method and device for segmenting target object in medical image and electronic equipment
CN111640120B (en) Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
US10853409B2 (en) Systems and methods for image search
CN109035261B (en) Medical image processing method and device, electronic device and storage medium
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN112348818B (en) Image segmentation method, device, equipment and storage medium
CN115965750B (en) Vascular reconstruction method, vascular reconstruction device, vascular reconstruction computer device, and vascular reconstruction program
CN115861248A (en) Medical image segmentation method, medical model training method, medical image segmentation device and storage medium
CN107516314B (en) Medical image hyper-voxel segmentation method and device
CN113487536A (en) Image segmentation method, computer device and storage medium
CN116128895A (en) Medical image segmentation method, apparatus and computer readable storage medium
CN113313728B (en) Intracranial artery segmentation method and system
CN114693671A (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
Qin et al. Joint dense residual and recurrent attention network for DCE-MRI breast tumor segmentation
CN113177953B (en) Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium
Zhang et al. Multiple morphological constraints-based complex gland segmentation in colorectal cancer pathology image analysis
CN115546270A (en) Image registration method, model training method and equipment for multi-scale feature fusion
CN114299010A (en) Method and device for segmenting brain tumor image, computer equipment and storage medium
CN110570417B (en) Pulmonary nodule classification device and image processing equipment
CN114972382A (en) Brain tumor segmentation algorithm based on lightweight UNet + + network
CN115546089A (en) Medical image segmentation method, pathological image processing method, device and equipment
CN113379770A (en) Nasopharyngeal carcinoma MR image segmentation network construction method, image segmentation method and device
US20230237647A1 (en) Ai driven longitudinal liver focal lesion analysis
CN117115187B (en) Carotid artery wall segmentation method, carotid artery wall segmentation device, carotid artery wall segmentation computer device, and carotid artery wall segmentation storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination