CN116958551A - Image segmentation method, electronic device and storage medium - Google Patents


Info

Publication number
CN116958551A
Authority
CN
China
Prior art keywords
image
aneurysm
segmentation
blood vessel
map
Prior art date
Legal status
Pending
Application number
CN202310919972.XA
Other languages
Chinese (zh)
Inventor
方刚
林付梁
秦岚
杨光明
印胤
Current Assignee
Union Strong Beijing Technology Co ltd
Original Assignee
Union Strong Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Union Strong Beijing Technology Co ltd filed Critical Union Strong Beijing Technology Co ltd
Priority to CN202310919972.XA priority Critical patent/CN116958551A/en
Publication of CN116958551A publication Critical patent/CN116958551A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image segmentation method, an electronic device, and a storage medium. The image segmentation method comprises: acquiring a contrast image to be segmented; inputting the contrast image to be segmented into an image segmentation model to obtain an aneurysm probability map and a vessel segmentation probability map output by the image segmentation model, where the image segmentation model is trained on a contrast image sample set containing aneurysm labels and vessel segmentation labels; and determining an aneurysm class segmentation map from an aneurysm probability threshold and the aneurysm probability map. With the technical solution of the application, the aneurysm image and the vessel segmentation image can be segmented accurately at the same time, and the vessel segment where the aneurysm is located can be localized rapidly and accurately.

Description

Image segmentation method, electronic device and storage medium
Technical Field
The present application relates generally to the field of image processing technology. More particularly, the present application relates to an image segmentation method, an electronic device, and a storage medium.
Background
Currently, magnetic resonance imaging is a routine and essential scanning modality in screening programs for conditions such as aneurysms. Aneurysms are very difficult to recognize in scanned images because they vary widely in size, occur at irregular positions, and take diverse morphologies. In addition, in large-hospital and group physical-examination scenarios, medical staff must review a large number of images; the workload is heavy and poses a great challenge.
Existing approaches either identify aneurysms directly in the 3D scan image, or convert the 3D scan into a 2D maximum intensity projection (MIP) image and identify them there. The former suffers from a large search range, slow processing, and low recognition accuracy. In the latter, the morphology of certain aneurysms, such as small aneurysms, may be obscured by occlusion. Moreover, neither technique simultaneously solves the problems of segmenting the aneurysm image, segmenting the vessel segmentation image, and localizing the vessel segment where the aneurysm lies.
In view of the foregoing, it is desirable to provide an image segmentation method that accurately segments the aneurysm image and the vessel segmentation image at the same time, and rapidly and accurately localizes the vessel segment where the aneurysm is located.
Disclosure of Invention
To solve at least one or more of the technical problems mentioned above, the present application proposes, in various aspects, an image segmentation method, an electronic device, and a storage medium. The image segmentation method can accurately segment the aneurysm image and the vessel segmentation image at the same time, and can rapidly and accurately localize the vessel segment where the aneurysm is located.
In a first aspect, the present application provides an image segmentation method comprising: acquiring a contrast image to be segmented; inputting the contrast image to be segmented into an image segmentation model to obtain an aneurysm probability map and a vessel segmentation probability map output by the image segmentation model, where the image segmentation model is trained on a contrast image sample set containing aneurysm labels and vessel segmentation labels; and determining an aneurysm class segmentation map from an aneurysm probability threshold and the aneurysm probability map.
In some embodiments, training the image segmentation model on the contrast image sample set includes: acquiring the contrast image sample set; performing data annotation on each contrast sample image in the sample set to obtain annotated sample images, each containing an aneurysm label and a vessel segmentation label; preprocessing each annotated sample image to obtain preprocessed sample images; applying data augmentation to each preprocessed sample image to obtain a target image sample set; and inputting the target image sample set into an initial segmentation model for training to obtain the image segmentation model.
In some embodiments, performing data annotation on each contrast sample image in the sample set includes: extracting vessel segmentation information from each contrast sample image with a threshold extraction algorithm; reconstructing a three-dimensional vessel model from the vessel segmentation information corresponding to each contrast sample image; determining an aneurysm region of interest in the three-dimensional vessel model corresponding to each contrast sample image; determining aneurysm pixel coordinates from the three-dimensional vessel model and the aneurysm region of interest; and annotating the aneurysm pixel coordinates to obtain the aneurysm label.
In some embodiments, after extracting the vessel segmentation information from each contrast sample image with the threshold extraction algorithm, the method further comprises: performing data annotation according to the vessel segmentation information to obtain the vessel segmentation labels.
In some embodiments, the preprocessing includes gray-scale normalization and resampling, wherein preprocessing each annotated sample image comprises: computing gray-level statistics and resolution statistics over the annotated sample images; determining the gray-level variance and mean from the gray-level statistics; performing gray-scale normalization with the gray-level variance and mean; determining a target resolution from the resolution statistics; and resampling to the target resolution.
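The normalization and resampling steps above can be sketched as follows; this is a minimal NumPy illustration in which the function names and the nearest-neighbour resampling strategy are assumptions for exposition, not the patent's exact procedure:

```python
import numpy as np

def normalize(volume, mean, std):
    # Gray-scale normalization using the dataset-wide mean and standard
    # deviation derived from the gray-level statistics of all samples.
    return (volume - mean) / std

def resample_nearest(volume, target_shape):
    # Nearest-neighbour resampling to the target resolution
    # (illustrative; clinical pipelines often use spline interpolation).
    src = np.asarray(volume.shape)
    dst = np.asarray(target_shape)
    idx = [np.minimum((np.arange(d) * s / d).astype(int), s - 1)
           for s, d in zip(src, dst)]
    return volume[np.ix_(*idx)]

vol = np.arange(8.0).reshape(2, 2, 2)
norm = normalize(vol, vol.mean(), vol.std())
res = resample_nearest(vol, (4, 4, 4))
```

After normalization the sample has zero mean and unit variance, which keeps gray-level ranges comparable across scanners and modalities.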
In some embodiments, the initial segmentation model includes an encoder and a decoder, and inputting the target image sample set into the initial segmentation model for training comprises: inputting a target sample image from the target image sample set into the encoder and obtaining the aneurysm prediction map and vessel segmentation prediction map output by the decoder; determining a loss function value based on a target loss function, the aneurysm prediction map, and the vessel segmentation prediction map; and optimizing the model parameters of the initial segmentation model according to the loss function value, and determining based on the loss function value whether to output the image segmentation model.
In some embodiments, the decoder comprises N shared deconvolution layers, M aneurysm deconvolution layers, and M vessel segmentation deconvolution layers, and the encoder comprises M+N convolution layers. Each shared, aneurysm, and vessel segmentation deconvolution layer contains a deconvolution kernel and a ReLU activation layer; each convolution layer contains a convolution kernel and a ReLU activation layer. Inputting the target sample image into the encoder and obtaining the aneurysm prediction map and vessel segmentation prediction map output by the decoder comprises: inputting the target sample image into the encoder for downsampling to obtain first intermediate feature data; inputting the first intermediate feature data into the N shared deconvolution layers for upsampling to obtain second intermediate feature data; and feeding the second intermediate feature data separately into the M aneurysm deconvolution layers and the M vessel segmentation deconvolution layers to obtain the aneurysm prediction map and the vessel segmentation prediction map.
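A toy NumPy sketch of the shared-decoder idea above, with scalar weights standing in for deconvolution kernels and repeat-based upsampling standing in for transposed convolution (all names and shapes are illustrative assumptions, not the patent's architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(x):
    # Toy stand-in for a deconvolution (transposed-convolution) layer.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def forward(feat, w_shared, w_aneurysm, w_vessel):
    # Shared deconvolution path: upsamples the encoder features once.
    shared = relu(upsample2x(feat) * w_shared)
    # Two parallel heads on the shared features: one yields the aneurysm
    # probability map, the other the vessel segmentation probability map.
    p_aneurysm = sigmoid(upsample2x(shared) * w_aneurysm)
    p_vessel = sigmoid(upsample2x(shared) * w_vessel)
    return p_aneurysm, p_vessel

feat = np.random.default_rng(0).random((4, 4))
pa, pv = forward(feat, 0.5, 1.0, -1.0)
```

The design choice worth noting is that the two tasks share the early upsampling path, so vessel-segmentation features can inform aneurysm prediction before the heads diverge.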
In some embodiments, determining the loss function value based on the target loss function, the aneurysm prediction map, and the vessel segmentation prediction map comprises: determining a first loss value from the aneurysm prediction probability value and predicted aneurysm category of each pixel in the aneurysm prediction map, the target loss function, and the aneurysm label; determining a second loss value from the vessel segmentation prediction probability value and predicted vessel segmentation category of each pixel in the vessel segmentation prediction map, the target loss function, and the vessel segmentation label; and determining the loss function value from the first loss value and the second loss value.
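The two-part objective above can be illustrated with per-pixel binary cross-entropy as an assumed target loss function; the text does not fix a specific loss, so this is only a sketch:

```python
import numpy as np

def bce(pred, label, eps=1e-7):
    # Per-pixel binary cross-entropy between a probability map and its label.
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred))

def total_loss(pred_aneurysm, label_aneurysm, pred_vessel, label_vessel):
    # Combined objective: the first loss value comes from the aneurysm
    # prediction map, the second from the vessel segmentation prediction map.
    return bce(pred_aneurysm, label_aneurysm) + bce(pred_vessel, label_vessel)

label = np.array([[1.0, 0.0]])
pred = np.array([[0.9, 0.1]])
loss = total_loss(pred, label, pred, label)
```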
In some embodiments, determining whether to output the image segmentation model based on the loss function value comprises: if the number of training iterations reaches a preset number, and/or if the loss function value has not decreased for k consecutive evaluations, stopping the update of the model parameters of the initial segmentation model and outputting the image segmentation model.
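The stopping rule above (a preset iteration budget, and/or k consecutive evaluations without a decrease in the loss) can be sketched as:

```python
def should_stop(losses, max_iters, k):
    # Stop when the iteration budget is reached, or when the loss has not
    # decreased for k consecutive evaluations (simple early stopping).
    if len(losses) >= max_iters:
        return True
    if len(losses) > k:
        recent = losses[-(k + 1):]
        return all(recent[i + 1] >= recent[i] for i in range(k))
    return False

stopped = should_stop([1.0, 0.9, 0.91, 0.92, 0.93], max_iters=100, k=3)
running = should_stop([1.0, 0.9, 0.8], max_iters=100, k=3)
```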
In some embodiments, after determining the aneurysm class segmentation map from the aneurysm probability threshold and the aneurysm probability map, the method further comprises: determining a vessel segmentation class segmentation map from a vessel segmentation probability threshold and the vessel segmentation probability map.
In a second aspect, the present application provides an electronic device comprising:
a processor; and a memory having stored thereon program code for image segmentation, which when executed by the processor, causes the electronic device to implement the method as described above.
In a third aspect, the present application provides a non-transitory machine-readable storage medium having stored thereon program code for image segmentation, which when executed by a processor, causes the implementation of the method as described above.
The technical solution provided by the application may have the following beneficial effects:
With the image segmentation method, electronic device, and storage medium provided by the embodiments of the application, the contrast image to be segmented is acquired and input into the image segmentation model, which outputs an aneurysm probability map and a vessel segmentation probability map. Because the image segmentation model is trained on a contrast image sample set containing aneurysm labels and vessel segmentation labels, it can learn aneurysm features and vessel segmentation features from a large number of contrast sample images; once trained, it can segment different contrast images to be segmented and output an aneurysm probability map and a vessel segmentation probability map.
Further, the aneurysm class segmentation map is determined from the aneurysm probability threshold and the aneurysm probability map, realizing automatic screening of aneurysms in clinically generated contrast images. This lowers the difficulty of aneurysm detection, improves the diagnostic efficiency of medical staff, reduces their workload, and reduces the risk of misdiagnosis and missed diagnosis.
In general, the application can accurately segment the aneurysm image and the vessel segmentation image at the same time, rapidly and accurately localize the vessel segment where the aneurysm is located, and improve the accuracy of aneurysm detection.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the application are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 illustrates an exemplary flow chart of an image segmentation method according to some embodiments of the application;
FIG. 2 illustrates an exemplary flow chart of an image segmentation method according to further embodiments of the present application;
FIG. 3 illustrates an exemplary flow chart of an image segmentation method according to further embodiments of the application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Furthermore, the application has been set forth in numerous specific details in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the embodiments described herein. Moreover, this description should not be taken as limiting the scope of the embodiments described herein. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the terms "first", "second", and the like in the claims, specification, and drawings of the present disclosure are used to distinguish between different objects and not to describe a particular sequential order. The terms "comprises" and "comprising", when used in the specification and claims of the present application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification and claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present specification and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the claims, the term "if" may, depending on the context, be interpreted as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may, depending on the context, be interpreted as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Specific embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating an image segmentation method according to some embodiments of the present application. Referring to Fig. 1, the image segmentation method of the embodiments may include:
In step S101, a contrast image to be segmented is acquired. In the embodiments of the present application, the contrast image to be segmented is an original clinical contrast image from which an aneurysm image needs to be segmented. For example, the contrast image to be segmented may be a magnetic resonance angiography (MRA) image or a time-of-flight (TOF) magnetic resonance angiography image. In practice, the modality of the contrast image to be segmented is determined by the actual application scenario, and the application is not limited in this respect.
It is understood that an aneurysm is abnormal tissue on a blood vessel, and its occurrence is tied to the position of the vessel. The contrast image to be segmented therefore contains a vessel segmentation image, and the vessel segmentation image necessarily contains the region of interest (ROI) of the aneurysm image.
In step S102, the contrast image to be segmented is input into the image segmentation model to obtain the aneurysm probability map and vessel segmentation probability map output by the model. In the embodiments of the application, the image segmentation model is trained on a contrast image sample set. Each contrast sample image in the sample set may be sample image data of a different modality acquired by a scanning means such as MRA or TOF magnetic resonance angiography.
In the embodiments of the application, each contrast sample image may include a vessel segmentation image, and the vessel segmentation image may in turn include the ROI of the aneurysm image. By labeling each vessel segment and the aneurysm together and exploiting information such as the anatomical relationship between them, the embodiments can improve aneurysm detection accuracy. Accordingly, the contrast image sample set contains aneurysm labels and vessel segmentation labels, so the image segmentation model can learn aneurysm features and vessel segmentation features from a large number of contrast sample images and, once trained, segment different contrast images to be segmented and output an aneurysm probability map and a vessel segmentation probability map.
In step S103, the aneurysm class segmentation map is determined from the aneurysm probability threshold and the aneurysm probability map. In the embodiments of the application, the aneurysm probability map output by the image segmentation model can be compared against the aneurysm probability threshold, and the aneurysm class segmentation map obtained from the comparison result. The value of the threshold is determined by the actual application; as an example it may be 0.5, and the application is not limited in this respect. Positions whose probability exceeds 0.5 in the aneurysm probability map are then segmented out to obtain the aneurysm class segmentation map.
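The thresholding in step S103 can be sketched in a few lines (the 0.5 threshold is the exemplary value from the text):

```python
import numpy as np

def class_segmentation(prob_map, threshold=0.5):
    # Pixels whose predicted probability exceeds the threshold are kept
    # as the aneurysm class; everything else becomes background.
    return (prob_map > threshold).astype(np.uint8)

probs = np.array([[0.9, 0.3], [0.6, 0.1]])
mask = class_segmentation(probs)
```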
Further, morphological parameters of the aneurysm, including but not limited to the aneurysm volume, can be computed from the aneurysm class segmentation map. Positioning information of the vessel segment where the aneurysm is located can also be determined from the aneurysm class segmentation map. This provides medical staff with richer diagnostic reference information and assists them in treating the aneurysm efficiently.
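As a hypothetical illustration of one such morphological parameter, the aneurysm volume can be estimated from the class segmentation map and the voxel spacing (the spacing values below are invented for the example):

```python
import numpy as np

def aneurysm_volume_mm3(mask, spacing):
    # Volume = number of aneurysm voxels x volume of a single voxel (mm^3).
    voxel_volume = spacing[0] * spacing[1] * spacing[2]
    return int(mask.sum()) * voxel_volume

mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 1:3, 1:3] = 1          # 8 aneurysm voxels
vol = aneurysm_volume_mm3(mask, (0.5, 0.4, 0.4))
```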
In some application scenarios, medical staff may need to view not only the aneurysm class segmentation map but also the vessel segmentation class segmentation map for auxiliary judgment, for example to determine the vessel segment where the aneurysm is located. Thus, after the aneurysm class segmentation map is determined from the aneurysm probability threshold and the aneurysm probability map, a vessel segmentation class segmentation map may also be determined from a vessel segmentation probability threshold and the vessel segmentation probability map. Specifically, the vessel segmentation probability map output by the image segmentation model is compared against the vessel segmentation probability threshold, and the vessel segmentation class segmentation map is obtained from the comparison result. The value of this threshold is again determined by the actual application; as an example it may be 0.5, and positions whose probability exceeds 0.5 in the vessel segmentation probability map are segmented out to obtain the vessel segmentation class segmentation map.
As described above, the embodiments of the application acquire the contrast image to be segmented, input it into the image segmentation model trained on the labeled contrast image sample set, obtain the aneurysm probability map and vessel segmentation probability map, and determine the aneurysm class segmentation map from the aneurysm probability threshold. This realizes automatic screening of aneurysms in clinically generated contrast images, reduces the detection difficulty and the workload of medical staff, lowers the risk of misdiagnosis and missed diagnosis, and achieves simultaneous, accurate segmentation of the aneurysm image and the vessel segmentation image together with rapid, accurate localization of the vessel segment where the aneurysm is located.
In some embodiments, the training procedure of the image segmentation model can be further elaborated. The training procedure is described in detail below with reference to Fig. 2. Fig. 2 is a flowchart illustrating an image segmentation method according to other embodiments of the present application. Referring to Fig. 2, the method may include:
In step S201, the contrast image sample set is acquired. Because sample image data of different modalities acquired by scanning means such as MRA or TOF magnetic resonance angiography contain a large number of pixels, using them directly would incur excessive computation and low efficiency. The embodiments of the application therefore extract image patches from the sample image data. The patch positions are random, but the foreground and background must each be sampled with a certain probability, where the foreground corresponds to the aneurysm and the background to the vessel segmentation; this addresses the class imbalance of aneurysm pixels relative to vessel segmentation pixels. As an example, the patch resolution may be set to [0.5, 0.4, 0.4] and the patch size to [56, 224, 224]; in practice, the patch-extraction parameters, such as the resolution and size above, are determined by the actual application, and the application is not limited in this respect. The patches thus obtained form the contrast image sample set.
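The foreground/background-balanced patch extraction described above might look like the following sketch (the patch size, probabilities, and helper names are illustrative assumptions):

```python
import numpy as np

def sample_patch(volume, labels, patch_size, fg_prob, rng):
    # With probability fg_prob, center the patch on a random foreground
    # (aneurysm) voxel; otherwise pick a uniformly random position. This
    # counteracts the class imbalance between aneurysm and vessel pixels.
    shape = np.asarray(volume.shape)
    size = np.asarray(patch_size)
    if rng.random() < fg_prob and labels.any():
        fg = np.argwhere(labels > 0)
        center = fg[rng.integers(len(fg))]
        start = np.clip(center - size // 2, 0, shape - size)
    else:
        start = np.array([rng.integers(0, s - p + 1)
                          for s, p in zip(shape, size)])
    sl = tuple(slice(a, a + p) for a, p in zip(start, size))
    return volume[sl], labels[sl]

rng = np.random.default_rng(0)
vol = np.zeros((16, 16, 16))
lab = np.zeros((16, 16, 16), dtype=np.uint8)
lab[8, 8, 8] = 1                          # single foreground voxel
patch, patch_lab = sample_patch(vol, lab, (8, 8, 8), fg_prob=1.0, rng=rng)
```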
Correspondingly, before the contrast image to be segmented is input into the image segmentation model, it may likewise be divided into a number of image patches, which are input into the model for processing. After processing, all the outputs of the image segmentation model are computed and stitched together with a sliding-window technique, yielding the final full-image result.
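The sliding-window assembly can be sketched in 2D for brevity (averaging overlapping patch predictions; the real pipeline would operate on 3D volumes, and the function names are illustrative):

```python
import numpy as np

def sliding_window_predict(volume, patch, stride, predict):
    # Run the model on overlapping patches and average the overlapping
    # predictions to assemble the full-image probability map.
    out = np.zeros_like(volume, dtype=float)
    count = np.zeros_like(volume, dtype=float)
    for x in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            sl = (slice(x, x + patch), slice(y, y + patch))
            out[sl] += predict(volume[sl])
            count[sl] += 1
    return out / np.maximum(count, 1)

img = np.ones((8, 8))
full = sliding_window_predict(img, patch=4, stride=2,
                              predict=lambda p: p * 0.5)
```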
In step S202, data labeling processing is performed on each imaging sample image in the imaging sample set to obtain the labeled sample images. Specifically, blood vessel segmentation extraction can first be performed on each imaging sample image through a threshold extraction algorithm to obtain blood vessel segmentation information. As an example, a gray threshold may be preset, and if a pixel's gray value is greater than the gray threshold, the pixel position corresponding to that gray value is determined to belong to a blood vessel.
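The threshold rule can be sketched in a few lines; the percentile-based choice of threshold is an illustrative assumption (vessels are bright in TOF-MRA), since the patent only says the threshold is preset:

```python
import numpy as np

def vessel_mask(image, threshold):
    """Mark every pixel whose gray value exceeds the preset gray
    threshold as belonging to a blood vessel."""
    return image > threshold

def histogram_threshold(image, percentile=95.0):
    """One simple way to preset the gray threshold: take a high
    percentile of the intensity histogram."""
    return np.percentile(image, percentile)
```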
Then, data labeling processing can be performed according to the blood vessel segmentation information to obtain the blood vessel segmentation label.
Next, three-dimensional blood vessel reconstruction is performed on the blood vessel segmentation information corresponding to each imaging sample image to obtain a three-dimensional blood vessel model corresponding to each imaging sample image. As an example, this can be done with the Marching Cubes algorithm. Marching Cubes is a surface-rendering algorithm that can be used for three-dimensional reconstruction of medical images.
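As a sketch of the reconstruction step, the snippet below runs scikit-image's Marching Cubes implementation on a synthetic binary ball standing in for a vessel mask. scikit-image is an assumed implementation choice; the patent does not name a library:

```python
import numpy as np
from skimage import measure  # scikit-image's surface-extraction module

# A small binary volume (a solid ball) stands in for the vessel mask.
z, y, x = np.mgrid[:32, :32, :32]
ball = ((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 100).astype(float)

# Extract the triangle mesh of the iso-surface at level 0.5.
verts, faces, normals, values = measure.marching_cubes(ball, level=0.5)
```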
Furthermore, the aneurysm region of interest may be determined in the three-dimensional blood vessel model corresponding to each imaging sample image. As an example, the aneurysm region of interest may be determined according to pre-stored aneurysm morphology data so as to select the region, and the aneurysm pixel coordinates within the selected range may then be determined from the three-dimensional blood vessel model and the aneurysm region of interest.
Finally, data labeling processing is performed on the pixel points corresponding to the aneurysm pixel coordinates to obtain the aneurysm label, so that each labeled sample image contains both an aneurysm label and a blood vessel segmentation label.
It should be understood that the foregoing description of the data labeling process is merely exemplary. In practical applications there are various ways to implement it; for example, a region-growing algorithm may be adopted. A suitable manner needs to be selected according to the actual application, and the present application is not limited in this respect.
In step S203, each labeled sample image is preprocessed to obtain the preprocessed sample images. In the embodiment of the present application, the preprocessing may include, but is not limited to, gray-scale normalization and resampling, and preferably may also include vessel-enhancement filtering.
Specifically, gray statistics and resolution statistics can be performed on each labeling sample image to obtain gray statistics information and resolution statistics information.
Then, the gray variance and the gray mean can be determined according to the gray statistical information, and gray-scale normalization performed according to them. In the embodiment of the application, the gray-scale normalization can be implemented as mean-variance normalization. Mean-variance normalization, also called standardization, transforms the data into a distribution with mean 0 and variance 1, i.e., it ensures that the resulting set of normalized values has mean 0 and variance 1. As an example, the gray-scale normalization can be performed by the following formula (1):

G_i = (x_i − μ) / S (1)

wherein G_i is the normalized value corresponding to the i-th gray value in the gray statistical information, x_i is the i-th gray value in the gray statistical information, μ is the gray mean, and S is the gray-scale standard deviation (the square root of the gray variance determined above).
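Formula (1) corresponds to the familiar z-score normalization, sketched below; the small epsilon is an added guard against constant images, not part of the formula:

```python
import numpy as np

def gray_normalize(image):
    """Mean-variance normalization per formula (1): subtract the gray
    mean and divide by the gray standard deviation."""
    mu = image.mean()
    sigma = image.std()
    return (image - mu) / (sigma + 1e-8)  # epsilon guards a flat image
```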
Then, a target resolution may be determined according to the resolution statistics, and resampling may be performed at that target resolution. In an embodiment of the present application, the target resolution may, for example, be 0.600×0.344×0.344, and the algorithm used for resampling may include, but is not limited to, nearest-neighbor, bilinear interpolation, bicubic convolution interpolation, multisampling, and the like. It will be appreciated that in practical applications the target resolution and the resampling algorithm are chosen according to the actual application, and the application is not limited in this respect.
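The resampling step can be sketched with SciPy's `ndimage.zoom`, an assumed implementation choice (`order=1` gives trilinear interpolation, `order=0` nearest-neighbour); the target spacing default is the example value from the text:

```python
import numpy as np
from scipy import ndimage

def resample(volume, spacing, target_spacing=(0.600, 0.344, 0.344), order=1):
    """Resample a volume from its native voxel spacing to the target
    spacing, scaling each axis by spacing / target_spacing."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return ndimage.zoom(volume, factors, order=order)
```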
In step S204, data enhancement processing is performed on each preprocessed sample image to obtain a target image sample set. In the embodiment of the application, each preprocessed sample image can be subjected to rotation, scaling, flipping, blurring, gamma transformation and the like to achieve data enhancement. The images obtained after the data enhancement processing constitute the target image sample set.
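A dependency-free sketch of part of this augmentation step; the 50% application probabilities and the gamma range are illustrative assumptions, and scaling and blurring are omitted to keep the example short:

```python
import numpy as np

def augment(image, rng=None):
    """Randomly apply flips, a 90-degree rotation, and a gamma
    transformation to one image."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                       # random flip
        image = np.flip(image, axis=rng.integers(image.ndim)).copy()
    if rng.random() < 0.5:                       # 90-degree rotation
        image = np.rot90(image, axes=(-2, -1)).copy()
    if rng.random() < 0.5:                       # gamma transformation
        gamma = rng.uniform(0.7, 1.5)            # assumed range
        lo, hi = image.min(), image.max()
        norm = (image - lo) / (hi - lo + 1e-8)
        image = norm ** gamma * (hi - lo) + lo
    return image
```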
In step S205, the target image sample set is input into the initial segmentation model for training, and an image segmentation model is obtained. In the embodiment of the application, each target sample image in the target image sample set not only contains an aneurysm label and a blood vessel segmentation label, but has also undergone preprocessing and data enhancement, so the initial segmentation model can learn aneurysm features and blood vessel segmentation features from a large number of target sample images. The resulting image segmentation model is thus able to segment different contrast images to be segmented and to output an aneurysm probability map and a blood vessel segmentation probability map.
In some embodiments, the initial segmentation model contains an encoder and a decoder, and the process of inputting the target image sample set into the initial segmentation model for training can be further refined. This process will be described in detail below in connection with fig. 3. Fig. 3 is a flowchart illustrating an exemplary image segmentation method according to still other embodiments of the present application; referring to fig. 3, the image segmentation method may include:
In step S301, a target sample image in the target image sample set is input to the encoder, and an aneurysm prediction map and a blood vessel segmentation prediction map output by the decoder are obtained. In an embodiment of the application, the decoder comprises N shared deconvolution layers, M aneurysm deconvolution layers and M blood vessel segmentation deconvolution layers, and the encoder comprises M+N convolution layers. The shared deconvolution layers, the aneurysm deconvolution layers and the blood vessel segmentation deconvolution layers each comprise a deconvolution kernel and a ReLU activation layer; each convolution layer comprises a convolution kernel and a ReLU activation layer. The ReLU activation layer refers to a ReLU activation function layer; ReLU, the linear rectification function or rectified linear unit, is a commonly used activation function in artificial neural networks. Activation functions are introduced to add nonlinearity to the neural network model; without them, stacked convolution layers are equivalent to a single matrix multiplication.
First, a target sample image in the target image sample set may be input into the encoder for downsampling to obtain first intermediate feature data. In an embodiment of the present application, the convolution kernel of each convolution layer may have a kernel size of 3×3×3. It will be appreciated that in practical applications the convolution-kernel parameters need to be set according to the actual application, and the application is not limited in this respect. The tensor size of the input and output of each convolution layer can be expressed as [N, C, H, W], where N is the number of training samples per batch (batch size), C is the number of channels of the network, H is the height of the feature map, and W is the width of the feature map. The batch size can be set according to the available video memory; generally, the larger it is set, the better the convergence. The input channel number of the first convolution layer can be 1 and its output channel number 96, and the input and output channel numbers of each subsequent convolution layer can both be 96. It will be appreciated that in practical applications the channel numbers may be adjusted according to the actual application, and the application is not limited in this respect. Illustratively, assuming the input target sample image is a contrast image of size 256×256, with the batch size set to 4, the channel number set to 96 and 5 convolution layers, the input tensor of the first encoder layer is 4×1×256×256, and the output feature maps of the successive convolution layers are, in order, 4×96×254×254, 4×96×252×252, 4×96×250×250, 4×96×248×248 and 4×96×246×246.
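The feature-map sizes in that example follow from the standard valid-convolution size formula, which can be checked directly:

```python
def conv_out_size(size, kernel=3, stride=1, padding=0):
    """Spatial output size of one convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

sizes = [256]
for _ in range(5):         # five 3x3 encoder convolution layers, no padding
    sizes.append(conv_out_size(sizes[-1]))

print(sizes)  # [256, 254, 252, 250, 248, 246]
```

Each valid 3×3 convolution shaves one pixel from every border, giving exactly the 254 → 246 sequence of the worked example (with batch 4 and 96 channels prepended).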
Then, the first intermediate feature data may be input into the N shared deconvolution layers for upsampling to obtain second intermediate feature data. Further, the second intermediate feature data may be input into the M aneurysm deconvolution layers and the M blood vessel segmentation deconvolution layers, respectively, to obtain the aneurysm prediction map and the blood vessel segmentation prediction map. In effect, this splits off one deconvolution branch for the blood vessel segmentation training and another for the aneurysm segmentation training, so that the two prediction maps can be output by these two branches respectively. In the embodiment of the application, the shared, aneurysm and blood vessel segmentation deconvolution layers can have the same structure. Since deconvolution is the inverse of the convolution operation, the parameters of each deconvolution layer correspond to those of the convolution layers; accordingly, the channel number of the last deconvolution layer can be 1 for outputting an aneurysm prediction map or a blood vessel segmentation prediction map during training, and its output size is consistent with that of the input target sample image.
Illustratively, assuming the input target sample image is a contrast image of size 256×256, with the batch size set to 4, the channel number set to 96 and 5 deconvolution layers (the shared deconvolution layers plus either the aneurysm deconvolution layers or the blood vessel segmentation deconvolution layers), the tensor size of the intermediate feature map input to the first deconvolution layer of the decoder is 4×96×246×246, and the output feature maps of the successive deconvolution layers are, in order, 4×96×248×248, 4×96×250×250, 4×96×252×252, 4×96×254×254 and 4×1×256×256.
As an example, N may take a value of 2, M may take a value of 3, and in practical application, the values of M and N need to be determined according to practical application conditions, which is not limited in this aspect of the application.
In step S302, a loss function value is determined based on the target loss function, the aneurysm prediction map and the blood vessel segmentation prediction map. Specifically, a first loss value may be determined according to the aneurysm prediction probability value of each pixel point in the aneurysm prediction map, the aneurysm prediction category, the target loss function and the aneurysm label; a second loss value may then be determined according to the blood vessel segmentation prediction probability value of each pixel point in the blood vessel segmentation prediction map, the blood vessel prediction segmentation category, the target loss function and the blood vessel segmentation label. As an example, the target loss function may be the sum of a CE (cross-entropy) loss function and a Dice loss function, which can be expressed by the following formula (2):

Loss = −(1/B)·Σ_{i=1..B} g_i·log(p_i) + (1 − 2·Σ_{i=1..B} p_i·g_i / (Σ_{i=1..B} p_i + Σ_{i=1..B} g_i)) (2)

wherein p_i is the aneurysm prediction probability value of the i-th pixel point in the aneurysm prediction map, or the blood vessel segmentation prediction probability value of the i-th pixel point in the blood vessel segmentation prediction map; C denotes the aneurysm prediction category or the blood vessel prediction segmentation category to which these probabilities refer; g_i is the corresponding aneurysm or blood vessel segmentation label of each pixel; and B is the total number of pixels.
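One plausible reading of formula (2) for a binary prediction map, sketched below; the full two-sided binary cross-entropy and the epsilon smoothing of the Dice term are implementation assumptions:

```python
import numpy as np

def ce_dice_loss(p, g, eps=1e-8):
    """Cross-entropy plus soft-Dice loss over the B pixels of one
    prediction map; p are predicted probabilities, g the 0/1 labels."""
    p = np.clip(p, eps, 1 - eps)                 # avoid log(0)
    ce = -np.mean(g * np.log(p) + (1 - g) * np.log(1 - p))
    dice = 1 - (2 * np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    return ce + dice
```

A perfect prediction drives both terms toward 0, while a completely wrong one is dominated by the cross-entropy term.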
Finally, a loss function value can be determined from the first loss value and the second loss value. In the embodiment of the present application, the first loss value and the second loss value may be weighted and summed, which may be represented by the following formula (3):
Total_Loss = a1 × Segs_Loss + a2 × AN_Loss (3)
wherein Total_Loss is the loss function value, a1 is the first weight, Segs_Loss is the second loss value, a2 is the second weight, and AN_Loss is the first loss value. For example, the first weight may be set to 0.6 and the second weight to 0.4; in practical applications the values of the first weight and the second weight need to be determined according to the actual application situation, and the present application is not limited in this respect.
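Formula (3) is then a plain weighted sum, with the example weights 0.6 and 0.4:

```python
def total_loss(an_loss, segs_loss, a1=0.6, a2=0.4):
    """Formula (3): a1 weights the vessel-segmentation loss (Segs_Loss)
    and a2 the aneurysm loss (AN_Loss)."""
    return a1 * segs_loss + a2 * an_loss
```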
In step S303, the model parameters of the initial segmentation model are optimized according to the loss function value, and whether to output the image segmentation model is determined based on the loss function value. Specifically, if the number of training iterations reaches a preset number, and/or if the loss function value has not decreased for k consecutive evaluations, updating of the model parameters of the initial segmentation model may be stopped and the image segmentation model output. Here k is a positive integer; preferably, k may take the value 5. The value of k needs to be set according to the actual application situation, and the present application is not limited in this respect.
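The stopping rule can be sketched as follows, with k = 5 as suggested; the exact loss bookkeeping (comparing the last k+1 recorded values) is an assumption:

```python
def should_stop(loss_history, iteration, max_iterations, k=5):
    """Stop training when the iteration budget is exhausted and/or the
    loss has not decreased over the last k recorded values."""
    if iteration >= max_iterations:
        return True
    if len(loss_history) > k:
        recent = loss_history[-(k + 1):]
        # "not reduced" for k consecutive steps: no successive decrease
        return all(later >= earlier for earlier, later in zip(recent, recent[1:]))
    return False
```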
Corresponding to the foregoing method embodiments, the application also provides an electronic device for executing the image segmentation method, along with corresponding embodiments.
Fig. 4 shows a block diagram of a hardware configuration of an electronic device 400 that can implement the image segmentation method of an embodiment of the present application. As shown in fig. 4, the electronic device 400 may include a processor 410 and a memory 420. In the electronic device 400 of fig. 4, only the constituent elements related to the present embodiment are shown. It will therefore be apparent to those of ordinary skill in the art that the electronic device 400 may also include common constituent elements other than those shown in fig. 4, for example a fixed-point arithmetic unit.
The electronic device 400 may correspond to a computing device having various processing functions, such as functions for generating a neural network, training or learning a neural network, quantizing a floating-point neural network into a fixed-point neural network, or retraining a neural network. For example, the electronic device 400 may be implemented as various types of devices, such as a personal computer (PC), a server device, a mobile device, and so forth.
The processor 410 controls all functions of the electronic device 400. For example, the processor 410 controls all functions of the electronic device 400 by executing program code stored in the memory 420 on the electronic device 400. The processor 410 may be implemented by a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Processor (AP), an artificial intelligence processor chip (IPU), etc. provided in the electronic device 400. However, the present application is not limited thereto.
In some embodiments, processor 410 may include an input/output (I/O) unit 411 and a computing unit 412. The I/O unit 411 may be used to receive various data, such as a contrast image to be segmented. Illustratively, the calculating unit 412 may be configured to input the contrast image to be segmented received via the I/O unit 411 into an image segmentation model, resulting in an aneurysm probability map and a vessel segmentation probability map output by the image segmentation model; and determining an aneurysm class segmentation map according to the aneurysm probability threshold value and the aneurysm probability map. This aneurysm class segmentation map may for example be output by the I/O unit 411. The output data may be provided to memory 420 for reading by other devices (not shown) or may be provided directly to other devices for use.
The memory 420 is hardware for storing various data processed in the electronic device 400. For example, the memory 420 may store processed data and data to be processed in the electronic device 400. The memory 420 may store data sets involved in the image segmentation method that the processor 410 has processed or is to process, e.g., contrast images to be segmented. Further, the memory 420 may store applications, drivers, etc. to be driven by the electronic device 400; for example, the memory 420 may store various programs related to the image segmentation method to be performed by the processor 410. The memory 420 may be a DRAM, but the present application is not limited thereto. The memory 420 may include at least one of volatile memory or nonvolatile memory. The nonvolatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, phase-change RAM (PRAM), magnetic RAM (MRAM), resistive RAM (RRAM), ferroelectric RAM (FRAM), and the like. Volatile memory can include dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), PRAM, MRAM, RRAM, ferroelectric RAM (FeRAM), and the like. In an embodiment, the memory 420 may include at least one of a hard disk drive (HDD), a solid-state drive (SSD), a CompactFlash (CF) card, a Secure Digital (SD) card, a Micro-SD card, a Mini-SD card, an extreme digital (xD) card, a cache, or a memory stick.
In summary, specific functions implemented by the memory 420 and the processor 410 of the electronic device 400 provided in the embodiments of the present disclosure may be explained in comparison with the foregoing embodiments in the present disclosure, and may achieve the technical effects of the foregoing embodiments, which will not be repeated herein.
In this embodiment, the processor 410 may be implemented in any suitable manner. For example, the processor 410 may take the form of, for example, a microprocessor or processor, and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, and an embedded microcontroller, among others.
It should also be appreciated that any of the modules, units, components, servers, computers, terminals, or devices illustrated herein that execute instructions may include or otherwise access a computer-readable medium, such as a storage medium, computer storage medium, or data storage device (removable and/or non-removable) such as a magnetic disk, optical disk, or magnetic tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
While various embodiments of the present application have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the application. It should be understood that various alternatives to the embodiments of the application described herein may be employed in practicing the application. The appended claims are intended to define the scope of the application and are therefore to cover all equivalents or alternatives falling within the scope of these claims.

Claims (12)

1. An image segmentation method, comprising:
acquiring a contrast image to be segmented;
inputting the contrast image to be segmented into an image segmentation model to obtain an aneurysm probability map and a blood vessel segmentation probability map which are output by the image segmentation model, wherein the image segmentation model is obtained based on contrast image sample set training; and
and determining an aneurysm class segmentation map according to the aneurysm probability threshold value and the aneurysm probability map.
2. The image segmentation method as set forth in claim 1, wherein the contrast image sample set includes an aneurysm label and a vessel segmentation label, and wherein the training based on the contrast image sample set to obtain the image segmentation model comprises:
acquiring the contrast image sample set;
carrying out data labeling processing on each imaging sample image in the imaging sample set to obtain each labeling sample image, wherein each labeling sample image comprises the aneurysm labeling and the blood vessel segmentation labeling;
preprocessing each labeling sample image to obtain each preprocessed sample image;
carrying out data enhancement processing on each preprocessed sample image to obtain a target image sample set; and
and inputting the target image sample set into an initial segmentation model for training to obtain the image segmentation model.
3. The image segmentation method according to claim 2, wherein the performing data labeling processing on each of the imaging sample images in the imaging sample set includes:
carrying out blood vessel segmentation extraction on each imaging sample image through a threshold extraction algorithm to obtain blood vessel segmentation information;
performing three-dimensional vascular reduction on vascular segmentation information corresponding to each imaging sample image to obtain a three-dimensional vascular model corresponding to each imaging sample image;
determining an aneurysm region of interest in a three-dimensional blood vessel model corresponding to each of the imaging sample images;
determining aneurysm pixel coordinates according to the three-dimensional blood vessel model and the aneurysm region of interest; and
and carrying out data labeling processing on the pixel coordinates of the aneurysm to obtain the aneurysm label.
4. The image segmentation method according to claim 3, wherein after the vessel segmentation extraction is performed on each of the imaging sample images by the threshold extraction algorithm to obtain vessel segmentation information, the method further comprises:
and carrying out data labeling processing according to the blood vessel segmentation information to obtain the blood vessel segmentation label.
5. The image segmentation method according to claim 2, characterized in that the preprocessing includes a gray scale normalization process and a resampling process; wherein, the preprocessing each labeling sample image comprises:
carrying out gray level statistics and resolution statistics on each marked sample image to obtain gray level statistics information and resolution statistics information;
determining a gray variance and a gray mean value according to the gray statistical information;
carrying out gray scale normalization processing according to the gray scale variance and the gray scale mean;
determining a target resolution according to the resolution statistics; and
and carrying out resampling processing according to the target resolution.
6. The image segmentation method as set forth in claim 2, wherein the initial segmentation model comprises an encoder and a decoder; wherein the inputting the target image sample set into an initial segmentation model for training comprises:
inputting target sample images in the target image sample set into the encoder to obtain an aneurysm prediction map and a blood vessel segmentation prediction map which are output by the decoder;
determining a loss function value based on the target loss function, the aneurysm prediction map and the vessel segmentation prediction map; and
and optimizing model parameters of the initial segmentation model according to the loss function value, and determining whether to output the image segmentation model or not based on the loss function value.
7. The image segmentation method as set forth in claim 6, wherein the decoder comprises N shared deconvolution layers, M aneurysm deconvolution layers, and M vessel segmentation deconvolution layers, the encoder comprising m+n convolution layers; wherein the shared deconvolution layer, the aneurysm deconvolution layer and the vessel segmentation deconvolution layer respectively comprise a deconvolution core and a Relu activation layer; the convolution layer comprises a convolution kernel and a Relu activation layer;
the inputting the target sample image in the target image sample set into the encoder, and obtaining the aneurysm prediction map and the blood vessel segmentation prediction map output by the decoder includes:
inputting target sample images in the target image sample set into the encoder for downsampling to obtain first intermediate feature data;
inputting the first intermediate characteristic data into the N layers of shared deconvolution layers for up-sampling to obtain second intermediate characteristic data;
and respectively inputting the second intermediate characteristic data into the M-layer aneurysm deconvolution layer and the M-layer blood vessel segmentation deconvolution layer to obtain the aneurysm prediction map and the blood vessel segmentation prediction map.
8. The image segmentation method as set forth in claim 6, wherein the determining a loss function value based on the objective loss function, the aneurysm prediction map and the vessel segmentation prediction map comprises:
determining a first loss value according to an aneurysm prediction probability value, an aneurysm prediction category, the target loss function and the aneurysm marking of each pixel point in the aneurysm prediction graph;
determining a second loss value according to the blood vessel segment prediction probability value, the blood vessel prediction segment class, the target loss function and the blood vessel segment label of each pixel point in the blood vessel segment prediction graph;
and determining the loss function value according to the first loss value and the second loss value.
9. The image segmentation method as set forth in claim 6, wherein the determining whether to output the image segmentation model based on the loss function value comprises:
if the training iteration number reaches the preset number, and/or if the loss function value obtained by continuously k times in each loss function value is not reduced, stopping updating the model parameters of the initial segmentation model and outputting the image segmentation model.
10. The image segmentation method as set forth in claim 1, further comprising, after the determining an aneurysm class segmentation map from an aneurysm probability threshold value and the aneurysm probability map:
and determining a blood vessel segmentation class segmentation map according to the blood vessel segmentation probability threshold and the blood vessel segmentation probability map.
11. An electronic device, comprising:
a processor; and
a memory having stored thereon program code for image segmentation, which when executed by the processor, causes the electronic device to implement the method of any of claims 1-10.
12. A non-transitory machine readable storage medium having stored thereon program code for image segmentation, which when executed by a processor, causes the method of any of claims 1-10 to be implemented.
CN202310919972.XA 2023-07-25 2023-07-25 Image segmentation method, electronic device and storage medium Pending CN116958551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310919972.XA CN116958551A (en) 2023-07-25 2023-07-25 Image segmentation method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN116958551A true CN116958551A (en) 2023-10-27

Family

ID=88447239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310919972.XA Pending CN116958551A (en) 2023-07-25 2023-07-25 Image segmentation method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116958551A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012166A (en) * 2021-03-19 2021-06-22 北京安德医智科技有限公司 Intracranial aneurysm segmentation method and device, electronic device, and storage medium
CN113066061A (en) * 2021-03-24 2021-07-02 同心医联科技(北京)有限公司 Aneurysm detection method, system, terminal and medium based on MRA
CN113436166A (en) * 2021-06-24 2021-09-24 深圳市铱硙医疗科技有限公司 Intracranial aneurysm detection method and system based on magnetic resonance angiography data
WO2022245946A1 (en) * 2021-05-18 2022-11-24 Daniel Ezra Walzman Orientable intravascular devices and methods
US11538163B1 (en) * 2022-01-06 2022-12-27 Rowan University Training a neural network for a predictive aortic aneurysm detection system
US20230134402A1 (en) * 2020-06-30 2023-05-04 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for determining blood vessel parameters

Similar Documents

Publication Publication Date Title
CN111080660B (en) Image segmentation method, device, terminal equipment and storage medium
CN111598862B (en) Breast molybdenum target image segmentation method, device, terminal and storage medium
CN110689525B (en) Method and device for identifying lymph nodes based on neural network
CN110223300A (en) CT image abdominal multivisceral organ dividing method and device
CN112862830B (en) Multi-mode image segmentation method, system, terminal and readable storage medium
CN112446892A (en) Cell nucleus segmentation method based on attention learning
CN112991365B (en) Coronary artery segmentation method, system and storage medium
CN112396605B (en) Network training method and device, image recognition method and electronic equipment
CN113936011A (en) CT image lung lobe image segmentation system based on attention mechanism
CN114742802B (en) Pancreas CT image segmentation method based on 3D transform mixed convolution neural network
CN116245832B (en) Image processing method, device, equipment and storage medium
CN110570394A Medical image segmentation method, device, equipment and storage medium
CN114693671B (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN115861250A (en) Self-adaptive data set semi-supervised medical image organ segmentation method and system
CN117115184A (en) Training method and segmentation method of medical image segmentation model and related products
CN114972382A (en) Brain tumor segmentation algorithm based on lightweight UNet + + network
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement
CN116664513A (en) Intracranial aneurysm detection method, device and equipment based on nuclear magnetic resonance image
CN113807354B (en) Image semantic segmentation method, device, equipment and storage medium
CN116958551A (en) Image segmentation method, electronic device and storage medium
CN116563305A (en) Segmentation method and device for abnormal region of blood vessel and electronic equipment
CN110310314A (en) Method for registering images, device, computer equipment and storage medium
CN112561802B (en) Interpolation method of continuous sequence images, interpolation model training method and system thereof
CN112862785B (en) CTA image data identification method, device and storage medium
CN115359005A (en) Image prediction model generation method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination