CN116912214B - Method, apparatus and storage medium for segmenting aneurysm detection image - Google Patents

Method, apparatus and storage medium for segmenting aneurysm detection image

Info

Publication number
CN116912214B
CN116912214B (application CN202310890952.4A)
Authority
CN
China
Prior art keywords
unet3
aneurysm
model
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310890952.4A
Other languages
Chinese (zh)
Other versions
CN116912214A (en)
Inventor
张鸿祺
耿介文
王雅栋
赵智群
于舒
秦岚
黄煜飞
杨光明
印胤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuanwu Hospital
Original Assignee
Xuanwu Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuanwu Hospital filed Critical Xuanwu Hospital
Priority to CN202310890952.4A priority Critical patent/CN116912214B/en
Publication of CN116912214A publication Critical patent/CN116912214A/en
Application granted granted Critical
Publication of CN116912214B publication Critical patent/CN116912214B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a method, apparatus and storage medium for segmenting an aneurysm detection image. The method comprises the following steps: acquiring an aneurysm detection image; processing the aneurysm detection image with a Unet3+ coarse segmentation model to obtain a region of interest; and processing the region of interest with a Unet3+ fine segmentation model to obtain an aneurysm feature image. The Unet3+ coarse segmentation model first segments the region of interest from the aneurysm detection image, narrowing the search range, and the Unet3+ fine segmentation model then segments the aneurysm feature image from that region of interest. The strong feature-extraction capability of the two models completes automatic segmentation of the aneurysm feature image efficiently, and the two-stage segmentation combining coarse and fine granularity enhances segmentation precision and improves aneurysm segmentation accuracy, which in turn ensures the accuracy of screening results.

Description

Method, apparatus and storage medium for segmenting aneurysm detection image
Technical Field
The present disclosure relates generally to the field of medical image processing technology. More particularly, the present disclosure relates to a method, apparatus, and storage medium for segmenting an aneurysm detection image.
Background
An aneurysm is a localized or diffuse dilation or bulging of an arterial wall caused by disease of or damage to the wall. Aneurysms span a wide range of sizes, occur at varying locations, and take on varying morphologies: they can be as small as two millimeters in diameter or as large as tens of millimeters, and can occur anywhere in the cerebral vasculature. In addition, because many aneurysms are saccular, they are very difficult to identify, which greatly increases the possibility of missed diagnosis.
Currently, magnetic resonance angiography (MRA) is generally used clinically as an important means of aneurysm screening: a doctor must analyze a large number of medical images to identify the characteristics of an aneurysm and thereby complete the screening.
Given the wide size range, variable positions and varied morphology of aneurysms, they are difficult to distinguish from the medical-image background with the naked eye alone, so false detections and missed detections are easily caused, affecting the accuracy of screening results.
In view of the foregoing, it is desirable to provide an aneurysm detection image segmentation scheme that can quickly and accurately segment clear aneurysm features, especially saccular aneurysms, from an image, providing a reliable basis for aneurysm screening and improving the accuracy of screening results.
Disclosure of Invention
To address at least one or more of the technical problems mentioned above, the present disclosure proposes, in various aspects, an aneurysm detection image segmentation scheme.
In a first aspect, the present disclosure provides a method for segmenting an aneurysm detection image, comprising: acquiring an aneurysm detection image; processing the aneurysm detection image with a Unet3+ coarse segmentation model to obtain a region of interest; and processing the region of interest with a Unet3+ fine segmentation model to obtain an aneurysm feature image.
In some embodiments, before processing the aneurysm detection image with the Unet3+ coarse segmentation model, the method further comprises: training a Unet3+ neural network with aneurysm detection image samples to generate the Unet3+ coarse segmentation model; processing the aneurysm detection image samples with the Unet3+ coarse segmentation model to obtain region-of-interest samples; and training the Unet3+ neural network with the region-of-interest samples to generate the Unet3+ fine segmentation model.
In some embodiments, the loss function of the Unet3+ coarse segmentation model is a weighted sum of a first Dice loss function and a first Focal loss function, and the loss function of the Unet3+ fine segmentation model is a weighted sum of a second Dice loss function and a second Focal loss function.
In some embodiments, processing the aneurysm detection image with the Unet3+ coarse segmentation model to obtain the region of interest comprises: downsampling the aneurysm detection image at the encoding layers of the Unet3+ coarse segmentation model to obtain image feature maps of different scales; fusing, at a decoding layer of the Unet3+ coarse segmentation model, small-scale image feature maps from the encoding layers, the same-scale image feature map and large-scale image feature maps from other decoding layers to obtain a coarse segmentation feature image; and extracting the region of interest based on the coarse segmentation feature image. Processing the region of interest with the Unet3+ fine segmentation model to obtain the aneurysm feature image comprises: downsampling the region of interest at the encoding layers of the Unet3+ fine segmentation model to obtain region feature maps of different scales; and fusing, at a decoding layer of the Unet3+ fine segmentation model, small-scale region feature maps from the encoding layers, the same-scale region feature map and large-scale region feature maps from other decoding layers to obtain the aneurysm feature image.
In some embodiments, the loss function $L_{Unet\text{-}1}$ of the Unet3+ coarse segmentation model is as follows:

$$L_{Unet\text{-}1}=\omega L_{Dice\text{-}1}+(1-\omega)L_{Focal\text{-}1},\quad L_{Dice\text{-}1}=1-\frac{2\sum_{i=1}^{N}y_i t_i+\varepsilon}{\sum_{i=1}^{N}y_i+\sum_{i=1}^{N}t_i+\varepsilon},\quad L_{Focal\text{-}1}=-\alpha_t\,(1-p_t)^{\gamma}\log(p_t)$$

where $\omega$ is the weight of the first Dice loss function $L_{Dice\text{-}1}$ and $(1-\omega)$ the weight of the first Focal loss function $L_{Focal\text{-}1}$; $y_i$ is the predicted value and $t_i$ the true value of the $i$-th pixel of a training sample of the Unet3+ coarse segmentation model; $\varepsilon$ is an adjustment parameter of the model; $N$ is the total number of pixels in the training sample; $\alpha_t$ and $\gamma$ are adjustable factors; and $p_t$ reflects how close the model's predicted value is to the true value: the larger $p_t$, the more accurate the classification. The loss function $L_{Unet\text{-}2}$ of the Unet3+ fine segmentation model is as follows:

$$L_{Unet\text{-}2}=\sigma L_{Dice\text{-}2}+(1-\sigma)L_{Focal\text{-}2},\quad L_{Dice\text{-}2}=1-\frac{2\sum_{j=1}^{M}u_j r_j+\varepsilon'}{\sum_{j=1}^{M}u_j+\sum_{j=1}^{M}r_j+\varepsilon'},\quad L_{Focal\text{-}2}=-\beta_t\,(1-q_t)^{\tau}\log(q_t)$$

where $\sigma$ is the weight of the second Dice loss function $L_{Dice\text{-}2}$ and $(1-\sigma)$ the weight of the second Focal loss function $L_{Focal\text{-}2}$; $u_j$ is the predicted value and $r_j$ the true value of the $j$-th pixel of a training sample of the Unet3+ fine segmentation model; $\varepsilon'$ is an adjustment parameter of the model; $M$ is the total number of pixels in the training sample; $\beta_t$ and $\tau$ are adjustable factors; and $q_t$ reflects how close the model's predicted value is to the true value: the larger $q_t$, the more accurate the classification.
In some embodiments, before training the Unet3+ neural network with the aneurysm detection image samples, the method comprises: performing aneurysm data labeling and data enhancement processing on a three-dimensional medical image; and obtaining the aneurysm detection image samples based on the processed three-dimensional medical image.
In some embodiments, obtaining the aneurysm detection image samples based on the processed three-dimensional medical image comprises: randomly visiting a number of pixel positions on the three-dimensional medical image; collecting, centered on each visited pixel position, several image blocks with a resolution of [0.5±0.1, 0.4±0.08, 0.4±0.08] and a size of [56, 224, 224]; and taking the image blocks as the aneurysm detection image samples.
In some embodiments, after obtaining the region-of-interest samples, the method further comprises: randomly visiting a number of pixel positions on the region-of-interest samples; and collecting, centered on each visited pixel position, several region blocks with a resolution of [0.25±0.05, 0.2±0.04, 0.2±0.04] and a size of [56, 224, 224]. Training the Unet3+ neural network with the region-of-interest samples then comprises training the Unet3+ neural network with these region blocks.
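The block-sampling step described in the embodiments above can be sketched in a few lines of numpy. This is an illustrative sketch only: the volume shape, the random seed and the helper name `sample_blocks` are assumptions, and resampling to the stated physical resolution is presumed to have been done beforehand.

```python
import numpy as np

def sample_blocks(volume, n_blocks, size=(56, 224, 224), seed=0):
    """Collect n_blocks fixed-size blocks centered on random voxel positions."""
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(n_blocks):
        # pick a random center, then clamp so the block stays inside the volume
        center = [int(rng.integers(0, d)) for d in volume.shape]
        starts = [min(max(0, c - s // 2), d - s)
                  for c, s, d in zip(center, size, volume.shape)]
        blocks.append(volume[tuple(slice(st, st + s)
                                   for st, s in zip(starts, size))])
    return blocks

vol = np.random.rand(64, 256, 256).astype(np.float32)  # assumed volume shape
blocks = sample_blocks(vol, n_blocks=3)
print(len(blocks), blocks[0].shape)  # 3 (56, 224, 224)
```

Clamping the block start to the volume bounds guarantees every block has the full [56, 224, 224] size even when the random center lies near an edge.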
In a second aspect, the present disclosure provides an electronic device comprising: a processor; and a memory storing program instructions for segmenting an aneurysm detection image, which when executed by the processor cause the device to implement the method according to any of the first aspects.
In a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon computer-readable instructions for segmenting an aneurysm detection image, the computer-readable instructions, when executed by one or more processors, implementing the method of any of the first aspects.
By the method for segmenting an aneurysm detection image provided above, embodiments of the present disclosure segment the region of interest from the aneurysm detection image with the Unet3+ coarse segmentation model, reducing the search range, and then segment the aneurysm feature image from the region of interest with the Unet3+ fine segmentation model. The strong feature-extraction capability of the two models completes automatic segmentation of the aneurysm feature image efficiently, and the two-stage segmentation combining coarse and fine granularity enhances segmentation precision and improves aneurysm segmentation accuracy, which in turn ensures the accuracy of screening results.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 illustrates an exemplary flowchart of a method of segmentation of an aneurysm detection image according to some embodiments of the present disclosure;
fig. 2 shows an exemplary block diagram of a unet3+ neural network;
FIG. 3 illustrates an exemplary flow chart of a model training method of some embodiments of the present disclosure;
FIG. 4 illustrates an exemplary flow chart of a method of preprocessing an image sample in accordance with some embodiments of the present disclosure;
FIG. 5 illustrates an exemplary flow chart of a method for blocking an image sample in accordance with some embodiments of the present disclosure;
fig. 6 shows an exemplary block diagram of the electronic device of an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the disclosure. Based on the embodiments in this disclosure, all other embodiments that may be made by those skilled in the art without the inventive effort are within the scope of the present disclosure.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present disclosure is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present disclosure and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the claims, the term "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if a [described condition or event] is detected" may be interpreted as "upon determining", "in response to determining", "upon detecting the [described condition or event]" or "in response to detecting the [described condition or event]".
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Exemplary application scenarios
Aneurysms are characterized by a wide range of sizes, varying locations, and varying morphology. In particular, small aneurysms can be 2 mm or less in diameter, making them very difficult to identify in whole-brain medical images. Aneurysm features in whole-brain medical images are hard to distinguish with the naked eye, accurate contours of the aneurysm body are difficult to separate even when features are extracted by machine, and the risk of missed detection is high.
Currently, MR angiography remains an important clinical tool for aneurysm screening, and a doctor must examine a large number of a patient's medical images to complete the screening. Such manual screening based on large numbers of medical images not only consumes a great deal of the doctor's time but can also introduce human error, affecting the reliability and accuracy of the screening results.
In view of this, embodiments of the present disclosure provide an aneurysm detection image segmentation scheme that cascades two segmentation models of different granularity, a Unet3+ coarse segmentation model and a Unet3+ fine segmentation model, to perform two-stage segmentation of an aneurysm detection image. This enhances the precision of aneurysm feature segmentation, improves aneurysm segmentation accuracy, and thereby ensures the accuracy of screening results.
Fig. 1 illustrates an exemplary flowchart of a method 100 of segmentation of an aneurysm detection image according to some embodiments of the present disclosure.
As shown in fig. 1, in step S101, an aneurysm detection image is acquired.
In this embodiment, the aneurysm detection image may be obtained by magnetic resonance angiography; further, the aneurysm detection image may be a time-of-flight MR angiography (TOF MRA) image.
In step S102, the aneurysm detection image is processed using the Unet3+ coarse segmentation model to obtain a region of interest.
In this embodiment, the Unet3+ coarse segmentation model adopts an encoder-decoder structure based on the Unet3+ neural network. In this structure, each decoding layer fuses small-scale and same-scale feature maps from the encoding layers with large-scale feature maps from the decoding layers; these feature maps capture fine-grained and coarse-grained semantics at full scale, so fusing aneurysm features of different scales improves the segmentation accuracy of the model.
To facilitate understanding of how the Unet3+ coarse segmentation model merges multi-scale features, its segmentation process is described below with reference to fig. 2, which shows an exemplary structure diagram of the Unet3+ neural network.
As shown in fig. 2, the Unet3+ neural network comprises encoding layers X1 to X5 and decoding layers FX1 to FX4, with skip connections between them, so that feature maps of different scales can be aggregated in one decoding layer. In each encoding layer, the input is passed through two 3×3 convolutions, followed immediately by data normalization, then a ReLU activation, and finally a downsampling operation that halves the resolution of the extracted feature map. Note that the last encoding layer, i.e. the bottommost layer of the network, performs no downsampling after its convolutions. In the Unet3+ neural network, the feature map of each decoding layer is formed by stitching together feature maps of five scales. Taking decoding layer FX3 as an example, its feature map is stitched from the small-scale feature maps of encoding layers X1 and X2, the same-scale feature map of X3 on the same level, and the large-scale feature maps of decoding layer FX4 and encoding layer X5, each after certain operations. The small-scale features provide fine-grained semantics and the large-scale features provide coarse-grained semantics, so the full-scale feature maps are exploited to the greatest extent and segmentation precision is improved.
Specifically, the feature map of encoding layer X1 first undergoes non-overlapping max-pooling to reach the same resolution as FX3, to allow the subsequent stitching operation, and then a 3×3 convolution and an activation function in sequence. The feature map of encoding layer X2 likewise undergoes non-overlapping max-pooling to reach the same resolution as FX3, followed by a 3×3 convolution and an activation function. Encoding layer X3 is on the same level as decoding layer FX3, so its feature map needs only a 3×3 convolution and an activation function. The feature map of decoding layer FX4 first undergoes bilinear upsampling to increase its resolution, then a 3×3 convolution and an activation function. The feature map of encoding layer X5 also requires bilinear upsampling to increase its resolution before its 3×3 convolution and activation function.
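The resolution matching just described can be sketched with numpy, using 2-D feature maps for brevity (the patent operates on 3-D volumes) and nearest-neighbour upsampling as a stand-in for the bilinear upsampling; the channel counts and map sizes are illustrative assumptions, and the 3×3 convolutions and activations are omitted.

```python
import numpy as np

def max_pool(x, k):
    # non-overlapping k x k max-pooling on a (C, H, W) feature map
    c, h, w = x.shape
    return x.reshape(c, h // k, k, w // k, k).max(axis=(2, 4))

def upsample(x, k):
    # nearest-neighbour upsampling by factor k (stand-in for bilinear)
    return x.repeat(k, axis=1).repeat(k, axis=2)

# hypothetical feature maps at 5 scales (channels, H, W)
X1  = np.random.rand(16, 64, 64)   # encoder, full resolution
X2  = np.random.rand(32, 32, 32)
X3  = np.random.rand(64, 16, 16)   # same level as decoder FX3
FX4 = np.random.rand(64, 8, 8)     # decoder one level deeper
X5  = np.random.rand(256, 4, 4)    # encoder bottleneck

# bring every map to FX3's 16x16 resolution, then stitch along channels
parts = [max_pool(X1, 4), max_pool(X2, 2), X3, upsample(FX4, 2), upsample(X5, 4)]
FX3_in = np.concatenate(parts, axis=0)
print(FX3_in.shape)  # (432, 16, 16)
```

All five sources end up at FX3's spatial resolution, so the stitching is a plain channel-wise concatenation; in the actual network each part would additionally pass through its 3×3 convolution and activation before being concatenated.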
Based on the encoder-decoder structure of the Unet3+ neural network, the process of obtaining the region of interest with the Unet3+ coarse segmentation model in this embodiment is as follows:

first, the aneurysm detection image is downsampled at the encoding layers of the Unet3+ coarse segmentation model to obtain image feature maps of different scales;

then, at each decoding layer of the Unet3+ coarse segmentation model, small-scale image feature maps from the encoding layers, the same-scale image feature map and large-scale image feature maps from other decoding layers are fused to obtain a coarse segmentation feature image;

finally, the region of interest is extracted based on the coarse segmentation feature image.

In this process, a region of interest of suitable size may be selected based on the coarse segmentation feature image; the size of the region of interest may be set according to the actual situation and is not limited herein.
In some embodiments, the loss function of the Unet3+ coarse segmentation model is a weighted sum of the first Dice loss function and the first Focal loss function. The Dice loss function is a metric for evaluating the similarity of two samples; when the model is used for image segmentation, its core idea is to compute the overlap between the predicted segmentation and the true segmentation, thereby evaluating the accuracy of the model.
The Dice loss function is robust to class imbalance between positive and negative examples, and in particular alleviates the negative influence of foreground-background imbalance in the samples. In this embodiment, foreground-background imbalance refers to the situation where most regions of the image contain no aneurysm features and only a small portion do. However, because training with the Dice loss function focuses on mining the foreground region, it suffers from loss saturation; this embodiment therefore also introduces a Focal loss function when constructing the loss function of the Unet3+ coarse segmentation model.
The Focal loss function is based on binary cross entropy; its core idea is to dynamically down-weight easily distinguished samples during training through a dynamic scaling factor, so that training quickly focuses on the samples that are difficult to distinguish.
Compared with the cross-entropy loss, the Focal loss is almost unchanged for inaccurately classified samples but much smaller for accurately classified samples. When used for image segmentation, the Focal loss function therefore effectively increases the weight of inaccurately segmented samples in the loss function, biasing it toward hard-to-segment samples and improving their accuracy.
Specifically, the loss function $L_{Unet\text{-}1}$ of the Unet3+ coarse segmentation model can be expressed as follows:

$$L_{Unet\text{-}1}=\omega L_{Dice\text{-}1}+(1-\omega)L_{Focal\text{-}1},\quad L_{Dice\text{-}1}=1-\frac{2\sum_{i=1}^{N}y_i t_i+\varepsilon}{\sum_{i=1}^{N}y_i+\sum_{i=1}^{N}t_i+\varepsilon},\quad L_{Focal\text{-}1}=-\alpha_t\,(1-p_t)^{\gamma}\log(p_t)$$

where $\omega$ is the weight of the first Dice loss function $L_{Dice\text{-}1}$ and $(1-\omega)$ the weight of the first Focal loss function $L_{Focal\text{-}1}$; $y_i$ is the predicted value and $t_i$ the true value of the $i$-th pixel of a training sample of the Unet3+ coarse segmentation model; $\varepsilon$ is an adjustment parameter of the model; $N$ is the total number of pixels in the training sample; $\alpha_t$ and $\gamma$ are adjustable factors; and $p_t$ reflects how close the model's predicted value is to the true value: the larger $p_t$, the more accurate the classification.
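A weighted Dice-plus-Focal loss of this kind can be sketched in numpy as follows. The default values of `omega`, `eps`, `alpha` and `gamma` are illustrative assumptions (the patent leaves ω, ε, α_t and γ tunable), and the standard Dice and Focal forms are used.

```python
import numpy as np

def coarse_loss(y, t, omega=0.5, eps=1.0, alpha=0.25, gamma=2.0):
    """Weighted sum of a Dice loss and a Focal loss (illustrative sketch).

    y: predicted foreground probabilities in [0, 1]; t: binary ground truth.
    omega, eps, alpha, gamma stand in for the patent's ω, ε, α_t and γ.
    """
    y, t = y.ravel(), t.ravel()
    # Dice loss: one minus the (smoothed) overlap between prediction and truth
    dice = 1.0 - (2.0 * np.sum(y * t) + eps) / (np.sum(y) + np.sum(t) + eps)
    # Focal loss: p_t is the probability assigned to each pixel's true class
    p_t = np.where(t == 1, y, 1.0 - y)
    focal = np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + 1e-7))
    return omega * dice + (1.0 - omega) * focal

t = np.zeros((4, 4)); t[1:3, 1:3] = 1            # small foreground region
good = np.where(t == 1, 0.95, 0.05)              # confident, accurate prediction
poor = np.where(t == 1, 0.30, 0.40)              # inaccurate prediction
print(coarse_loss(good, t) < coarse_loss(poor, t))  # True
```

The Dice term rewards overlap with the small foreground region regardless of how large the background is, while the Focal term's $(1-p_t)^{\gamma}$ factor suppresses the contribution of confidently correct pixels, which matches the foreground-background imbalance discussion above.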
In step S103, the region of interest is processed using the Unet3+ fine segmentation model to obtain an aneurysm feature image.
In this step, the input of the Unet3+ fine segmentation model is the region of interest obtained from the Unet3+ coarse segmentation model. The method of the present disclosure first determines the rough position of the aneurysm with the Unet3+ coarse segmentation model to narrow the search range, then performs fine contour segmentation within that position with the Unet3+ fine segmentation model, thereby obtaining an aneurysm feature image with clear aneurysm edges and an accurate aneurysm position.
Further, after a TOF MRA image is input into the Unet3+ coarse segmentation model and the region of interest is obtained, the region of interest is input into the Unet3+ fine segmentation model for processing; the Unet3+ fine segmentation model outputs the aneurysm feature image and may also output information such as the aneurysm position coordinates with their probability value and the aneurysm size.
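The coarse-to-fine cascade can be sketched schematically as follows. The two thresholding "models" here are mere stand-ins for the trained Unet3+ models, and the ROI size and volume shape are illustrative assumptions.

```python
import numpy as np

def coarse_model(volume):
    # stand-in for the trained Unet3+ coarse segmentation model:
    # returns a rough foreground-probability map of the same shape
    return (volume > volume.mean()).astype(float)

def fine_model(roi):
    # stand-in for the trained Unet3+ fine segmentation model
    return (roi > 0.5).astype(float)

def crop_around(shape, center, size):
    # fixed-size crop centered on the coarse detection, clamped to the volume
    starts = [min(max(0, c - s // 2), d - s)
              for c, s, d in zip(center, size, shape)]
    return tuple(slice(st, st + s) for st, s in zip(starts, size))

def segment(volume, roi_size=(8, 32, 32)):
    """Two-stage segmentation: the coarse model localizes, the fine model delineates."""
    prob = coarse_model(volume)                              # step S102
    center = np.unravel_index(np.argmax(prob), prob.shape)   # strongest response
    slices = crop_around(volume.shape, center, roi_size)     # region of interest
    return slices, fine_model(volume[slices])                # step S103

vol = np.random.rand(56, 224, 224).astype(np.float32)
slices, mask = segment(vol)
print(mask.shape)  # (8, 32, 32)
```

Returning the crop slices alongside the mask lets the fine segmentation be mapped back into the coordinate frame of the original volume, which is how the cascade can also report aneurysm position coordinates.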
In this embodiment, the Unet3+ fine segmentation model also adopts the encoder-decoder structure based on the Unet3+ neural network, and improves the segmentation accuracy of the model by fusing full-scale aneurysm features and accommodating both fine-grained and coarse-grained semantics.
The encoder-decoder structure of the Unet3+ neural network has been described in detail above and is not repeated here.

Based on this encoder-decoder structure, the process of obtaining the aneurysm feature image with the Unet3+ fine segmentation model is as follows:

first, the region of interest is downsampled at the encoding layers of the Unet3+ fine segmentation model to obtain region feature maps of different scales;

then, at each decoding layer of the Unet3+ fine segmentation model, small-scale region feature maps from the encoding layers, the same-scale region feature map and large-scale region feature maps from other decoding layers are fused to obtain the aneurysm feature image.
Further, the loss function of the Unet3+ fine segmentation model is a weighted sum of the second Dice loss function and the second Focal loss function. Specifically, the loss function $L_{Unet\text{-}2}$ of the Unet3+ fine segmentation model can be expressed as follows:

$$L_{Unet\text{-}2}=\sigma L_{Dice\text{-}2}+(1-\sigma)L_{Focal\text{-}2},\quad L_{Dice\text{-}2}=1-\frac{2\sum_{j=1}^{M}u_j r_j+\varepsilon'}{\sum_{j=1}^{M}u_j+\sum_{j=1}^{M}r_j+\varepsilon'},\quad L_{Focal\text{-}2}=-\beta_t\,(1-q_t)^{\tau}\log(q_t)$$

where $\sigma$ is the weight of the second Dice loss function $L_{Dice\text{-}2}$ and $(1-\sigma)$ the weight of the second Focal loss function $L_{Focal\text{-}2}$; $u_j$ is the predicted value and $r_j$ the true value of the $j$-th pixel of a training sample of the Unet3+ fine segmentation model; $\varepsilon'$ is an adjustment parameter of the model; $M$ is the total number of pixels in the training sample; $\beta_t$ and $\tau$ are adjustable factors; and $q_t$ reflects how close the model's predicted value is to the true value: the larger $q_t$, the more accurate the classification.
In the Unet3+ coarse segmentation model, the values of $\omega$, $\varepsilon$, $\alpha_t$ and $\gamma$ can be adjusted during training so that the segmentation performance of the model meets the requirements; in the Unet3+ fine segmentation model, the values of $\sigma$, $\varepsilon'$, $\beta_t$ and $\tau$ can likewise be adjusted during training so that its segmentation performance meets the requirements.
It can be appreciated that the present disclosure obtains a highly accurate, highly generalizable aneurysm segmentation model by cascading the two models, the Unet3+ coarse segmentation model and the Unet3+ fine segmentation model. Before performing the segmentation task with this model, the Unet3+ coarse segmentation model and the Unet3+ fine segmentation model can be trained and tuned using the model training method shown in fig. 3. FIG. 3 illustrates an exemplary flow chart of a model training method 300 of some embodiments of the present disclosure.
As shown in fig. 3, in step S301, the Unet3+ neural network is trained using the aneurysm detection image samples to generate a Unet3+ coarse segmentation model.
In this embodiment, the aneurysm detection image sample used in step S301 is provided with an aneurysm data annotation, which may comprise aneurysm position coordinates.
When training the Unet3+ neural network, an aneurysm detection image sample is input into the network, passed through several same-resolution convolution layers, and output to a normalized exponential function (softmax) layer; the loss function of the Unet3+ coarse segmentation model is then computed, and its gradient is propagated back through the network until a Unet3+ coarse segmentation model meeting the requirement is obtained.
Further, the loss function of the Unet3+ coarse segmentation model may be a weighted sum of the first Dice loss function and the first Focal loss function. The specific expression of this loss function has already been given in the foregoing embodiments and is not repeated here.
In step S302, the aneurysm detection image sample is processed using the Unet3+ coarse segmentation model to obtain a region-of-interest sample.
In this embodiment, the aneurysm detection image sample is processed with the trained Unet3+ coarse segmentation model to obtain a coarse segmentation feature image of the aneurysm, and a region-of-interest sample of suitable size is selected from the coarse segmentation feature image. The size of the region-of-interest sample may be set according to the actual situation; the embodiments of the disclosure are not limited in this respect.
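The selection of a region-of-interest sample from the coarse segmentation output can be sketched as a fixed-size crop around the mask centroid. This is an assumed implementation for illustration only: extract_roi is our own name, its roi_size default mirrors the block size used later in this disclosure, and the centroid-and-clamp strategy is our own choice.

```python
import numpy as np

def extract_roi(volume, coarse_mask, roi_size=(56, 224, 224)):
    # Center a fixed-size crop on the centroid of the coarse-segmentation
    # mask, clamping the window so it stays inside the volume.
    coords = np.argwhere(coarse_mask > 0)
    if coords.size == 0:
        return None  # the coarse model found no candidate region
    center = coords.mean(axis=0).astype(int)
    slices = []
    for c, size, dim in zip(center, roi_size, volume.shape):
        lo = int(np.clip(c - size // 2, 0, max(dim - size, 0)))
        slices.append(slice(lo, lo + size))
    return volume[tuple(slices)]
```
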
In step S303, the Unet3+ neural network is trained with the region-of-interest samples to generate the Unet3+ fine segmentation model.
Similar to the training process of the Unet3+ coarse segmentation model, in this embodiment a region-of-interest sample is input into the Unet3+ neural network, passed through several same-resolution convolution layers, and output to a normalized exponential function (softmax) layer; the loss function of the Unet3+ fine segmentation model is then computed, and its gradient is propagated back through the network until a Unet3+ fine segmentation model meeting the requirement is obtained.
Further, the loss function of the Unet3+ fine segmentation model may be a weighted sum of the second Dice loss function and the second Focal loss function. The specific expression of this loss function has already been given in the foregoing embodiments and is not repeated here.
The training effect of the Unet3+ coarse segmentation model depends on the training samples used. To improve the training effect of the model, the three-dimensional medical image needs to be processed before step S301 of the foregoing embodiment is performed, so as to form reliable aneurysm detection image samples.
Some embodiments of the present disclosure provide a method of preprocessing an image sample as shown in fig. 4. Fig. 4 shows an exemplary flowchart of a method 400 of preprocessing an image sample according to some embodiments of the present disclosure.
As shown in fig. 4, in step S401, aneurysm data labeling and data enhancement processing are performed on the three-dimensional medical image.
In this embodiment, the aneurysm data annotation may include the aneurysm position coordinates, and the data enhancement may include, but is not limited to: image rotation, image scaling, image flipping, image blurring, gamma enhancement, and gray-scale normalization. Image rotation, image scaling, and image flipping increase the number of samples; image blurring increases the number of negative samples; gamma enhancement and gray-scale normalization improve sample quality.
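A minimal sketch of some of the listed enhancement operations (rotation, flipping, gamma enhancement, gray-scale normalization) applied to a 3D sample might look as follows. The function name, parameter ranges, and use of NumPy's random Generator are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def augment(volume, rng):
    # One random augmentation pass over a 3D sample (depth, height, width).
    vol = np.rot90(volume, k=int(rng.integers(0, 4)), axes=(1, 2))  # in-plane rotation
    if rng.random() < 0.5:
        vol = vol[:, :, ::-1]                        # image flipping
    gamma = rng.uniform(0.7, 1.5)                    # gamma enhancement
    vol = np.clip(vol, 0.0, None) ** gamma
    return (vol - vol.mean()) / (vol.std() + 1e-8)   # gray-scale normalization
```

After normalization, each augmented sample has approximately zero mean and unit variance, which stabilizes training across scanners with different intensity ranges.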
In step S402, an aneurysm detection image sample is obtained based on the processed three-dimensional medical image.
Because a three-dimensional medical image contains a large number of pixels, training may be performed block-wise (on image patches) in some embodiments to improve training efficiency.

Illustratively, step S402 may be performed as shown in fig. 5. Fig. 5 shows an exemplary flowchart of a method 500 for blocking an image sample according to some embodiments of the present disclosure. It will be appreciated that the method 500 for blocking an image sample is one specific implementation of step S402, and thus the features described above in connection with fig. 4 apply similarly to it.
As shown in fig. 5, in step S501, a plurality of pixel positions on a three-dimensional medical image are accessed.
The accesses in step S501 may be performed in a variety of ways. In one embodiment, pixel positions on the three-dimensional medical image may be accessed randomly. In another embodiment, if an image block containing a complete aneurysm is regarded as foreground data and an image block containing no aneurysm is regarded as background data, respective access probabilities may be set for the foreground data and the background data. Those skilled in the art will appreciate that the disclosed embodiments are not limited in this respect.
In step S502, a number of image blocks of a preset size are collected, each centered on a pixel position accessed in step S501.
In this embodiment, the preset size includes a voxel resolution and a block size. Specifically, the resolution of an image block is [0.25±0.05, 0.2±0.04, 0.2±0.04], one selectable value being [0.25, 0.2, 0.2], and the block size is [56, 224, 224].
In step S503, the collected image blocks are taken as aneurysm detection image samples.
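Steps S501 to S503 can be sketched as a patch sampler that favors foreground (aneurysm-containing) centers with a configurable probability. The sampler below is an illustrative assumption: sample_patch, p_fg, and the window-clamping strategy are our own names and choices, not the disclosed implementation.

```python
import numpy as np

def sample_patch(volume, mask, patch=(56, 224, 224), p_fg=0.5, rng=None):
    # Pick a patch center: with probability p_fg, center on an annotated
    # aneurysm voxel (foreground data); otherwise pick any voxel in the
    # volume (background data). Then clamp the window inside the volume.
    rng = rng or np.random.default_rng()
    fg = np.argwhere(mask > 0)
    if fg.size > 0 and rng.random() < p_fg:
        center = fg[rng.integers(0, len(fg))]
    else:
        center = [int(rng.integers(0, d)) for d in volume.shape]
    slices = []
    for c, size, dim in zip(center, patch, volume.shape):
        lo = int(np.clip(c - size // 2, 0, max(dim - size, 0)))
        slices.append(slice(lo, lo + size))
    return volume[tuple(slices)]
```

Raising p_fg counteracts the severe class imbalance of aneurysm data, since almost all randomly chosen centers would otherwise be background.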
Similarly to the Unet3+ coarse segmentation model, the Unet3+ neural network may, in some embodiments, be trained in a block-wise manner to improve training efficiency and obtain the Unet3+ fine segmentation model.

Illustratively, the process of training the Unet3+ neural network block-wise is as follows:

firstly, a plurality of pixel positions on a region-of-interest sample are accessed;

then, taking each accessed pixel position as the center, a number of region blocks with a resolution of [0.25±0.05, 0.2±0.04, 0.2±0.04] and a size of [56, 224, 224] are collected;

finally, the Unet3+ neural network is trained using these region blocks.
When the plurality of pixel positions on the region-of-interest sample are accessed, the pixel positions may be accessed randomly, or respective access probabilities may be set for foreground data and background data. Those skilled in the art will appreciate that the disclosed embodiments are not limited in this respect.
In summary, the embodiments of the present disclosure provide a method for segmenting an aneurysm detection image: a Unet3+ coarse segmentation model first narrows the search range of aneurysm features, and a Unet3+ fine segmentation model then performs a second segmentation, thereby completing high-precision aneurysm segmentation along a coarse-to-fine segmentation granularity gradient, providing a more reliable and accurate basis for aneurysm screening and further improving the accuracy of screening results.
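The coarse-to-fine cascade summarized above can be sketched end-to-end with the two models passed in as plain callables. This is an illustrative sketch only: cascade_segment, the argmax-based ROI placement, and the small default roi_size (chosen so the example runs quickly) are assumptions, not the disclosed implementation.

```python
import numpy as np

def cascade_segment(volume, coarse_predict, fine_predict, roi_size=(8, 8, 8)):
    # Stage 1: the coarse model scores the whole volume, and a fixed-size
    # region of interest is cropped around its strongest response.
    coarse = coarse_predict(volume)
    center = np.unravel_index(int(np.argmax(coarse)), coarse.shape)
    slices = []
    for c, size, dim in zip(center, roi_size, volume.shape):
        lo = int(np.clip(c - size // 2, 0, max(dim - size, 0)))
        slices.append(slice(lo, lo + size))
    slices = tuple(slices)
    # Stage 2: the fine model segments only the cropped region; the result
    # is pasted back at its original location in the full volume.
    out = np.zeros(volume.shape)
    out[slices] = fine_predict(volume[slices])
    return out
```

Because the fine model only ever sees the cropped region, its capacity is spent on the candidate area proposed by the coarse stage rather than on the whole volume.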
Corresponding to the foregoing functional embodiments, an electronic device 600 as shown in fig. 6 is also provided in the presently disclosed embodiments. Fig. 6 shows an exemplary block diagram of an electronic device 600 of an embodiment of the disclosure.
The electronic device 600 shown in fig. 6 includes: a processor 610; and a memory 620 having stored thereon executable program instructions which, when executed by the processor 610, cause the electronic device to implement any of the methods as described above.
Only the constituent elements related to the present embodiment are shown in the electronic device 600 of fig. 6. It will therefore be apparent to those of ordinary skill in the art that the electronic device 600 may also include common constituent elements other than those shown in fig. 6.
The processor 610 may control the operation of the electronic device 600. For example, the processor 610 controls the operation of the electronic device 600 by executing programs stored in the memory 620 on the electronic device 600. The processor 610 may be implemented by a Central Processing Unit (CPU), an Application Processor (AP), an artificial intelligence processor chip (IPU), etc. provided in the electronic device 600. However, the present disclosure is not limited thereto. In this embodiment, the processor 610 may be implemented in any suitable manner. For example, the processor 610 may take the form of, for example, a microprocessor or processor, and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), a programmable logic controller, and an embedded microcontroller, among others.
The memory 620 may be used to store various data and instructions processed in the electronic device 600. For example, the memory 620 may store processed data and data to be processed in the electronic device 600, as well as data sets that have been processed or are to be processed by the processor 610. Further, the memory 620 may store applications, drivers, and the like to be run on the electronic device 600; for example, it may store various programs related to data enhancement, image segmentation, and the like, to be executed by the processor 610. The memory 620 may be a DRAM, but the present disclosure is not limited thereto. The memory 620 may include at least one of volatile memory or nonvolatile memory. The nonvolatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, phase-change RAM (PRAM), magnetic RAM (MRAM), resistive RAM (RRAM), ferroelectric RAM (FRAM), and the like. Volatile memory may include dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), PRAM, MRAM, RRAM, ferroelectric RAM (FeRAM), and the like. In an embodiment, the memory 620 may include at least one of a hard disk drive (HDD), a solid-state drive (SSD), a CompactFlash (CF) card, a Secure Digital (SD) card, a Micro-SD card, a Mini-SD card, an eXtreme Digital (xD) card, a cache, or a memory stick.
In summary, specific functions implemented by the memory 620 and the processor 610 of the electronic device 600 provided in the embodiments of the present disclosure may be explained in comparison with the foregoing embodiments of the present disclosure, and may achieve the technical effects of the foregoing embodiments, which will not be repeated herein.
Alternatively, the present disclosure may also be implemented as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon computer program instructions (or computer programs, or computer instruction codes) which, when executed by a processor of an electronic device (or electronic device, server, etc.), cause the processor to perform part or all of the steps of the above-described methods according to the present disclosure.
While various embodiments of the present disclosure have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the present disclosure. It should be understood that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure. The appended claims are intended to define the scope of the disclosure and are therefore to cover all equivalents or alternatives falling within the scope of these claims.

Claims (9)

1. A method for segmenting an aneurysm detection image, comprising:
respectively accessing a plurality of pixel point positions positioned in a foreground region and a background region on a three-dimensional medical image according to a preset access probability, wherein the foreground region contains an aneurysm, and the background region does not contain the aneurysm;
collecting a plurality of image blocks by taking the position of each accessed pixel point as the center;
taking the image blocks as aneurysm detection image samples;
training a Unet3+ neural network by using the aneurysm detection image sample to generate a Unet3+ rough segmentation model;
processing the aneurysm detection image sample by using the Unet3+ rough segmentation model to obtain a region-of-interest sample;
training a Unet3+ neural network by using the region-of-interest sample to generate the Unet3+ subdivision model; and
acquiring an aneurysm detection image;
processing the aneurysm detection image by using the Unet3+ rough segmentation model to obtain a region of interest;

and processing the region of interest by using the Unet3+ subdivision model to obtain an aneurysm characteristic image.
2. The method of claim 1, wherein the loss function of the Unet3+ rough segmentation model is a weighted sum of a first Dice loss function and a first Focal loss function, and wherein the loss function of the Unet3+ subdivision model is a weighted sum of a second Dice loss function and a second Focal loss function.
3. The method of claim 1, wherein processing the aneurysm detection image by using a Unet3+ rough segmentation model to obtain a region of interest comprises:

downsampling the aneurysm detection image at an encoding layer of the Unet3+ rough segmentation model to obtain image feature maps of different scales;

fusing, at a decoding layer of the Unet3+ rough segmentation model, a small-scale image feature map from the encoding layer, a same-scale image feature map, and a large-scale image feature map from another decoding layer, to obtain a rough segmentation feature image; and
extracting the region of interest based on the rough segmentation feature image;
wherein processing the region of interest by using a Unet3+ subdivision model to obtain an aneurysm characteristic image comprises:

downsampling the region of interest at an encoding layer of the Unet3+ subdivision model to obtain region feature maps of different scales; and

fusing, at a decoding layer of the Unet3+ subdivision model, a small-scale region feature map from the encoding layer, a same-scale region feature map, and a large-scale region feature map from another decoding layer, to obtain the aneurysm characteristic image.
4. The method according to claim 2, wherein the loss function L_Unet-1 of the Unet3+ rough segmentation model is as follows:

L_Unet-1 = ω·L_Dice-1 + (1 − ω)·L_Focal-1,
L_Dice-1 = 1 − (2·Σ_i y_i·t_i + ε) / (Σ_i y_i + Σ_i t_i + ε),
L_Focal-1 = −(1/N)·Σ_i α_t·(1 − p_t)^γ·ln(p_t),

wherein ω is the weight of the first Dice loss function L_Dice-1, (1 − ω) is the weight of the first Focal loss function L_Focal-1, y_i is the predicted value of the i-th pixel of a training sample of the Unet3+ rough segmentation model, t_i is the true value of the i-th pixel of the training sample, ε is the adjustment parameter of the Unet3+ rough segmentation model, N is the total number of pixels of the training sample, α_t and γ are adjustable factors of the Unet3+ rough segmentation model, and p_t reflects how close the predicted value is to the true value in the Unet3+ rough segmentation model; the larger p_t is, the more accurate the classification;

wherein the loss function L_Unet-2 of the Unet3+ subdivision model is as follows:

L_Unet-2 = σ·L_Dice-2 + (1 − σ)·L_Focal-2,
L_Dice-2 = 1 − (2·Σ_j u_j·r_j + δ) / (Σ_j u_j + Σ_j r_j + δ),
L_Focal-2 = −(1/M)·Σ_j β_t·(1 − q_t)^τ·ln(q_t),

wherein σ is the weight of the second Dice loss function L_Dice-2, (1 − σ) is the weight of the second Focal loss function L_Focal-2, u_j is the predicted value of the j-th pixel of a training sample of the Unet3+ subdivision model, r_j is the true value of the j-th pixel of the training sample, δ is the adjustment parameter of the Unet3+ subdivision model, M is the total number of pixels of the training sample, β_t and τ are adjustable factors of the Unet3+ subdivision model, and q_t reflects how close the predicted value is to the true value in the Unet3+ subdivision model; the larger q_t is, the more accurate the classification.
5. The method of claim 1, wherein before the plurality of pixel positions located in the foreground region and the background region on the three-dimensional medical image are respectively accessed according to the preset access probability, the method comprises:
and labeling the aneurysm data and enhancing the data of the three-dimensional medical image.
6. The method of claim 5, wherein capturing a number of image blocks centered on a pixel location of each access comprises:
collecting, with the pixel position accessed each time as the center, a plurality of image blocks with a resolution of [0.5±0.1, 0.4±0.08, 0.4±0.08] and a size of [56,224,224].
7. The method of claim 1, wherein after obtaining the region of interest sample, the method further comprises:
randomly accessing a plurality of pixel point positions on the region of interest sample; and
taking the pixel point position accessed each time as the center, collecting a plurality of region blocks with a resolution of [0.25±0.05, 0.2±0.04, 0.2±0.04] and a size of [56,224,224];
wherein training a unat3+ neural network using the region of interest sample comprises:
and training the Unet3+ neural network by using the plurality of area blocks.
8. An electronic device, comprising:
a processor; and
a memory storing program instructions for segmenting an aneurysm detection image, which program instructions, when executed by the processor, cause the device to carry out the method according to any of claims 1-7.
9. A computer readable storage medium having stored thereon computer readable instructions for segmenting an aneurysm detection image, the computer readable instructions, when executed by one or more processors, implementing the method of any of claims 1-7.
CN202310890952.4A 2023-07-19 2023-07-19 Method, apparatus and storage medium for segmenting aneurysm detection image Active CN116912214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310890952.4A CN116912214B (en) 2023-07-19 2023-07-19 Method, apparatus and storage medium for segmenting aneurysm detection image


Publications (2)

Publication Number Publication Date
CN116912214A CN116912214A (en) 2023-10-20
CN116912214B (en) 2024-03-22

Family

ID=88367708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310890952.4A Active CN116912214B (en) 2023-07-19 2023-07-19 Method, apparatus and storage medium for segmenting aneurysm detection image

Country Status (1)

Country Link
CN (1) CN116912214B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117637185B (en) * 2024-01-25 2024-04-23 首都医科大学宣武医院 Image-based craniopharyngeal tube tumor treatment auxiliary decision-making method, system and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080657A (en) * 2019-12-13 2020-04-28 北京小白世纪网络科技有限公司 CT image organ segmentation method based on convolutional neural network multi-dimensional fusion
CN113269764A (en) * 2021-06-04 2021-08-17 重庆大学 Automatic segmentation method and system for intracranial aneurysm, sample processing method and model training method
CN113436166A (en) * 2021-06-24 2021-09-24 深圳市铱硙医疗科技有限公司 Intracranial aneurysm detection method and system based on magnetic resonance angiography data
CN114283164A (en) * 2022-03-02 2022-04-05 华南理工大学 Breast cancer pathological section image segmentation prediction system based on UNet3+
CN114677671A (en) * 2022-02-18 2022-06-28 深圳大学 Automatic identifying method for old ribs of preserved szechuan pickle based on multispectral image and deep learning
CN114926477A (en) * 2022-05-16 2022-08-19 东北大学 Brain tumor multi-modal MRI (magnetic resonance imaging) image segmentation method based on deep learning


Also Published As

Publication number Publication date
CN116912214A (en) 2023-10-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant