CN110570394B - Medical image segmentation method, device, equipment and storage medium - Google Patents

Medical image segmentation method, device, equipment and storage medium

Info

Publication number
CN110570394B
CN110570394B CN201910707182.9A
Authority
CN
China
Prior art keywords
medical image
capsule
segmented
image
target
Prior art date
Legal status
Active
Application number
CN201910707182.9A
Other languages
Chinese (zh)
Other versions
CN110570394A (en)
Inventor
胡战利
贺阳素
吴垠
梁栋
杨永峰
刘新
郑海荣
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201910707182.9A priority Critical patent/CN110570394B/en
Priority to PCT/CN2019/110402 priority patent/WO2021017168A1/en
Publication of CN110570394A publication Critical patent/CN110570394A/en
Application granted granted Critical
Publication of CN110570394B publication Critical patent/CN110570394B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The embodiment of the invention discloses a medical image segmentation method, device, equipment, and storage medium, wherein the method comprises the following steps: acquiring a medical image to be segmented comprising a target segmentation region; inputting the medical image to be segmented into a trained neural network model so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmentation region, wherein the neural network model comprises a contraction module and an expansion module; the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature maps of different sizes, and the expansion module is configured to gradually recover the feature maps of different sizes received from the contraction module through a capsule deconvolution layer to generate a feature map of the target size. The method solves the problem that image segmentation methods in the prior art consume a long time.

Description

Medical image segmentation method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of medical image processing, in particular to a medical image segmentation method, a device, equipment and a storage medium.
Background
In order to facilitate clinical diagnosis of a doctor, it is generally necessary to segment a medical image to extract a target organ tissue from an original medical image, so that the doctor can obtain detailed information of the target organ tissue, thereby improving the accuracy of clinical diagnosis of the doctor.
Medical image segmentation methods in the prior art mainly comprise manual segmentation and machine-learning-based segmentation, but either approach requires a long time to complete the segmentation, which limits the application of existing image segmentation methods to relatively complex organ and tissue segmentation.
Disclosure of Invention
The embodiment of the invention provides a medical image segmentation method, device, equipment, and storage medium, to solve the problem that prior-art image segmentation methods are time-consuming.
In a first aspect, an embodiment of the present invention provides a medical image segmentation method, including:
acquiring a medical image to be segmented comprising a target segmentation area;
inputting the medical image to be segmented into a trained neural network model so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmented region, wherein the neural network model comprises a contraction module and an expansion module, and the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature images with different sizes; the expansion module is configured to gradually recover the received different sized feature maps from the contraction module by a capsule deconvolution layer to generate a target sized feature map.
In a second aspect, an embodiment of the present invention further provides a medical image segmentation apparatus, including:
an acquisition section for acquiring a medical image to be segmented including a target segmentation region;
an output part, configured to input the medical image to be segmented into a trained neural network model, so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmented region, where the neural network model includes a contraction module and an expansion module, and the contraction module is configured to downsample the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature maps with different sizes; the expansion module is configured to gradually recover the received different sized feature maps from the contraction module by a capsule deconvolution layer to generate a target sized feature map.
In a third aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image segmentation method as set forth in the first aspect.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer executable instructions which, when executed by a computer processor, are used to perform the image segmentation method as described in the first aspect.
According to the technical scheme of the image segmentation method provided by the embodiment of the invention, the medical image to be segmented is subjected to image segmentation through the trained neural network model to obtain the target segmentation area, the neural network model comprises a contraction module and an expansion module, and the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes so as to extract feature images with different sizes; the expansion module is used for gradually recovering the received feature maps of different sizes from the contraction module through the capsule deconvolution layer to generate a feature map of a target size. Model parameters are reduced through improvement of the neural network model structure, so that the speed and accuracy of image segmentation of the improved neural network model are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image segmentation method according to a first embodiment of the present invention;
FIG. 2A is a schematic illustration of a medical image provided in accordance with a first embodiment of the present invention;
FIG. 2B is a schematic diagram of a first edge image according to an embodiment of the present invention;
FIG. 2C is a schematic diagram of a second edge image according to an embodiment of the present invention;
fig. 2D is a schematic diagram of hough circle positioning according to an embodiment of the invention;
FIG. 3 is a flowchart of image segmentation according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of a neural network model according to an embodiment of the present invention;
fig. 5 is a block diagram of a medical image segmentation apparatus according to a second embodiment of the present invention;
fig. 6 is a block diagram of a device according to a third embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described by means of implementation examples with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Fig. 1 is a flowchart of a medical image segmentation method according to an embodiment of the present invention. The technical scheme of the embodiment is suitable for the situation that the medical image to be segmented is segmented based on the trained neural network model to obtain the target segmented region. The method can be implemented by the image segmentation device provided by the embodiment of the invention, and the device can be implemented in a software and/or hardware mode and is configured to be applied in a processor. The method specifically comprises the following steps:
s101, acquiring a medical image to be segmented containing a target segmentation area.
The medical image to be segmented is the image that directly participates in image segmentation; it may be a complete medical image or a partial medical image containing the target segmentation region. The medical image is a common clinical diagnostic image, such as a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, or a PET (Positron Emission Tomography) image.
In the latter case, the present embodiment preferably first crops the clinical diagnostic medical image to obtain a medical image to be segmented containing the target segmentation region. Specifically, the medical image is cropped about the center point of the target segmentation region on the medical image, so as to generate the medical image to be segmented containing the target segmentation region. It will be appreciated that the size of the medical image to be segmented is smaller than the size of the acquired medical image, and the ratio of the two sizes is generally determined by the size of the target segmentation region. Taking the left ventricle as an example, the size of the medical image to be segmented is preferably set to one half of the size of the original medical image; for example, a typical medical image is 512×512, and the medical image to be segmented is then 256×256.
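The cropping step above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation: the function name and the clamping of the window to the image bounds are assumptions.

```python
import numpy as np

def crop_around_center(image, center, size):
    """Crop a (size x size) window from `image` centered on `center`,
    clamping the window so it stays inside the image bounds (assumption)."""
    h, w = image.shape[:2]
    cy, cx = center
    half = size // 2
    # Clamp the top-left corner so the crop never leaves the image.
    y0 = min(max(cy - half, 0), h - size)
    x0 = min(max(cx - half, 0), w - size)
    return image[y0:y0 + size, x0:x0 + size]

# A 512x512 "medical image" cropped to 256x256 around a detected center.
img = np.zeros((512, 512), dtype=np.float32)
img[300, 200] = 1.0                      # pretend this is the detected LV center
patch = crop_around_center(img, (300, 200), 256)
print(patch.shape)                        # (256, 256)
print(patch[128, 128])                    # 1.0 -- center pixel lands mid-patch
```

For a center well inside the image, the detected point maps to the middle of the 256×256 patch, matching the one-half ratio described above.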
The center point may optionally be determined as follows: first, perform a three-dimensional Fourier transform on the medical image and apply the inverse Fourier transform to the first harmonic of the three-dimensional Fourier transform result to obtain a first edge image; next, perform preset edge detection on the first edge image to obtain a cross-sectional profile image of the target segmentation region; finally, take the center point of the cross-sectional profile in the cross-sectional profile image as the center point of the target segmentation region on the medical image.
It will be appreciated that, as the heart beats periodically, the gray value at each pixel location varies over time, and pixels belonging to the heart vary over a wider range than those of surrounding structures, which distinguishes the heart from its surroundings. Taking MR images as an example, cardiac MR acquisition typically covers multiple cardiac cycles, so the short-axis cardiac MR images of one slice span the entire cardiac cycle, and each slice can be regarded as a two-dimensional image varying over time. The present embodiment therefore performs a three-dimensional Fourier transform along the time axis on each slice, where the three-dimensional Fourier transform is defined as:
$$F(u,v,w)=\sum_{t=0}^{L-1}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(t,x,y)\, e^{-j2\pi\left(\frac{ut}{L}+\frac{vx}{M}+\frac{wy}{N}\right)}$$

where f(t,x,y) is an L×M×N matrix, x = 0, 1, …, M−1; y = 0, 1, …, N−1; t = 0, 1, …, L−1.
Since the periodic cardiac motion within a slice shares the same frequency, after the three-dimensional Fourier transform of the MR image (Fig. 2A), applying the inverse Fourier transform to the first harmonic of the transform result yields a first edge image carrying contour information, see Fig. 2B.
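The first-harmonic idea can be illustrated with a small numpy sketch: pixels whose intensity oscillates at the cardiac frequency survive the harmonic filtering, while static background vanishes. The synthetic cine stack and the exact masking of the temporal spectrum are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Synthetic "cine MR" stack: L frames of an M x N slice in which a disk's
# intensity oscillates once per L frames (the "cardiac" frequency).
L, M, N = 16, 64, 64
t = np.arange(L)
yy, xx = np.mgrid[0:M, 0:N]
disk = ((yy - 32) ** 2 + (xx - 32) ** 2) < 10 ** 2
frames = np.zeros((L, M, N))
frames += 0.5                                    # static background
frames[:, disk] += 0.5 * np.sin(2 * np.pi * t / L)[:, None]

# 3-D Fourier transform; keep only the first temporal harmonic, then invert.
# The magnitude image highlights pixels moving at the cardiac frequency.
F = np.fft.fftn(frames)
mask = np.zeros_like(F)
mask[1] = F[1]                                   # first harmonic along time
edge = np.abs(np.fft.ifftn(mask))[0]             # "first edge image"

print(edge[32, 32] > edge[0, 0])                 # disk outshines background
```

The oscillating disk reconstructs with magnitude about half its oscillation amplitude, while perfectly static pixels reconstruct to zero, which is why the result traces the moving cardiac contours.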
After the first edge image is obtained, its edge information is preferably extracted by Canny edge detection to generate a second edge image (see Fig. 2C) containing the edge information of the target segmentation region as well as that of other regions. Once the second edge image is determined, the center position of the target segmentation region is determined.
Taking the left ventricle as an example, and considering that the left-ventricle cross section is approximately circular, this embodiment extracts circle information from the second edge image by Hough circle detection to obtain a cross-sectional profile image of the target segmentation region (see Fig. 2D), and then takes the center point of the cross-sectional profile in that image as the center point of the target segmentation region on the medical image. It will be appreciated that the second edge image may contain multiple circles; to improve the accuracy of determining the cross-sectional profile, this embodiment preferably first determines the P score of each detected Hough circle, where P is a hyperparameter, and takes the Hough circle with the largest P score as the cross-sectional profile image of the target segmentation region. After the Hough circle is determined, the center point of the cross-sectional profile of the target segmentation region is determined by a Gaussian kernel function combined with the maximum of the LV likelihood surface, and a medical image to be segmented of fixed size is cropped from the medical image with this center point as the center of the left ventricle (see Fig. 3).
The Gaussian function is defined as:

$$G(x,y)=A\exp\!\left(-\left(\frac{(x-x_0)^2}{2\sigma_x^2}+\frac{(y-y_0)^2}{2\sigma_y^2}\right)\right)$$

where $x_0$ and $y_0$ are the center coordinates of the Hough circle, $\sigma_x$ and $\sigma_y$ are the variances (set to fixed values), and A is the accumulated peak value of the Hough circle.
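As a sketch of using the Gaussian likelihood surface to locate the center, the following evaluates the Gaussian above on a pixel grid and takes its maximum. The grid size and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_surface(shape, x0, y0, sigma_x, sigma_y, A):
    """Evaluate G(x, y) = A * exp(-((x-x0)^2/(2*sx^2) + (y-y0)^2/(2*sy^2)))
    on a pixel grid, as an LV likelihood surface."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return A * np.exp(-(((x - x0) ** 2) / (2 * sigma_x ** 2)
                        + ((y - y0) ** 2) / (2 * sigma_y ** 2)))

# Likelihood surface centered on a hypothetical Hough-circle center (x=200, y=300);
# the maximum of the surface recovers the center used for cropping.
G = gaussian_surface((512, 512), x0=200, y0=300, sigma_x=20, sigma_y=20, A=1.0)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(G), G.shape))
print(peak)                                      # (300, 200) as (row, col)
```

In practice several candidate circles can each contribute such a Gaussian (weighted by their accumulator peaks A) and the surfaces are summed before taking the maximum.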
S102, inputting the medical image to be segmented into a trained neural network model, so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmentation region, wherein the neural network model comprises a contraction module and an expansion module; the contraction module is used for downsampling the medical image to be segmented through capsule convolution layers formed by capsule convolutions with different step sizes to extract feature maps of different sizes, and the expansion module is used for gradually recovering the feature maps of different sizes received from the contraction module through capsule deconvolution layers to generate a feature map of the target size.
In order to facilitate a doctor to obtain detailed information of a target segmentation area, image segmentation is generally required to be performed on a medical image to be segmented so as to extract the target segmentation area from the medical image to be segmented, and for this embodiment, a trained neural network model is introduced, and the medical image to be segmented is segmented by using the trained neural network model. Referring to fig. 3, specifically, the medical image to be segmented is input to the trained neural network model, so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain a target segmented region image.
As shown in Fig. 4, the contraction module of this embodiment comprises at least three capsule convolution layers; each capsule convolution layer comprises at least two connected capsule convolutions with different step sizes, the large-step capsule convolution placed after the small-step one. The expansion module comprises at least three capsule deconvolution layers; each capsule deconvolution layer comprises a connected capsule convolution and capsule deconvolution, the capsule convolution being one with a small step size. The number of capsule convolution layers is the same as the number of capsule deconvolution layers. The step size of the large-step capsule convolution is preferably 2, and that of the small-step capsule convolution is preferably 1.
Illustratively, as shown in Fig. 4, the neural network model of this embodiment has a structure similar to, but different from, U-Net: it replaces the convolution and pooling layers with capsule convolution layers and performs upsampling with capsule deconvolution layers. The capsule convolution layers form the contraction stage and the capsule deconvolution layers form the expansion stage. The contraction stage consists of capsule convolution layers for extracting image features, each using a 5×5 convolution kernel. After each capsule convolution of step size 1, the feature map is downsampled by a capsule convolution of step size 2, enabling the network to learn features globally. Each step of the expansion stage comprises upsampling of the feature map through a 4×4 capsule deconvolution, halving the number of feature channels and concatenating with the corresponding feature map from the contraction path. Finally, the target segmentation region is obtained using three layers of 1×1 convolutions.
It will be appreciated that the contraction module is preceded by a convolutional network module comprising a convolution layer, so that the image input to the model passes through a two-dimensional convolution layer to produce 16 feature maps of equal size, which are reshaped into a four-dimensional (128×128×1×16) tensor as the input of the contraction stage. The neural network model shown in Fig. 4 has 16 layers in total, comprising 4 convolution layers, 9 capsule convolution layers, and 3 capsule deconvolution layers. In practical use, the numbers of convolution layers, capsule convolution layers, and capsule deconvolution layers may be adjusted as the situation requires, provided that the number of capsule convolution layers equals the number of capsule deconvolution layers.
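The downsampling and upsampling arithmetic implied by this architecture can be checked with a short sketch. The padding values (2 for the 5×5 convolutions, 1 for the 4×4 deconvolutions) are assumptions chosen to reproduce the size-preserving and halving/doubling behavior described above, not values stated in the patent.

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a strided convolution: floor((i + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel, stride, pad):
    """Spatial output size of a transposed convolution: (i - 1)s - 2p + k."""
    return (size - 1) * stride - 2 * pad + kernel

# Contraction path: 5x5 capsule convs, stride 1 ("same" padding) then stride 2.
s = 128
sizes = [s]
for _ in range(3):                               # three capsule convolution layers
    s = conv_out(s, kernel=5, stride=1, pad=2)   # stride-1: size preserved
    s = conv_out(s, kernel=5, stride=2, pad=2)   # stride-2: size halved
    sizes.append(s)
print(sizes)                                     # [128, 64, 32, 16]

# Expansion path: 4x4 capsule deconvs with stride 2 double the size back.
for _ in range(3):
    s = deconv_out(s, kernel=4, stride=2, pad=1)
print(s)                                         # 128 -- target-size feature map
```

With these choices, three contraction steps take a 128×128 input down to 16×16, and three deconvolution steps recover the target 128×128 size, matching the symmetric layer counts required above.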
According to the technical scheme of the image segmentation method provided by the embodiment of the invention, the medical image to be segmented is subjected to image segmentation through the trained neural network model to obtain the target segmentation area, the neural network model comprises a contraction module and an expansion module, and the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes so as to extract feature images with different sizes; the expansion module is used for gradually recovering the received feature maps of different sizes from the contraction module through the capsule deconvolution layer to generate a feature map of a target size. Model parameters are reduced through improvement of the neural network model structure, so that the speed and accuracy of image segmentation of the improved neural network model are improved.
Example 2
The second embodiment of the invention provides an image segmentation method. On the basis of the above embodiment, it supplements the description of the neural network model structure.
The convolutional neural network of this embodiment comprises a convolutional network module, a contraction module, and an expansion module, wherein the convolutional network module is used to sequentially perform two-dimensional convolution and nonlinear activation on the medical image to be segmented. The two-dimensional convolution is a linear operation with the formula:
$$S_l(i,j)=(I*K_l)(i,j)=\sum_{m}\sum_{n} I(i+m,\,j+n)\,K_l(m,n)$$
where i and j are pixel positions of the medical image to be segmented, I and $K_l$ are respectively the medical image to be segmented and the l-th convolution kernel, and m and n index the width and height of the convolution kernel. After the two-dimensional convolution is computed, its result is optimized through nonlinear activation. The formula of the nonlinear activation is:
$$f(x)=\max(0,x)$$
where $S_l(i,j)$ is the output of the l-th two-dimensional convolution of the previous step, and f(x) is the output of the nonlinear activation.
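A direct numpy rendering of the two-dimensional convolution formula, with a ReLU as an assumed nonlinear activation (the patent's original activation formula is rendered as an image and is not recoverable here):

```python
import numpy as np

def conv2d(I, K):
    """Direct implementation of S(i, j) = sum_m sum_n I(i+m, j+n) * K(m, n)
    (valid cross-correlation, matching the formula above)."""
    m, n = K.shape
    H, W = I.shape
    S = np.zeros((H - m + 1, W - n + 1))
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            S[i, j] = np.sum(I[i:i + m, j:j + n] * K)
    return S

def relu(x):
    """Assumed nonlinear activation f(x) = max(0, x)."""
    return np.maximum(0, x)

I = np.arange(16, dtype=float).reshape(4, 4)
K = np.array([[1.0, 0.0], [0.0, -1.0]])        # simple difference kernel
S = conv2d(I, K)
print(S.shape)          # (3, 3)
print(relu(S)[0, 0])    # max(0, I[0,0] - I[1,1]) = max(0, 0 - 5) = 0.0
```

The nested loops mirror the double summation over m and n; a production implementation would of course use a vectorized or library convolution instead.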
Since the features of the left ventricle have a two-dimensional structure, a four-dimensional (128×128×1×16) tensor is obtained after the medical image to be segmented is input to the convolutional network module, that is, 16 feature maps, and the 16 feature maps are taken as the input of the contraction module.
For the contraction module, the core of capsule convolution is the optimal matching from the outputs of the lower capsule convolution layer to the outputs of the higher capsule convolution layer. In the l-th capsule convolution layer there is a set of capsule types $T^l=\{t_1,\dots,t_k\}$. For each capsule type there is $C=\{c_{11},\dots,c_{1w},\dots,c_{h1},\dots,c_{hw}\}$, i.e., h×w z-dimensional capsules, and each capsule $c_{xy}$ receives the vector predictions $\hat{u}_{xy|t_i}$, where

$$\hat{u}_{xy|t_i}=M_{t_i}\,u_{t_i}$$

$$s_{xy}=\sum_{t_i} r_{t_i|xy}\,\hat{u}_{xy|t_i}$$

where $M_{t_i}$ is the matrix weight, $u_{t_i}$ is the low-level feature, $r_{t_i|xy}$ is the routing coefficient in the routing algorithm, and $s_{xy}$ is the vector corresponding to the capsule. The routing coefficient $r_{t_i|xy}$ is calculated as:

$$r_{t_i|xy}=\frac{\exp(b_{t_i|xy})}{\sum_{t_j}\exp(b_{t_j|xy})}$$

where k is the number of capsule types over which the denominator sums.

The nonlinear transformation formula is:

$$v_{xy}=\frac{\|s_{xy}\|^2}{1+\|s_{xy}\|^2}\cdot\frac{s_{xy}}{\|s_{xy}\|}$$

where $\|s_{xy}\|$ is the norm of $s_{xy}$.
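The squashing nonlinearity can be implemented in a few lines of numpy; the epsilon guard against division by zero is an implementation detail added here, not part of the formula.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Capsule squashing: v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Preserves the vector's direction and maps its norm into [0, 1)."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    norm = np.sqrt(norm2 + eps)
    return (norm2 / (1.0 + norm2)) * (s / norm)

s = np.array([3.0, 4.0])                 # ||s|| = 5
v = squash(s)
print(np.linalg.norm(v))                 # 25/26 = approx. 0.9615
```

Long vectors are squashed to a length just under 1 and short vectors toward 0, which lets a capsule's norm act as an existence probability.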
In the expansion module, the input of the capsule deconvolution layer is padded with zeros at the boundary before the convolution operation is carried out; the convolution formula is as shown in formula (3). The parameters of the capsule deconvolution layer in this embodiment are likewise updated based on the dynamic routing algorithm.
The dynamic routing algorithm takes as parameters $\hat{u}_{xy|t_i}$ (the outputs of the lower capsule convolution layer), d (the number of routing iterations), l (the layer index), and $k_h$ and $k_w$ (the height and width of the kernel, covering all capsule types within the $k_h\times k_w$ window at layer l); $v_{xy}$ denotes the capsule centered at (x, y) of layer l+1. The parameters of the neural network model of this embodiment are updated as follows, where the arrow denotes assignment.

Initialization, for all capsule types $t_i$ at layer l and all capsules xy of layer l+1: $b_{t_i|xy}\leftarrow 0$.

On the d-th iteration:

for all capsule types $t_i$ at layer l: $r_{t_i}\leftarrow \mathrm{softmax}(b_{t_i})$;

for all capsules xy of layer l+1: $s_{xy}\leftarrow \sum_{t_i} r_{t_i|xy}\,\hat{u}_{xy|t_i}$;

for all capsules xy of layer l+1: $v_{xy}\leftarrow \mathrm{squash}(s_{xy})$;

for all capsule types $t_i$ at layer l and all capsules xy of layer l+1: $b_{t_i|xy}\leftarrow b_{t_i|xy}+\hat{u}_{xy|t_i}\cdot v_{xy}$.

Finally, return $v_{xy}$.
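A minimal numpy sketch of the routing loop, assuming the predictions have already been gathered into a (capsule types × parent capsules × dimension) array; the array layout and iteration count are illustrative assumptions.

```python
import numpy as np

def softmax(b, axis):
    e = np.exp(b - b.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, eps=1e-9):
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, d=3):
    """Route predictions u_hat of shape (num_lower_types, num_parents, dim)
    to parent capsules over d iterations; returns v of shape (num_parents, dim)."""
    T, P, D = u_hat.shape
    b = np.zeros((T, P))                          # routing logits, start at 0
    for _ in range(d):
        r = softmax(b, axis=1)                    # r <- softmax(b)
        s = (r[..., None] * u_hat).sum(axis=0)    # s_xy <- sum_i r * u_hat
        v = squash(s)                             # v_xy <- squash(s_xy)
        b = b + (u_hat * v[None]).sum(axis=-1)    # b <- b + u_hat . v
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(4, 2, 8))                # 4 capsule types, 2 parents
v = dynamic_routing(u_hat, d=3)
print(v.shape)                                    # (2, 8)
print(np.all(np.linalg.norm(v, axis=-1) < 1.0))   # squash keeps norms below 1
```

Each iteration sharpens the routing coefficients toward parents whose outputs agree with the predictions, which is what the logit update b ← b + û·v accomplishes.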
It will be appreciated that a large number of training samples are required before image segmentation using the neural network model, each of which identifies a target segmentation region, and then the neural network model is trained using the training samples to obtain a trained neural network model. After the trained neural network model is obtained, the trained neural network model can be used for image segmentation of the medical image to be segmented.
The neural network model of this embodiment outputs, through Softmax, a probability map that specifies the target probability for each pixel, and then obtains a threshold using the Otsu adaptive threshold algorithm, which divides the probability map into two classes with minimum within-class variance. After the class of each pixel of the medical image to be segmented is determined, the target segmentation region is determined based on morphological image processing. Optionally, the connected regions are labeled first (two adjacent pixels with the same value in the binary image are considered to belong to the same connected region); all pixels in a connected region are marked with the same value, called the "connected-region label". Next, according to the size of each connected region, regions below a size threshold are treated as background and regions at or above the threshold as target regions. Finally, a closing operation is applied to the target regions to fill small holes, connect adjacent target regions, and smooth boundaries while avoiding significant changes in their area, thereby determining the final target segmentation region. After the target segmentation region is determined, it is extracted from the medical image to be segmented.
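The Otsu step can be sketched in pure numpy. The histogram bin count and the synthetic bimodal probability map are illustrative assumptions; labeling and closing would typically use a morphology library on the resulting binary mask.

```python
import numpy as np

def otsu_threshold(p, bins=256):
    """Otsu's method: choose the split that maximises between-class variance
    (equivalently minimises within-class variance) of the probability map.
    Returns the upper edge of the optimal split bin."""
    hist, edges = np.histogram(p, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    w0 = np.cumsum(hist)                      # class-0 pixel counts
    w1 = w0[-1] - w0                          # class-1 pixel counts
    centers = (edges[:-1] + edges[1:]) / 2
    m = np.cumsum(hist * centers)             # cumulative intensity mass
    mu0 = m / np.maximum(w0, 1e-12)           # class means (guarded)
    mu1 = (m[-1] - m) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
    return edges[np.argmax(between) + 1]

# Bimodal probability map: background around 0.1, target around 0.9.
rng = np.random.default_rng(1)
p = np.concatenate([rng.normal(0.1, 0.03, 500), rng.normal(0.9, 0.03, 500)])
p = np.clip(p, 0.0, 1.0)
t = otsu_threshold(p)
print(0.1 < t < 0.9)                          # True -- threshold between modes
```

Thresholding `p > t` then yields the binary mask on which the connected-region labeling and the closing operation described above are applied.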
The trained model corresponding to the neural network of this embodiment and a trained model corresponding to the SegCaps neural network were applied to the same medical images, and statistics of the indices of the segmentation results are shown in Table 1 below. As the table shows, every index of the segmentation results of the trained model of this embodiment is better than the corresponding index of the trained SegCaps model. Dice is the similarity coefficient; Jaccard, also called the Jaccard similarity coefficient, compares similarity and diversity between finite sample sets; MSD is the mean surface distance; HD is the Hausdorff distance.
TABLE 1 statistical index summary of image segmentation results
Compared with prior-art neural network models, particularly prior-art capsule network models, the reasonable configuration of capsule and non-capsule convolutions greatly reduces the number of network parameters, thereby reducing the computational cost of image segmentation while improving its accuracy.
Example 3
Fig. 5 is a block diagram of a medical image segmentation apparatus according to a third embodiment of the present invention. The apparatus is used for executing the medical image segmentation method provided in any of the above embodiments, and the apparatus may be implemented in software or hardware. The device comprises:
an acquisition section 21 for acquiring a medical image to be segmented including a target segmentation area;
an output section 22 for inputting the medical image to be segmented into a trained neural network model to cause the trained neural network model to perform image segmentation on the medical image to be segmented to obtain a target segmented region, wherein the neural network model includes a contraction module and an expansion module, the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer composed of capsule convolutions of different steps to extract feature maps of different sizes; the expansion module is used for gradually recovering the received feature maps of different sizes from the contraction module through the capsule deconvolution layer to generate a feature map of a target size.
Wherein the acquisition section includes:
an acquisition unit for acquiring a medical image including a target divided region and determining a center point of the target divided region on the medical image;
and the determining unit is used for cutting the medical image with the center point as the center to generate a medical image to be segmented containing the target segmentation area, wherein the size of the medical image to be segmented is smaller than that of the medical image.
Wherein the determining unit includes:
a first edge image subunit, configured to perform three-dimensional fourier transform on the medical image, and perform inverse fourier transform on a first harmonic of a result of the three-dimensional fourier transform to obtain a first edge image;
an edge detection subunit, configured to perform preset edge detection on the first edge image to obtain a cross-sectional profile image of the target segmentation area;
and the central point subunit is used for taking the central point of the cross section outline in the cross section outline image as the central point of the target segmentation area on the medical image.
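The first-harmonic step above amounts to a low-pass filter in the Fourier domain: keeping only the lowest spatial frequencies and inverting yields a smooth image whose intensity transitions trace coarse region boundaries. Below is a rough 2D NumPy sketch of that idea (the patent operates on a 3D volume); the `keep` radius and the test image are illustrative assumptions.

```python
import numpy as np

def first_harmonic_image(volume: np.ndarray, keep: int = 1) -> np.ndarray:
    """Zero out all but the lowest spatial frequencies ("first harmonic")
    of the Fourier spectrum, then invert to get a smooth edge image."""
    spectrum = np.fft.fftshift(np.fft.fftn(volume))
    mask = np.zeros_like(spectrum)
    center = tuple(s // 2 for s in volume.shape)
    slices = tuple(slice(c - keep, c + keep + 1) for c in center)
    mask[slices] = 1.0  # keep a small block around the DC component
    return np.real(np.fft.ifftn(np.fft.ifftshift(spectrum * mask)))

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0            # a bright square "organ"
edge = first_harmonic_image(img)   # smooth bump centered on the organ
print(edge.shape)  # (64, 64)
```

The result is brightest over the object and falls off toward the background, which is what makes the subsequent contour detection tractable.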
The edge detection subunit is specifically configured to perform Canny edge detection on the first edge image to obtain a second edge image, and to perform Hough circle detection on the second edge image to obtain a cross-sectional contour image of the target segmentation region.
The capsule convolution layer comprises at least three groups of capsule convolution combinations, each formed by capsule convolutions of two step sizes, with the large-step-size capsule convolution positioned after the small-step-size one; the capsule deconvolution layer comprises at least three groups of capsule deconvolution combinations, each consisting of a capsule deconvolution and a capsule convolution, the capsule convolution being the small-step-size one; the number of capsule convolution combinations is the same as the number of capsule deconvolution combinations.
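The effect of this arrangement on feature-map sizes can be traced with simple output-size arithmetic: in each contraction group the small-stride capsule convolution preserves the size and the large-stride one halves it, while each expansion group doubles the size back. The kernel sizes, paddings, and input size below are illustrative assumptions; only the stride pattern follows the text.

```python
def conv_out(size: int, kernel: int, stride: int, pad: int) -> int:
    """Standard convolution output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size: int, kernel: int, stride: int, pad: int) -> int:
    """Standard transposed-convolution output-size formula."""
    return (size - 1) * stride - 2 * pad + kernel

size = 512
trace = [size]
for _ in range(3):                    # three contraction combinations
    size = conv_out(size, 5, 1, 2)    # small-stride capsule conv: size kept
    size = conv_out(size, 5, 2, 2)    # large-stride capsule conv: size halved
    trace.append(size)
for _ in range(3):                    # three expansion combinations
    size = deconv_out(size, 4, 2, 1)  # capsule deconv: size doubled
    size = conv_out(size, 5, 1, 2)    # small-stride capsule conv: size kept
    trace.append(size)
print(trace)  # [512, 256, 128, 64, 128, 256, 512]
```

The symmetric trace shows why the number of convolution combinations must match the number of deconvolution combinations: each downsampling step needs a corresponding upsampling step to restore the target size.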
The neural network model comprises a convolution network module positioned in front of the contraction module, and the convolution network module is used for sequentially carrying out two-dimensional convolution and nonlinear activation optimization on the medical image to be segmented.
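The initial convolutional stage can be sketched as one 2D convolution followed by a nonlinear activation. ReLU is used here as an illustrative assumption (the patent does not name the activation), and the kernel is a toy example.

```python
import numpy as np

def conv2d(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep-learning frameworks)."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda z: np.maximum(z, 0.0)   # nonlinear activation

x = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[-1.0, 0.0], [0.0, 1.0]])   # diagonal-difference kernel
features = relu(conv2d(x, k))
print(features.shape)  # (4, 4)
```

In the full model this stage would produce many channels whose activations are grouped into the vector-valued capsules consumed by the contraction module.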
The capsule convolution layer and the capsule deconvolution layer are used for parameter adjustment based on a dynamic routing algorithm.
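Dynamic routing (Sabour et al., "Dynamic Routing Between Capsules") iteratively strengthens the coupling between an input capsule and the output capsules its predictions agree with. The NumPy sketch below is a minimal version of that algorithm; the shapes, iteration count, and random predictions are illustrative assumptions.

```python
import numpy as np

def squash(v: np.ndarray, axis: int = -1) -> np.ndarray:
    """Nonlinearity that scales a vector's length into [0, 1)."""
    n2 = np.sum(v * v, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + 1e-9)

def dynamic_routing(u_hat: np.ndarray, iters: int = 3) -> np.ndarray:
    """u_hat: prediction vectors of shape (num_in, num_out, dim).
    Returns output capsule vectors of shape (num_out, dim)."""
    b = np.zeros(u_hat.shape[:2])                             # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over outputs
        s = np.einsum('io,iod->od', c, u_hat)                 # weighted sum of predictions
        v = squash(s)
        b += np.einsum('iod,od->io', u_hat, v)                # agreement update
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 3, 4))  # 8 input capsules, 3 output capsules, dim 4
v = dynamic_routing(u_hat)
print(v.shape)  # (3, 4)
```

Because routing replaces pooling with this agreement mechanism, the capsule layers can preserve pose information that max-pooling in a conventional U-Net discards.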
According to the technical scheme of the medical image segmentation apparatus provided by this embodiment of the invention, the medical image to be segmented is segmented by the trained neural network model to obtain the target segmentation region. The neural network model comprises a contraction module and an expansion module: the contraction module downsamples the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature maps of different sizes, and the expansion module gradually restores the feature maps of different sizes received from the contraction module through a capsule deconvolution layer to generate a feature map of a target size. The improvement of the neural network structure improves the speed, accuracy, and generality of image segmentation.
The image segmentation device provided by the embodiment of the invention can execute the image segmentation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 6 is a schematic structural diagram of an apparatus according to a fourth embodiment of the present invention. As shown in fig. 6, the apparatus includes a processor 301, a memory 302, an input device 303, and an output device 304. The number of processors 301 in the apparatus may be one or more; one processor 301 is taken as an example in fig. 6. The processor 301, memory 302, input device 303, and output device 304 may be connected by a bus or other means; a bus connection is taken as an example in fig. 6.
The memory 302 serves as a computer-readable storage medium storing software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the medical image segmentation method in the embodiments of the present invention (e.g., the acquisition section 21 and the output section 22). The processor 301 runs the software programs, instructions, and modules stored in the memory 302 to execute the various functional applications and data processing of the apparatus, i.e., to implement the medical image segmentation method described above.
The memory 302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for functionality, and the data storage area may store data created according to the use of the terminal, etc. In addition, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 302 may further include memory located remotely from the processor 301, connected to the apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 303 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device.
The output means 304 may comprise a display device such as a display screen, for example, a display screen of a user terminal.
Example V
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a medical image segmentation method, the method comprising:
acquiring a medical image to be segmented comprising a target segmentation area;
inputting the medical image to be segmented into a trained neural network model so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmented region, wherein the neural network model comprises a contraction module and an expansion module, and the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature maps of different sizes; the expansion module is configured to gradually recover the feature maps of different sizes received from the contraction module through a capsule deconvolution layer to generate a feature map of a target size.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the medical image segmentation method provided in any embodiment of the present invention.
From the above description of the embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by software together with necessary general-purpose hardware, or by hardware alone, although in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the medical image segmentation method according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the medical image segmentation apparatus, the included units and modules are divided only according to functional logic, and the division is not limited to the one described above, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Note that the above describes only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, it is not limited to them and may include many other equivalent embodiments without departing from the inventive concept, the scope of which is defined by the appended claims.

Claims (8)

1. A medical image segmentation method, comprising:
acquiring a medical image to be segmented comprising a target segmentation area;
inputting the medical image to be segmented into a trained neural network model so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmented region, wherein the neural network model comprises a contraction module and an expansion module, and the contraction module is used for downsampling the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature maps of different sizes; the expansion module is used for gradually recovering the feature maps of different sizes received from the contraction module through a capsule deconvolution layer to generate a feature map of a target size;
the acquiring the medical image to be segmented including the target segmented region comprises:
acquiring a medical image containing a target segmentation area, and determining a central point of the target segmentation area on the medical image;
cutting the medical image by taking the central point as the center to generate a medical image to be segmented comprising a target segmentation area, wherein the size of the medical image to be segmented is smaller than that of the medical image;
the determining a center point of the target segmentation region on the medical image comprises:
performing three-dimensional Fourier transform on the medical image, and performing inverse Fourier transform on first harmonic of a result of the three-dimensional Fourier transform to obtain a first edge image;
performing preset edge detection on the first edge image to obtain a cross-sectional profile image of the target segmentation area;
and taking the center point of the cross-sectional profile in the cross-sectional profile image as the center point of the target segmentation area on the medical image.
2. The method according to claim 1, wherein the cross-sectional profile of the target segmented region is circular, and the performing the preset edge detection on the first edge image to obtain the cross-sectional profile image of the target segmented region includes:
carrying out Canny edge detection on the first edge image to obtain a second edge image;
and carrying out Hough circle detection on the second edge image to obtain a cross-sectional contour image of the target segmentation area.
3. The method of any of claims 1-2, wherein the contraction module comprises at least three capsule convolution layers, each capsule convolution layer comprising at least two concatenated capsule convolutions of different step sizes, with the large-step-size capsule convolution positioned after the small-step-size capsule convolution;
the expansion module comprises at least three capsule deconvolution layers, each capsule deconvolution layer comprising at least one connected capsule convolution and capsule deconvolution, the capsule convolution being the small-step-size capsule convolution;
the number of capsule convolution layers is the same as the number of capsule deconvolution layers.
4. The method of claim 1, wherein the neural network model further comprises a convolutional network module preceding the contraction module, the convolutional network module configured to sequentially perform two-dimensional convolution and nonlinear activation optimization on the medical image to be segmented.
5. The method of claim 4, wherein the capsule convolution layer and capsule deconvolution layer are each parameter-adjusted based on a dynamic routing algorithm.
6. A medical image segmentation apparatus, comprising:
an acquisition section for acquiring a medical image to be segmented including a target segmentation region;
an output part, configured to input the medical image to be segmented into a trained neural network model, so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmented region, where the neural network model includes a contraction module and an expansion module, and the contraction module is configured to downsample the medical image to be segmented through a capsule convolution layer formed by capsule convolutions with different step sizes to extract feature maps of different sizes; the expansion module is used for gradually recovering the feature maps of different sizes received from the contraction module through a capsule deconvolution layer to generate a feature map of a target size;
the acquisition section includes:
an acquisition unit for acquiring a medical image including a target segmentation region and determining a center point of the target segmentation region on the medical image;
a determining unit configured to crop the medical image with the center point as a center to generate a medical image to be segmented including the target segmentation region, where a size of the medical image to be segmented is smaller than a size of the medical image;
wherein the determining unit includes:
a first edge image subunit, configured to perform three-dimensional fourier transform on the medical image, and perform inverse fourier transform on a first harmonic of a result of the three-dimensional fourier transform to obtain a first edge image;
an edge detection subunit, configured to perform preset edge detection on the first edge image to obtain a cross-sectional profile image of the target segmentation area;
and the central point subunit is used for taking the central point of the cross section outline in the cross section outline image as the central point of the target segmentation area on the medical image.
7. A medical image segmentation apparatus, the apparatus comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the medical image segmentation method as set forth in any one of claims 1-5.
8. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the medical image segmentation method according to any one of claims 1-5.
CN201910707182.9A 2019-08-01 2019-08-01 Medical image segmentation method, device, equipment and storage medium Active CN110570394B (en)

Priority Applications
- CN201910707182.9A (filed 2019-08-01, priority 2019-08-01): Medical image segmentation method, device, equipment and storage medium
- PCT/CN2019/110402 (filed 2019-10-10): Image segmentation method, apparatus, device, and storage medium (published as WO2021017168A1)

Publications
- CN110570394A, published 2019-12-13
- CN110570394B, granted 2023-04-28

Family ID: 68774259

Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant