WO2021017168A1 - 图像分割方法、装置、设备及存储介质 - Google Patents

图像分割方法、装置、设备及存储介质 Download PDF

Info

Publication number
WO2021017168A1
WO2021017168A1 (PCT/CN2019/110402)
Authority
WO
WIPO (PCT)
Prior art keywords
image
capsule
segmented
target
convolution
Prior art date
Application number
PCT/CN2019/110402
Other languages
English (en)
French (fr)
Inventor
胡战利
贺阳素
吴垠
梁栋
杨永峰
刘新
郑海荣
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2021017168A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the present disclosure relates to the field of image processing, for example, to an image segmentation method, apparatus, device, and storage medium.
  • Medical image segmentation methods in the related art mainly include manual segmentation methods and machine-learning-based segmentation methods, but either kind of method usually takes a long time to complete image segmentation, which limits the application of related-art image segmentation methods to the segmentation of more complex organs and tissues.
  • the present disclosure provides an image segmentation method, apparatus, device and storage medium.
  • the present disclosure provides an image segmentation method, including: acquiring an image to be segmented containing a target segmentation area; and inputting the image to be segmented into a trained neural network model for image segmentation to obtain the target segmentation area, as described below.
  • the present disclosure also provides an image segmentation device, including:
  • An acquisition module configured to acquire an image to be segmented containing the target segmentation area
  • An input module configured to input the image to be segmented into a trained neural network model for image segmentation to obtain the target segmentation area
  • the trained neural network model includes a contraction module and an expansion module
  • the contraction module is configured to down-sample the image to be segmented through a capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes, and
  • the expansion module is configured to gradually restore the feature maps of different sizes through a capsule deconvolution layer to generate a feature map of a target size.
  • the present disclosure also provides a device, including:
  • at least one processor;
  • a storage device for storing at least one program;
  • wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the image segmentation method described above.
  • the present disclosure also provides a storage medium containing computer-executable instructions, where the computer-executable instructions are used to execute the image segmentation method described above when executed by a computer processor.
  • FIG. 1 is a flowchart of an image segmentation method provided in Embodiment 1;
  • FIG. 2 is a flowchart of an image segmentation method provided in Embodiment 2;
  • FIG. 3A is a schematic diagram of an image provided in Embodiment 2;
  • FIG. 3B is a schematic diagram of a first edge image provided in Embodiment 2;
  • FIG. 3C is a schematic diagram of a second edge image provided in Embodiment 2;
  • FIG. 3D is a schematic diagram of Hough circle localization provided in Embodiment 2;
  • FIG. 4 is a flowchart of image segmentation provided in Embodiment 2;
  • FIG. 5 is a schematic diagram of the architecture of a neural network model provided in Embodiment 2;
  • FIG. 6 is a structural block diagram of an image segmentation device provided in Embodiment 4;
  • FIG. 7 is a structural block diagram of a device provided in Embodiment 5.
  • FIG. 1 is a flowchart of an image segmentation method provided in Embodiment 1 of the present disclosure.
  • the technical solution of this embodiment is applicable to the case of performing image segmentation on an image to be segmented based on a trained neural network model to obtain a target segmentation area.
  • the method can be executed by an image segmentation device provided in the present disclosure; the device can be implemented in software and/or hardware and configured in a processor.
  • the method specifically includes the following steps.
  • S101: Acquire an image to be segmented containing a target segmentation area.
  • S102: Input the image to be segmented into the trained neural network model, down-sample the image to be segmented through a capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restore the feature maps of different sizes through a capsule deconvolution layer to generate a feature map of the target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation area.
  • in the technical solution of the image segmentation method provided by this embodiment, an image to be segmented containing the target segmentation area is acquired; the image to be segmented is input into the trained neural network model, down-sampled through a capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes, and the feature maps of different sizes are gradually restored through a capsule deconvolution layer to generate a feature map of the target size, so that image segmentation is performed on the image to be segmented to obtain the target segmentation area.
  • by improving the structure of the neural network model, the model parameters are reduced, thereby improving the speed and accuracy of image segmentation by the improved neural network model.
  • FIG. 2 is a flowchart of an image segmentation method provided in Embodiment 2 of the present disclosure. As shown in FIG. 2, the image segmentation method includes the following steps.
  • S201: Acquire an image to be segmented containing a target segmentation area.
  • the image to be segmented is an image that directly participates in image segmentation; it may be a medical image to be segmented, for example, a complete medical image or a partial medical image including a target segmentation area.
  • medical images are commonly used clinical diagnostic images, such as Computed Tomography (CT) images, Magnetic Resonance Imaging (MRI) images, and Positron Emission Computed Tomography (PET) images, etc.
  • this embodiment may first crop the clinical diagnostic medical image to obtain the medical image to be segmented including the target segmentation area.
  • the medical image may be cropped with the center point of the target segmentation area on the medical image as the center, so as to generate the medical image to be segmented containing the target segmentation area.
  • the size of the medical image to be segmented is smaller than the size of the acquired medical image, and the ratio of the size of the medical image to be segmented to the size of the acquired medical image is usually determined according to the size of the target segmentation region.
  • illustratively, taking the left ventricle as an example, the size of the medical image to be segmented may be set to one half of the size of the acquired medical image.
  • for example, a typical medical image is 512×512, in which case the medical image to be segmented is 256×256.
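  • As a rough illustration (a minimal sketch, not code from the patent — the function name and boundary clamping are assumptions), the fixed-size crop around a known center point could look like this:

```python
import numpy as np

def crop_around_center(image: np.ndarray, row: int, col: int, size: int = 256) -> np.ndarray:
    """Crop a size x size window centered on (row, col), clamped to the image bounds."""
    h, w = image.shape[:2]
    half = size // 2
    top = min(max(row - half, 0), h - size)
    left = min(max(col - half, 0), w - size)
    return image[top:top + size, left:left + size]

# e.g. a 512x512 slice cropped to the 256x256 region around the detected center
patch = crop_around_center(np.zeros((512, 512)), row=250, col=260)
```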
  • the method for determining the aforementioned center point may include: first performing a three-dimensional Fourier transform on the medical image to obtain a three-dimensional Fourier transform result, and performing an inverse Fourier transform on the first harmonic of the result to obtain a first edge image; performing preset edge detection on the first edge image to obtain a cross-sectional contour image of the target segmentation area; and taking the center point of the cross-sectional contour in the cross-sectional contour image as the center point of the target segmentation area on the medical image.
  • when the heart beats, the gray value at each pixel position changes over time and varies within a large range, so the heart can be distinguished from other structures around the heart.
  • take MRI images as an example: during MRI acquisition, cardiac MRI images of multiple cardiac cycles are usually acquired, so the short-axis cardiac MRI images of a slice contain the entire cardiac cycle, and each slice image can be regarded as a two-dimensional image varying over time.
  • this embodiment therefore performs a three-dimensional Fourier transform along the time axis on each slice, where the three-dimensional Fourier transform is defined as
  $$F(T,u,v)=\sum_{t=0}^{L-1}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(t,x,y)\,e^{-j2\pi\left(\frac{Tt}{L}+\frac{ux}{M}+\frac{vy}{N}\right)}$$
  • where T is the frequency variable corresponding to the time axis t after the Fourier transform,
  • j is the imaginary unit of the Fourier formula,
  • u is the variable corresponding to the image row coordinate x after the Fourier transform,
  • v is the variable corresponding to the image column coordinate y after the Fourier transform,
  • f(t,x,y) is an L×M×N matrix,
  • x = 0, 1, …, M−1,
  • y = 0, 1, …, N−1,
  • t = 0, 1, …, L−1.
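  • A minimal numpy sketch of this step, assuming the time axis is the first axis of the slice stack and that "first harmonic" means the T = 1 plane of the 3-D spectrum (both are interpretations, not details spelled out by the patent):

```python
import numpy as np

def first_harmonic_edge_image(slices: np.ndarray) -> np.ndarray:
    """slices: L x M x N array (time, rows, cols) of one short-axis slice
    position over the cardiac cycle."""
    spectrum = np.fft.fftn(slices)        # 3-D DFT: F(T, u, v)
    first_harmonic = spectrum[1]          # keep the T = 1 (first harmonic) plane
    edge = np.fft.ifft2(first_harmonic)   # inverse transform back to (x, y)
    # |edge| is large where intensity pulses at the cardiac frequency,
    # outlining the beating heart against static surroundings.
    return np.abs(edge)
```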
  • the edge information of the first edge image may be extracted by Canny edge detection to generate a second edge image (see FIG. 3C), and the second edge image includes the edge information of the target segmentation area and the edge information of other areas; after the second edge image is generated, the center position of the target segmentation area is determined.
  • taking the left ventricle as an example, and considering that the left ventricular cross-section is approximately circular, the circle information of the second edge image is extracted by Hough circle detection to obtain the cross-sectional contour image of the target segmentation area (see FIG. 3D), and the center point of the cross-sectional contour in the cross-sectional contour image is then taken as the center point of the target segmentation area on the medical image.
  • the second edge image may contain multiple circles; to improve the accuracy of contour determination, this embodiment may first determine the P score of each detected Hough circle and then take the Hough circle with the largest P score as the cross-sectional contour image of the target segmentation area, where P is a hyperparameter.
  • after the cross-sectional contour image is determined, a Gaussian kernel function combined with the maximum of the left ventricle (LV) likelihood surface is used to determine the center point of the cross-sectional contour in the cross-sectional contour image, and a fixed-size medical image to be segmented is cropped from the medical image with this center point as the center of the left ventricle (see FIG. 4).
  • the Gaussian function is defined as
  $$f(x,y)=A\exp\!\left(-\left(\frac{(x-x_{0})^{2}}{2\sigma_{x}^{2}}+\frac{(y-y_{0})^{2}}{2\sigma_{y}^{2}}\right)\right)$$
  • where x_0 and y_0 are the center coordinates of the Hough circle,
  • σ_x and σ_y are the variances,
  • σ_x and σ_y are set to fixed values,
  • and A is the accumulated peak value of the Hough circle.
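  • A hedged OpenCV sketch of the contour localization; all threshold values here are illustrative, and taking the first returned circle is only a stand-in for the patent's P score (OpenCV orders circles by accumulator votes):

```python
import cv2
import numpy as np

def locate_lv_center(first_edge: np.ndarray) -> tuple:
    """Canny on the first edge image, then Hough circle detection; the center
    of the best-scoring circle is used as the left-ventricle center."""
    img = cv2.normalize(first_edge, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    second_edge = cv2.Canny(img, 50, 150)                 # second edge image
    circles = cv2.HoughCircles(second_edge, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                               param1=150, param2=20, minRadius=10, maxRadius=60)
    if circles is None:
        raise RuntimeError("no circular contour detected")
    x0, y0, _r = circles[0][0]    # strongest circle stands in for the max P score
    return int(x0), int(y0)
```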
  • S202: Input the image to be segmented into the trained neural network model, so that the trained neural network model performs image segmentation on the image to be segmented to obtain the target segmentation area,
  • where the trained neural network model includes a contraction module and an expansion module.
  • the contraction module is configured to down-sample the image to be segmented through a capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes,
  • and the expansion module is configured to gradually restore the feature maps of different sizes through a capsule deconvolution layer to generate a feature map of the target size.
  • to enable doctors to obtain the details of the target segmentation area, it is usually necessary to segment the image to be segmented, such as a medical image, to extract the target segmentation area from it; for this purpose, this embodiment introduces a trained neural network model and uses it to segment the medical image to be segmented. Referring to FIG. 4, specifically, the medical image to be segmented is input into the trained neural network model, so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmentation area image.
  • the contraction module of this embodiment includes at least three capsule convolution layers, and each of the at least three capsule convolution layers includes at least two connected capsule convolutions of different strides,
  • where the large-stride capsule convolution is located after the small-stride capsule convolution.
  • the expansion module includes at least three capsule deconvolution layers; each of the at least three capsule deconvolution layers includes at least a connected capsule convolution and a capsule deconvolution, and the capsule convolution is a small-stride capsule convolution.
  • the number of capsule convolution layers is the same as the number of capsule deconvolution layers; the stride of the large-stride capsule convolution can be 2, and the stride of the small-stride capsule convolution can be 1.
  • the structure of the trained neural network model of this embodiment is similar to the U-net segmentation framework, but unlike U-net it replaces the convolution and pooling layers with capsule convolution layers and uses capsule deconvolution layers for the deconvolution operations.
  • the capsule convolution layers form the contraction stage,
  • and the capsule deconvolution layers form the expansion stage.
  • the contraction stage is composed of the capsule convolution layers used to extract image features, and each capsule convolution layer uses a 5×5 convolution kernel; after each stride-1 capsule convolution, the feature map is down-sampled by a stride-2 capsule convolution, so that the network can learn features globally.
  • each step of the expansion stage includes up-sampling of the feature map and a 4×4 capsule deconvolution, which halves the number of feature channels and concatenates the result with the corresponding feature map from the contraction path; finally, three layers of 1×1 convolutions are used to obtain the target segmentation area.
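  • The following shape-level PyTorch sketch shows one contraction/expansion step of such an architecture. The routing step of the capsule convolution is deliberately omitted and replaced by a plain strided convolution, so this only illustrates the tensor flow (stride-1 then stride-2 down, 4×4 deconvolution up, skip connection, 1×1 output convolution), not the capsule mechanics themselves:

```python
import torch
import torch.nn as nn

class CapsuleConvStub(nn.Module):
    """Stand-in for a capsule convolution: a strided 5x5 conv without routing."""
    def __init__(self, cin, cout, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, kernel_size=5, stride=stride, padding=2)
    def forward(self, x):
        return self.conv(x)

x = torch.zeros(1, 1, 128, 128)
stem = nn.Conv2d(1, 16, 5, padding=2)                # 2-D conv stem -> 16 feature maps
f0 = stem(x)                                         # (1, 16, 128, 128)
d1 = CapsuleConvStub(16, 32, stride=1)(f0)           # stride-1 "capsule" conv
d1 = CapsuleConvStub(32, 32, stride=2)(d1)           # stride-2 conv downsamples to 64x64
u1 = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)(d1)  # 4x4 deconv back to 128x128
merged = torch.cat([u1, f0], dim=1)                  # skip connection from contraction path
out = nn.Conv2d(32, 1, 1)(merged)                    # final 1x1 conv -> segmentation map
print(out.shape)                                     # torch.Size([1, 1, 128, 128])
```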
  • before the contraction module there is also a convolutional network module.
  • the convolutional network module includes one convolution layer, a two-dimensional convolution layer, so that an image input to the trained neural network model passes through the two-dimensional convolution layer to produce 16 feature maps of the same size, forming a four-dimensional (128×128×1×16) tensor that is used as the input of the contraction stage.
  • the trained neural network model shown in FIG. 5 has 16 layers in total, including 4 convolution layers, 9 capsule convolution layers and 3 capsule deconvolution layers.
  • in practice, the numbers of convolution layers, capsule convolution layers and capsule deconvolution layers can be adjusted as needed, provided that the number of capsule convolution layers equals the number of capsule deconvolution layers.
  • in the technical solution provided by this embodiment, the image to be segmented is segmented by the trained neural network model to obtain the target segmentation area.
  • the trained neural network model includes a contraction module and an expansion module.
  • the contraction module is configured to down-sample the image to be segmented through a capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes.
  • the expansion module is configured to gradually restore the feature maps of different sizes through a capsule deconvolution layer to generate a feature map of the target size.
  • the third embodiment of the present disclosure provides an image segmentation method. On the basis of the second embodiment above, a description of the structure of the trained neural network model is added.
  • the convolutional neural network of this embodiment includes a convolutional network module, a contraction module, and an expansion module, wherein the convolutional network module is configured to sequentially perform two-dimensional convolution and nonlinear activation optimization on the image to be segmented.
  • the two-dimensional convolution is a linear operation with the following formula:
  $$S_{l}(i,j)=(I*K_{l})(i,j)=\sum_{m}\sum_{n} I(i+m,\,j+n)\,K_{l}(m,n)\qquad(3)$$
  • where i and j are the pixel positions of the image to be segmented,
  • I is the image to be segmented,
  • K_l is the l-th convolution kernel,
  • m is the width of the convolution kernel,
  • n is the height of the convolution kernel.
  • after the two-dimensional convolution, the result is optimized by a nonlinear activation, f(x) = max(0, x) with x = S_l(i, j),
  • where S_l(i, j) is the output of the l-th dimension of the preceding two-dimensional convolution and f(x) is the output of the nonlinear activation.
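  • A small numpy/scipy illustration of these two formulas (the ReLU form of the activation is an assumption here; correlate2d is used because deep-learning "convolution" is conventionally cross-correlation):

```python
import numpy as np
from scipy.signal import correlate2d

I = np.arange(25, dtype=float).reshape(5, 5)   # toy image to be segmented
K = np.array([[1.0, 0.0], [0.0, -1.0]])        # one convolution kernel K_l
S = correlate2d(I, K, mode="valid")            # S_l(i, j): the linear 2-D operation
f = np.maximum(0.0, S)                         # nonlinear activation (assumed ReLU)
```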
  • since the left ventricle has the characteristics of a two-dimensional structure, a four-dimensional (128×128×1×16) tensor, i.e., 16 feature maps, is obtained after inputting the image to be segmented into the convolutional network module, and these 16 feature maps are used as the input of the contraction module.
  • the core of the capsule convolution is the best match between the outputs from a low-level capsule convolutional layer and the outputs of a high-level capsule convolutional layer.
  • in the l-th capsule convolutional layer there is a set of capsule types; for each capsule type there exists
  • C = {C_11, …, C_1w, …, C_h1, …, C_hw}, which are h×w z-dimensional capsules.
  • in the (l+1)-th capsule convolutional layer, each capsule receives the vector predictions û, computed as û = W·u, with output S_xy = Σ r·û,
  • where S_xy is the output value of the convolution computation, W is the matrix weight, u is a low-level feature, r is the routing coefficient in the routing algorithm, and û is the prediction vector corresponding to the capsule.
  • the output is then passed through a nonlinear activation calculated as
  $$V_{xy}=\frac{\|S_{xy}\|^{2}}{1+\|S_{xy}\|^{2}}\cdot\frac{S_{xy}}{\|S_{xy}\|}$$
  • where V_xy is the value after the activation function is applied to the output,
  • and ||S_xy|| is the norm of S_xy.
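  • A direct numpy transcription of the squash nonlinearity above (the epsilon guard is an implementation detail added here to avoid division by zero):

```python
import numpy as np

def squash(s: np.ndarray, axis: int = -1, eps: float = 1e-9) -> np.ndarray:
    """V = (|S|^2 / (1 + |S|^2)) * (S / |S|): short vectors shrink toward 0,
    long vectors approach unit length, and the direction is preserved."""
    norm = np.linalg.norm(s, axis=axis, keepdims=True)
    return (norm ** 2 / (1.0 + norm ** 2)) * (s / (norm + eps))
```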
  • in the expansion module, the capsule deconvolution layer zero-fills the input according to the stride and zero-pads the boundary before performing the convolution operation.
  • the convolution formula is shown in formula (3).
  • the parameters of the capsule deconvolution layer in this embodiment are also updated based on the dynamic routing algorithm.
  • for the dynamic routing algorithm, the parameters are d, l, k_h and k_w,
  • where d is the number of routing iterations,
  • k_h is the row of the image,
  • and k_w is the column of the image; for all capsule types within k_h × k_w in the l-th layer,
  • the capsule xy is centered at (x, y) in the (l+1)-th layer.
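  • A per-position sketch of the dynamic routing loop under the standard capsule-routing formulation (the patent renders its exact update steps as images, so this follows the conventional algorithm rather than a verbatim transcription):

```python
import numpy as np

def squash(s, eps=1e-9):
    n = np.linalg.norm(s)
    return (n ** 2 / (1.0 + n ** 2)) * (s / (n + eps))

def dynamic_routing(u_hat: np.ndarray, d: int = 3) -> np.ndarray:
    """u_hat: (num_capsule_types, out_dim) prediction vectors aimed at one
    output capsule xy; d is the number of routing iterations."""
    b = np.zeros(u_hat.shape[0])              # routing logits, start uniform
    v = np.zeros(u_hat.shape[1])
    for _ in range(d):
        r = np.exp(b) / np.exp(b).sum()       # routing coefficients (softmax)
        s = (r[:, None] * u_hat).sum(axis=0)  # weighted sum S_xy
        v = squash(s)                         # activated output V_xy
        b = b + u_hat @ v                     # reward predictions that agree with V_xy
    return v

v = dynamic_routing(np.random.randn(4, 8))    # 4 capsule types, 8-D output capsule
```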
  • before using the neural network model for image segmentation, it is necessary to obtain a large number of training samples, each annotated with a target segmentation area, and then use the training samples to train the neural network model to obtain the trained neural network model. After the trained neural network model is obtained, it can be used to segment the image to be segmented.
  • the neural network model of this embodiment outputs a probability map through a Softmax activation function, which specifies the target probability of each pixel, and then uses the Otsu adaptive threshold algorithm to obtain a threshold that divides the probability map into the two classes with the smallest variance.
  • after the class of each pixel is determined, the target segmentation area is determined based on morphological image processing. Specifically, the connected regions are first labeled (if two pixels are adjacent, the two pixels are considered to be in mutually connected regions, or equivalently the two pixels have the same value in the binary image); all pixels in a connected region are marked with the same value, called the "connected region label".
  • next, according to the size of each connected region, a region below the threshold is regarded as a background region,
  • and a region at or above the threshold is regarded as a target region;
  • finally, a closing operation is applied to the target region to fill the small holes in it; adjacent target regions are connected and the boundary is smoothed while significant changes to other regions are avoided, thereby determining the final target segmentation area.
  • after the target segmentation area is determined, it is extracted from the image to be segmented.
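  • A scikit-image sketch of this post-processing chain (the minimum region size is an illustrative stand-in for the patent's size threshold):

```python
import numpy as np
from skimage import filters, measure, morphology

def postprocess(prob_map: np.ndarray, min_size: int = 64) -> np.ndarray:
    """Otsu threshold on the probability map, connected-region labeling,
    removal of small (background-sized) regions, and a closing that fills
    small holes in the target region."""
    mask = prob_map >= filters.threshold_otsu(prob_map)
    labels = measure.label(mask)                          # connected-region labels
    mask = morphology.remove_small_objects(labels, min_size=min_size) > 0
    return morphology.closing(mask, morphology.disk(3))   # fill small holes
```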
  • the trained neural network model corresponding to the neural network model of this embodiment and the trained neural network model corresponding to the SegCaps neural network model performed image segmentation processing on the same images, and each index of the segmentation results was counted;
  • the data are shown in Table 1 below. It can be seen from Table 1 that each index of the image segmentation results of the trained neural network model corresponding to the neural network model of the embodiments of the present disclosure is better than the corresponding index of the image segmentation results of the trained neural network model corresponding to the SegCaps neural network.
  • here Dice is the similarity, and Jaccard, also called the Jaccard similarity coefficient, is used to compare the similarity and difference between finite sample sets.
  • MSD is the Mean Surface Distance (MSD),
  • HD is the Hausdorff Distance (HD),
  • ED is the end of diastole,
  • and ES is the end of systole.
  • compared with related-art neural network models, especially related-art capsule network models, the reasonable configuration of capsule convolutions and non-capsule convolutions greatly reduces the network model parameters, thereby reducing the amount of computation for image segmentation while improving the accuracy of image segmentation.
  • FIG. 6 is a structural block diagram of an image segmentation device provided in Embodiment 4 of the present disclosure.
  • the device is used to execute the image segmentation method provided in any of the foregoing embodiments, and the device can be implemented in software or hardware.
  • the device includes an acquisition module 21 and an input module 22.
  • the obtaining module 21 is configured to obtain an image to be segmented containing the target segmentation area;
  • the input module 22 is configured to input the image to be segmented into the trained neural network model for image segmentation to obtain the target segmentation area;
  • the trained neural network model includes a contraction module and an expansion module
  • the contraction module is configured to down-sample the image to be segmented through the capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes, and
  • the expansion module is configured to gradually restore the feature maps of different sizes through the capsule deconvolution layer to generate a feature map of the target size.
  • the obtaining module 21 includes:
  • the acquiring unit is configured to acquire an image containing the target segmentation area, and determine the center point of the target segmentation area on the image;
  • the determining unit is configured to crop the image with the center point as the center to generate an image to be segmented containing the target segmentation area, wherein the size of the image to be segmented is smaller than the size of the image.
  • the determining unit includes:
  • the first edge image subunit is configured to perform three-dimensional Fourier transform on the image to obtain a three-dimensional Fourier transform result, and perform inverse Fourier transform on the first harmonic of the three-dimensional Fourier transform result To obtain the first edge image;
  • An edge detection subunit configured to perform preset edge detection on the first edge image to obtain a cross-sectional contour image of the target segmented area
  • the center point subunit is set to use the center point of the cross-sectional contour in the cross-sectional contour image as the center point of the target segmentation area on the image.
  • the edge detection subunit is configured to perform Canny edge detection on the first edge image to obtain a second edge image; and perform Hough circle detection on the second edge image to obtain a cross-sectional contour image of the target segmentation area.
  • the contraction module includes at least three capsule convolution layers, each of which is a capsule convolution combination composed of capsule convolutions of two strides, with the large-stride capsule convolution located after the small-stride capsule convolution;
  • the expansion module includes at least three capsule deconvolution layers, each of which is a capsule deconvolution combination composed of a capsule deconvolution and a capsule convolution,
  • where the capsule convolution is a small-stride capsule convolution; the number of capsule convolution combinations is the same as the number of capsule deconvolution combinations.
  • the neural network model also includes a convolutional network module located before the contraction module, and the convolutional network module is configured to sequentially perform two-dimensional convolution and nonlinear activation optimization on the image to be segmented.
  • both the capsule convolutional layer and the capsule deconvolutional layer have their parameters adjusted based on the dynamic routing algorithm.
  • the technical solution of the image segmentation device includes an acquisition module and an input module.
  • the acquisition module is configured to acquire the image to be segmented containing the target segmentation area,
  • and the input module is configured to input the image to be segmented into the trained neural network model for image segmentation to obtain the target segmentation area; the trained neural network model includes a contraction module and an expansion module.
  • the contraction module is configured to down-sample the image to be segmented through the capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes,
  • and the expansion module is configured to gradually restore the feature maps of different sizes through the capsule deconvolution layer to generate a feature map of the target size.
  • the improvement of the neural network model structure improves the image segmentation speed, accuracy and generality of the neural network model.
  • the image segmentation device provided by the embodiment of the present disclosure can execute the image segmentation method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the image segmentation method.
  • FIG. 7 is a schematic structural diagram of a device provided by Embodiment 5 of the present disclosure.
  • the device includes a processor 301, a memory 302, an input device 303, and an output device 304.
  • the number of processors 301 in the device may be at least one, and one processor 301 is taken as an example in FIG. 7.
  • the processor 301, the memory 302, the input device 303, and the output device 304 in the device may be connected by a bus or other methods. In FIG. 7, the connection by a bus is taken as an example.
  • the memory 302, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the image segmentation method in the embodiments of the present disclosure (for example, the acquisition module 21 and the input module 22).
  • the processor 301 executes each functional application and data processing of the device by running the software programs, instructions, and modules stored in the memory 302, that is, realizes the aforementioned image segmentation method.
  • the memory 302 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, etc.
  • the memory 302 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 302 may further include a memory remotely provided with respect to the processor 301, and these remote memories may be connected to the device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 303 can be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 304 may include a display device such as a display screen, for example, a display screen of a user terminal.
  • the sixth embodiment of the present disclosure also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute an image segmentation method, the method including: acquiring an image to be segmented containing a target segmentation area; and inputting the image to be segmented into a trained neural network model, down-sampling it through a capsule convolution layer composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restoring the feature maps of different sizes through a capsule deconvolution layer to generate a feature map of a target size, so as to obtain the target segmentation area.
  • the storage medium containing computer-executable instructions provided by the embodiments of the present disclosure is not limited to the method operations described above; its instructions can also execute related operations in the image segmentation method provided by any embodiment of the present disclosure.
  • the present disclosure can be implemented by software plus the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present disclosure can be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a computer floppy disk, Read-Only Memory (ROM), Random Access Memory (RAM), flash memory (FLASH), hard disk or optical disk, and includes several instructions for making a computer device (which may be a personal computer, a server, or a network device, etc.) execute the image segmentation method described in each embodiment of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

An image segmentation method, apparatus, device and storage medium, the method including: acquiring an image to be segmented containing a target segmentation region (S101); inputting the image to be segmented into a trained neural network model, down-sampling the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restoring the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region (S102).

Description

Image segmentation method, apparatus, device and storage medium
This application claims priority to Chinese patent application No. 201910707182.9, filed with the Chinese Patent Office on August 1, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing, for example, to an image segmentation method, apparatus, device and storage medium.
Background
To facilitate clinical diagnosis by doctors, it is usually necessary to segment an image, such as a medical image, so as to extract the target organ or tissue from the original medical image, making it easier for doctors to obtain the detailed information of the target organ or tissue and thereby improving the accuracy of clinical diagnosis.
Medical image segmentation methods in the related art mainly include manual segmentation methods and machine-learning-based segmentation methods, but either kind of method usually takes a long time to complete image segmentation, which limits the application of related-art image segmentation methods to the segmentation of more complex organs and tissues.
Summary
The present disclosure provides an image segmentation method, apparatus, device and storage medium.
The present disclosure provides an image segmentation method, including:
acquiring an image to be segmented containing a target segmentation region;
inputting the image to be segmented into a trained neural network model, down-sampling the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restoring the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region.
The present disclosure also provides an image segmentation apparatus, including:
an acquisition module configured to acquire an image to be segmented containing a target segmentation region;
an input module configured to input the image to be segmented into a trained neural network model for image segmentation to obtain the target segmentation region;
wherein the trained neural network model includes a contraction module and an expansion module;
the contraction module is configured to down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and
the expansion module is configured to gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size.
The present disclosure also provides a device, including:
at least one processor;
a storage apparatus configured to store at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the image segmentation method described above.
The present disclosure also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image segmentation method described above.
Brief Description of the Drawings
FIG. 1 is a flowchart of an image segmentation method provided in Embodiment 1;
FIG. 2 is a flowchart of an image segmentation method provided in Embodiment 2;
FIG. 3A is a schematic diagram of an image provided in Embodiment 2;
FIG. 3B is a schematic diagram of a first edge image provided in Embodiment 2;
FIG. 3C is a schematic diagram of a second edge image provided in Embodiment 2;
FIG. 3D is a schematic diagram of Hough circle localization provided in Embodiment 2;
FIG. 4 is a flowchart of image segmentation provided in Embodiment 2;
FIG. 5 is a schematic architecture diagram of a neural network model provided in Embodiment 2;
FIG. 6 is a structural block diagram of an image segmentation apparatus provided in Embodiment 4;
FIG. 7 is a structural block diagram of a device provided in Embodiment 5.
Detailed Description
Embodiment 1
FIG. 1 is a flowchart of an image segmentation method provided in Embodiment 1 of the present disclosure. The technical solution of this embodiment is applicable to the case of performing image segmentation on an image to be segmented based on a trained neural network model to obtain a target segmentation region. The method may be executed by the image segmentation apparatus provided in the present disclosure, which may be implemented in software and/or hardware and configured in a processor. The method specifically includes the following steps.
S101: Acquire an image to be segmented containing a target segmentation region.
S102: Input the image to be segmented into a trained neural network model, down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region.
In the technical solution of the image segmentation method provided by this embodiment, an image to be segmented containing a target segmentation region is acquired; the image to be segmented is input into a trained neural network model, down-sampled through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and the feature maps of different sizes are gradually restored through capsule deconvolution layers to generate a feature map of a target size, so that image segmentation is performed on the image to be segmented to obtain the target segmentation region. The improvement of the neural network model structure reduces the model parameters, thereby improving the speed and accuracy of image segmentation by the improved neural network model.
Embodiment 2
FIG. 2 is a flowchart of an image segmentation method provided in Embodiment 2 of the present disclosure. As shown in FIG. 2, the image segmentation method includes the following steps.
S201: Acquire an image to be segmented containing a target segmentation region.
The image to be segmented is the image that directly participates in image segmentation; it may be a medical image to be segmented, for example, a complete medical image or a partial medical image including the target segmentation region. Medical images here are commonly used clinical diagnostic images, such as Computed Tomography (CT) images, Magnetic Resonance Imaging (MRI) images and Positron Emission Computed Tomography (PET) images. For ease of description, the following takes medical images as an example.
For the above partial medical image including the target segmentation region, this embodiment may first crop the clinical diagnostic medical image to obtain the medical image to be segmented including the target segmentation region. Specifically, the medical image may be cropped with the center point of the target segmentation region on the medical image as the center, so as to generate the medical image to be segmented containing the target segmentation region. It can be understood that the size of the medical image to be segmented is smaller than that of the acquired medical image, and the ratio between the two sizes is usually determined according to the size of the target segmentation region. Illustratively, taking the left ventricle as an example, the size of the medical image to be segmented may be set to one half of that of the acquired medical image; for instance, a typical medical image is 512×512, in which case the medical image to be segmented is 256×256.
In an embodiment, the aforementioned center point may be determined as follows: first perform a three-dimensional Fourier transform on the medical image to obtain a three-dimensional Fourier transform result, and perform an inverse Fourier transform on the first harmonic of the result to obtain a first edge image; perform preset edge detection on the first edge image to obtain a cross-sectional contour image of the target segmentation region; and take the center point of the cross-sectional contour in the cross-sectional contour image as the center point of the target segmentation region on the medical image.
It can be understood that, as the heart beats, the gray value at each pixel position changes over time and varies within a large range, which allows the heart to be distinguished from the other structures around it. Taking MRI images as an example, cardiac MRI images covering multiple cardiac cycles are usually acquired, so the short-axis cardiac MRI images of a slice contain the entire cardiac cycle, and each slice image can be regarded as a two-dimensional image varying over time. This embodiment therefore performs a three-dimensional Fourier transform along the time axis on each slice, where the three-dimensional Fourier transform is defined as

$$F(T,u,v)=\sum_{t=0}^{L-1}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(t,x,y)\,e^{-j2\pi\left(\frac{Tt}{L}+\frac{ux}{M}+\frac{vy}{N}\right)}\qquad(1)$$

where T is the frequency variable corresponding to the time axis t after the Fourier transform, j is the imaginary unit of the Fourier formula, u is the variable corresponding to the image row coordinate x after the Fourier transform, v is the variable corresponding to the image column coordinate y after the Fourier transform, f(t,x,y) is an L×M×N matrix, x = 0, 1, …, M−1, y = 0, 1, …, N−1, and t = 0, 1, …, L−1.
Since the periodic motion of the heart occurs at the same frequency, after the three-dimensional Fourier transform of the MRI image (see FIG. 3A), applying the inverse Fourier transform to the first harmonic of the transform result yields a first edge image carrying contour information (see FIG. 3B).
In an embodiment, after the first edge image is obtained, the edge information of the first edge image may be extracted by Canny edge detection to generate a second edge image (see FIG. 3C), which includes the edge information of the target segmentation region as well as the edge information of other regions. After the second edge image is generated, the center position of the target segmentation region is determined.
Taking the left ventricle as an example, and considering that the left ventricular cross-section is approximately circular, this embodiment extracts the circle information of the second edge image by Hough circle detection to obtain the cross-sectional contour image of the target segmentation region (see FIG. 3D), and then takes the center point of the cross-sectional contour in that image as the center point of the target segmentation region on the medical image. It can be understood that the second edge image may contain multiple circles; to improve the accuracy of contour determination, this embodiment may first determine the P score of each detected Hough circle and then take the Hough circle with the largest P score as the cross-sectional contour image of the target segmentation region, where P is a hyperparameter. After the cross-sectional contour image of the target segmentation region is determined, a Gaussian kernel function combined with the maximum of the left ventricle (LV) likelihood surface is used to determine the center point of the cross-sectional contour in the cross-sectional contour image, and a fixed-size medical image to be segmented is cropped from the medical image with this point as the center of the left ventricle (see FIG. 4).
The Gaussian function is defined as

$$f(x,y)=A\exp\!\left(-\left(\frac{(x-x_{0})^{2}}{2\sigma_{x}^{2}}+\frac{(y-y_{0})^{2}}{2\sigma_{y}^{2}}\right)\right)\qquad(2)$$

where x_0 and y_0 are the center coordinates of the Hough circle, σ_x and σ_y are the variances and are set to fixed values, and A is the accumulated peak value of the Hough circle.
S202: Input the image to be segmented into the trained neural network model, so that the trained neural network model performs image segmentation on the image to be segmented to obtain the target segmentation region, where the trained neural network model includes a contraction module and an expansion module, the contraction module is configured to down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and the expansion module is configured to gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size.
To enable doctors to obtain the detailed information of the target segmentation region, it is usually necessary to segment the image to be segmented, such as a medical image to be segmented, so as to extract the target segmentation region from it. For this purpose, this embodiment introduces a trained neural network model and uses it to segment the medical image to be segmented. Referring to FIG. 4, specifically, the medical image to be segmented is input into the trained neural network model, so that the trained neural network model performs image segmentation on the medical image to be segmented to obtain the target segmentation region image.
As shown in FIG. 5, the contraction module of this embodiment includes at least three capsule convolution layers, each of which includes at least two connected capsule convolutions of different strides, with the large-stride capsule convolution located after the small-stride one. The expansion module includes at least three capsule deconvolution layers, each of which includes at least a connected capsule convolution and capsule deconvolution, where the capsule convolution is a small-stride capsule convolution. The number of capsule convolution layers is the same as the number of capsule deconvolution layers. The stride of the large-stride capsule convolution may be 2, and that of the small-stride capsule convolution may be 1.
Illustratively, as shown in FIG. 5, the structure of the trained neural network model of this embodiment is similar to the U-net segmentation framework, but unlike U-net it replaces the convolution and pooling layers with capsule convolution layers and uses capsule deconvolution layers for the deconvolution operations. The capsule convolution layers form the contraction stage and the capsule deconvolution layers form the expansion stage; the contraction stage consists of capsule convolution layers for extracting image features, each using a 5×5 convolution kernel. After each stride-1 capsule convolution, the feature map is down-sampled by a stride-2 capsule convolution, so that the network can learn features globally. Each step of the expansion stage includes up-sampling of the feature map and a 4×4 capsule deconvolution, which halves the number of feature channels and concatenates the result with the corresponding feature map from the contraction path. Finally, three layers of 1×1 convolutions are used to obtain the target segmentation region.
In an embodiment, a convolutional network module precedes the contraction module. The convolutional network module includes one convolution layer, a two-dimensional convolution layer, so that an image input to the trained neural network model passes through this two-dimensional convolution layer to produce 16 feature maps of the same size, forming a four-dimensional (128×128×1×16) tensor that serves as the input of the contraction stage. The trained neural network model shown in FIG. 5 has 16 layers in total, including 4 convolution layers, 9 capsule convolution layers and 3 capsule deconvolution layers. It can be understood that, in practice, the numbers of convolution layers, capsule convolution layers and capsule deconvolution layers can be adjusted as needed, provided that the number of capsule convolution layers equals the number of capsule deconvolution layers.
In the technical solution of the image segmentation method provided by the embodiments of the present disclosure, the image to be segmented is segmented by the trained neural network model to obtain the target segmentation region. Specifically, the trained neural network model includes a contraction module and an expansion module; the contraction module is configured to down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and the expansion module is configured to gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size. The improvement of the neural network model structure reduces the model parameters, thereby improving the speed and accuracy of image segmentation by the improved neural network model.
Embodiment 3
Embodiment 3 of the present disclosure provides an image segmentation method. On the basis of Embodiment 2 above, a description of the structure of the trained neural network model is added.
The convolutional neural network of this embodiment includes a convolutional network module, a contraction module and an expansion module, where the convolutional network module is configured to sequentially apply two-dimensional convolution and nonlinear activation optimization to the image to be segmented. The two-dimensional convolution is a linear operation with the following formula:

$$S_{l}(i,j)=(I*K_{l})(i,j)=\sum_{m}\sum_{n} I(i+m,\,j+n)\,K_{l}(m,n)\qquad(3)$$

where i and j are the pixel positions of the image to be segmented, I is the image to be segmented, K_l is the l-th convolution kernel, m is the width of the convolution kernel, and n is the height of the convolution kernel. After the two-dimensional convolution, the result is optimized by a nonlinear activation, whose formula is

$$f(x)=\max(0,\,x),\quad x=S_{l}(i,j)\qquad(4)$$

where S_l(i,j) is the output of the l-th dimension of the preceding two-dimensional convolution and f(x) is the output of the nonlinear activation.
Since the left ventricle has the characteristics of a two-dimensional structure, inputting the image to be segmented into the convolutional network module yields a four-dimensional (128×128×1×16) tensor, i.e., 16 feature maps, and these 16 feature maps are used as the input of the contraction module.
For the contraction module, the core of the capsule convolution is the best match between the outputs from a low-level capsule convolutional layer and the outputs of a high-level capsule convolutional layer. In the l-th capsule convolutional layer, there is a set of capsule types $T^{l}=\{t_{1},\dots,t_{k}\}$. For each capsule type, there exists C = {C_11, …, C_1w, …, C_h1, …, C_hw}, which are h×w z-dimensional capsules. In the (l+1)-th capsule convolutional layer, each capsule receives the vector predictions $\hat{u}_{xy|t_{i}}$, where

$$\hat{u}_{xy|t_{i}}=W_{t_{i}}\,u_{t_{i}},\qquad S_{xy}=\sum_{t_{i}} r_{t_{i}|xy}\,\hat{u}_{xy|t_{i}}$$

where S_xy is the output value of the convolution computation, $W_{t_{i}}$ is the matrix weight, $u_{t_{i}}$ is a low-level feature, $r_{t_{i}|xy}$ is the routing coefficient in the routing algorithm, and $\hat{u}_{xy|t_{i}}$ is the vector corresponding to the capsule, where $r_{t_{i}|xy}$ is calculated as

$$r_{t_{i}|xy}=\frac{\exp(b_{t_{i}|xy})}{\sum_{k}\exp(b_{t_{i}|k})}$$

where k is the number of capsule types and $b_{t_{i}|xy}$ is the parameter that adjusts the routing coefficient.
The nonlinear transformation formula is as follows:

$$V_{xy}=\frac{\|S_{xy}\|^{2}}{1+\|S_{xy}\|^{2}}\cdot\frac{S_{xy}}{\|S_{xy}\|}$$

where V_xy is the value of the output after the activation function is applied, and ||S_xy|| is the norm of S_xy.
In the expansion module, the capsule deconvolution layer zero-fills the input according to the stride and zero-pads the boundary before performing the convolution operation; for the convolution formula, see formula (3). The parameters of the capsule deconvolution layer in this embodiment are likewise updated based on the dynamic routing algorithm.
For the dynamic routing algorithm, the parameters are $\hat{u}_{xy|t_{i}}$, d, l, k_h and k_w, where $\hat{u}_{xy|t_{i}}$ is the output of the capsule convolution of the low-level layer l, d is the number of routing iterations, k_h is the row of the image, and k_w is the column of the image. For all capsule types $t_{i}$ within k_h×k_w of the l-th layer, the capsule xy is centered at (x, y) of the (l+1)-th layer. Each parameter of the neural network model of this embodiment is then updated as follows, where the arrow ← denotes assignment:
initialize the routing logits: $b_{t_{i}|xy} \leftarrow 0$;
at the d-th iteration, for the capsule types of layer l: $r_{t_{i}} \leftarrow \mathrm{softmax}(b_{t_{i}})$;
for all capsule convolutions xy of layer l+1: $S_{xy} \leftarrow \sum_{t_{i}} r_{t_{i}|xy}\,\hat{u}_{xy|t_{i}}$;
for all capsule convolutions xy of layer l+1: $V_{xy} \leftarrow \mathrm{squash}(S_{xy})$;
for all capsule types $t_{i}$ of layer l and all capsules xy of layer l+1: $b_{t_{i}|xy} \leftarrow b_{t_{i}|xy} + \hat{u}_{xy|t_{i}} \cdot V_{xy}$.
Finally, V_xy is returned.
It can be understood that, before the neural network model is used for image segmentation, a large number of training samples need to be acquired, each of which is annotated with a target segmentation region, and the training samples are then used to train the neural network model to obtain the trained neural network model. After the trained neural network model is obtained, it can be used to perform image segmentation on the image to be segmented.
The neural network model of this embodiment outputs a probability map through a Softmax activation function, which specifies the target probability of each pixel, and then uses the Otsu adaptive threshold algorithm to obtain a threshold; this algorithm divides the probability map into the two classes with the smallest variance. After the class of each pixel of the image to be segmented is determined, the target segmentation region is determined based on morphological image processing. Specifically, the connected regions are first labeled (if two pixels are adjacent, they are considered to be in mutually connected regions, or equivalently the two pixels have the same value in the binary image); all pixels in a connected region are marked with the same value, called the "connected region label". Next, according to the size of each connected region, a region below the threshold is regarded as a background region and a region at or above the threshold is regarded as a target region; finally, a closing operation is applied to the target region to fill the small holes in it. Adjacent target regions are connected and the boundary is smoothed while significant changes to other regions are avoided, thereby determining the final target segmentation region. After the target segmentation region is determined, it is extracted from the image to be segmented.
Based on the trained neural network model corresponding to the neural network model of this embodiment and the trained neural network model corresponding to the SegCaps neural network model, image segmentation was performed on the same images, and each index of the segmentation results was counted, as shown in Table 1 below. As can be seen from Table 1, each index of the image segmentation results of the trained neural network model corresponding to the neural network model of the embodiments of the present disclosure is better than the corresponding index of the image segmentation results of the trained neural network model corresponding to the SegCaps network. Here Dice is the similarity; Jaccard, also called the Jaccard similarity coefficient, is used to compare the similarity and difference between finite sample sets; MSD is the Mean Surface Distance; HD is the Hausdorff Distance; ED is the end of diastole; and ES is the end of systole.
Table 1. Summary of statistical indices of the image segmentation results
Compared with related-art neural network models, especially related-art capsule network models, the reasonable configuration of capsule convolutions and non-capsule convolutions greatly reduces the network model parameters, thereby reducing the amount of computation for image segmentation while improving the accuracy of image segmentation.
Embodiment 4
FIG. 6 is a structural block diagram of an image segmentation apparatus provided in Embodiment 4 of the present disclosure. The apparatus is used to execute the image segmentation method provided in any of the above embodiments, and the apparatus may be implemented in software or hardware. The apparatus includes an acquisition module 21 and an input module 22.
The acquisition module 21 is configured to acquire an image to be segmented containing a target segmentation region;
the input module 22 is configured to input the image to be segmented into a trained neural network model for image segmentation to obtain the target segmentation region;
wherein the trained neural network model includes a contraction module and an expansion module;
the contraction module is configured to down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and
the expansion module is configured to gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size.
The acquisition module 21 includes:
an acquiring unit configured to acquire an image containing the target segmentation region and determine the center point of the target segmentation region on the image; and
a determining unit configured to crop the image with the center point as the center to generate an image to be segmented containing the target segmentation region, where the size of the image to be segmented is smaller than the size of the image.
The determining unit includes:
a first edge image subunit configured to perform a three-dimensional Fourier transform on the image to obtain a three-dimensional Fourier transform result, and to perform an inverse Fourier transform on the first harmonic of the result to obtain a first edge image;
an edge detection subunit configured to perform preset edge detection on the first edge image to obtain a cross-sectional contour image of the target segmentation region; and
a center point subunit configured to take the center point of the cross-sectional contour in the cross-sectional contour image as the center point of the target segmentation region on the image.
The edge detection subunit is configured to perform Canny edge detection on the first edge image to obtain a second edge image, and to perform Hough circle detection on the second edge image to obtain the cross-sectional contour image of the target segmentation region.
The contraction module includes at least three capsule convolution layers, each of which is a capsule convolution combination composed of capsule convolutions of two strides, with the large-stride capsule convolution located after the small-stride capsule convolution; the expansion module includes at least three capsule deconvolution layers, each of which is a capsule deconvolution combination composed of a capsule deconvolution and a capsule convolution, where the capsule convolution is a small-stride capsule convolution; the number of capsule convolution combinations is the same as the number of capsule deconvolution combinations.
The neural network model further includes a convolutional network module located before the contraction module, and the convolutional network module is configured to sequentially perform two-dimensional convolution and nonlinear activation optimization on the image to be segmented.
Both the capsule convolution layers and the capsule deconvolution layers have their parameters adjusted based on the dynamic routing algorithm.
In the technical solution of the image segmentation apparatus provided by the embodiments of the present disclosure, the apparatus includes an acquisition module and an input module. The acquisition module is configured to acquire an image to be segmented containing a target segmentation region, and the input module is configured to input the image to be segmented into a trained neural network model for image segmentation to obtain the target segmentation region; the trained neural network model includes a contraction module and an expansion module, the contraction module being configured to down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and the expansion module being configured to gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size. The improvement of the neural network model structure improves the image segmentation speed, accuracy and generality of the neural network model.
The image segmentation apparatus provided by the embodiments of the present disclosure can execute the image segmentation method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the image segmentation method.
Embodiment 5
FIG. 7 is a schematic structural diagram of a device provided in Embodiment 5 of the present disclosure. As shown in FIG. 7, the device includes a processor 301, a memory 302, an input apparatus 303 and an output apparatus 304. There may be at least one processor 301 in the device; one processor 301 is taken as an example in FIG. 7. The processor 301, the memory 302, the input apparatus 303 and the output apparatus 304 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 7.
As a computer-readable storage medium, the memory 302 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the image segmentation method in the embodiments of the present disclosure (for example, the acquisition module 21 and the input module 22). By running the software programs, instructions and modules stored in the memory 302, the processor 301 executes each functional application and data processing of the device, that is, implements the image segmentation method described above.
The memory 302 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the terminal, etc. In addition, the memory 302 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 302 may further include memory remotely located relative to the processor 301, and such remote memory may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input apparatus 303 can be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the device.
The output apparatus 304 may include a display device such as a display screen, for example, the display screen of a user terminal.
Embodiment 6
Embodiment 6 of the present disclosure also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform an image segmentation method, the method including:
acquiring an image to be segmented containing a target segmentation region;
inputting the image to be segmented into a trained neural network model, down-sampling the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restoring the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region.
Of course, the computer-executable instructions of the storage medium provided by the embodiments of the present disclosure are not limited to the method operations described above, and can also execute related operations in the image segmentation method provided by any embodiment of the present disclosure.
From the above description of the embodiments, those skilled in the art can clearly understand that the present disclosure can be implemented by software plus the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the related art, can be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a computer floppy disk, Read-Only Memory (ROM), Random Access Memory (RAM), flash memory (FLASH), hard disk or optical disk, and includes several instructions for making a computer device (which may be a personal computer, a server, or a network device, etc.) execute the image segmentation method described in each embodiment of the present disclosure.

Claims (20)

  1. An image segmentation method, comprising:
    acquiring an image to be segmented containing a target segmentation region;
    inputting the image to be segmented into a trained neural network model, down-sampling the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and gradually restoring the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region.
  2. The method according to claim 1, wherein acquiring the image to be segmented containing the target segmentation region comprises:
    acquiring an image containing the target segmentation region, and determining a center point of the target segmentation region on the image;
    cropping the image with the center point as the center to generate the image to be segmented containing the target segmentation region, wherein the size of the image to be segmented is smaller than the size of the image.
  3. The method according to claim 2, wherein determining the center point of the target segmentation region on the image comprises:
    performing a three-dimensional Fourier transform on the image to obtain a three-dimensional Fourier transform result;
    performing an inverse Fourier transform on the first harmonic of the three-dimensional Fourier transform result to obtain a first edge image;
    performing preset edge detection on the first edge image to obtain a cross-sectional contour image of the target segmentation region;
    taking a center point of a cross-sectional contour in the cross-sectional contour image as the center point of the target segmentation region on the image.
  4. The method according to claim 3, wherein the cross-sectional contour in the cross-sectional contour image is circular, and performing preset edge detection on the first edge image to obtain the cross-sectional contour image of the target segmentation region comprises:
    performing Canny edge detection on the first edge image to obtain a second edge image;
    performing Hough circle detection on the second edge image to obtain the cross-sectional contour image of the target segmentation region.
  5. The method according to any one of claims 1-4, wherein down-sampling the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes comprises:
    down-sampling the image to be segmented through at least three capsule convolution layers to extract the feature maps of different sizes, wherein each of the at least three capsule convolution layers comprises at least two connected capsule convolutions of different strides, and the large-stride capsule convolution of the two is located after the small-stride capsule convolution.
  6. The method according to claim 5, wherein gradually restoring the feature maps of different sizes through capsule deconvolution layers to generate the feature map of the target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region, comprises:
    gradually restoring the feature maps of different sizes through at least three capsule deconvolution layers to generate the feature map of the target size, so as to perform image segmentation on the image to be segmented to obtain the target segmentation region, wherein each of the at least three capsule deconvolution layers comprises at least a connected capsule convolution and capsule deconvolution, and the capsule convolution is the small-stride capsule convolution.
  7. The method according to claim 6, wherein the number of capsule convolution layers is the same as the number of capsule deconvolution layers.
  8. The method according to claim 1, wherein, before down-sampling the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, the method further comprises:
    sequentially performing two-dimensional convolution and nonlinear activation optimization on the image to be segmented.
  9. The method according to claim 7, wherein both the capsule convolution layers and the capsule deconvolution layers have their parameters adjusted based on a dynamic routing algorithm.
  10. An image segmentation apparatus, comprising:
    an acquisition module configured to acquire an image to be segmented containing a target segmentation region;
    an input module configured to input the image to be segmented into a trained neural network model for image segmentation to obtain the target segmentation region;
    wherein the trained neural network model comprises a contraction module and an expansion module,
    the contraction module is configured to down-sample the image to be segmented through capsule convolution layers composed of capsule convolutions of different strides to extract feature maps of different sizes, and
    the expansion module is configured to gradually restore the feature maps of different sizes through capsule deconvolution layers to generate a feature map of a target size.
  11. The apparatus according to claim 10, wherein the acquisition module is configured to
    acquire an image containing the target segmentation region, and determine a center point of the target segmentation region on the image; and
    crop the image with the center point as the center to generate the image to be segmented containing the target segmentation region, wherein the size of the image to be segmented is smaller than the size of the image.
  12. The apparatus according to claim 11, wherein the acquisition module is configured to determine the center point of the target segmentation region on the image by:
    performing a three-dimensional Fourier transform on the image to obtain a three-dimensional Fourier transform result;
    performing an inverse Fourier transform on the first harmonic of the three-dimensional Fourier transform result to obtain a first edge image;
    performing preset edge detection on the first edge image to obtain a cross-sectional contour image of the target segmentation region;
    taking a center point of a cross-sectional contour in the cross-sectional contour image as the center point of the target segmentation region on the image.
  13. The apparatus according to claim 12, wherein the cross-sectional contour in the cross-sectional contour image is circular, and the acquisition module is configured to perform preset edge detection on the first edge image to obtain the cross-sectional contour image of the target segmentation region by:
    performing Canny edge detection on the first edge image to obtain a second edge image;
    performing Hough circle detection on the second edge image to obtain the cross-sectional contour image of the target segmentation region.
  14. The apparatus according to any one of claims 10-13, wherein the contraction module is configured to down-sample the image to be segmented through at least three capsule convolution layers to extract the feature maps of different sizes, wherein each of the at least three capsule convolution layers comprises at least two connected capsule convolutions of different strides, and the large-stride capsule convolution of the two is located after the small-stride capsule convolution.
  15. The apparatus according to claim 14, wherein the expansion module is configured to gradually restore the feature maps of different sizes through at least three capsule deconvolution layers to generate the feature map of the target size, so as to obtain the target segmentation region, wherein each of the at least three capsule deconvolution layers comprises at least a connected capsule convolution and capsule deconvolution, and the capsule convolution is the small-stride capsule convolution.
  16. The apparatus according to claim 15, wherein the number of capsule convolution layers is the same as the number of capsule deconvolution layers.
  17. The apparatus according to claim 10, wherein the trained neural network model further comprises:
    a convolutional network module configured to sequentially perform two-dimensional convolution and nonlinear activation optimization on the image to be segmented.
  18. The apparatus according to claim 16, wherein both the capsule convolution layers and the capsule deconvolution layers have their parameters adjusted based on a dynamic routing algorithm.
  19. A device, comprising:
    at least one processor;
    a storage apparatus configured to store at least one program;
    wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the image segmentation method according to any one of claims 1-9.
  20. A storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to perform the image segmentation method according to any one of claims 1-9.
PCT/CN2019/110402 2019-08-01 2019-10-10 Image segmentation method, apparatus, device and storage medium WO2021017168A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910707182.9A CN110570394B (zh) 2019-08-01 2019-08-01 Medical image segmentation method, apparatus, device and storage medium
CN201910707182.9 2019-08-01

Publications (1)

Publication Number Publication Date
WO2021017168A1 (zh)

Family

ID=68774259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110402 WO2021017168A1 (zh) 2019-08-01 2019-10-10 Image segmentation method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN110570394B (zh)
WO (1) WO2021017168A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065480A (zh) * 2021-04-09 2021-07-02 暨南大学 Method and apparatus for recognizing the style of calligraphic works, electronic apparatus and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325737B (zh) * 2020-02-28 2024-03-15 上海志唐健康科技有限公司 Low-dose CT image processing method, apparatus and computer device
CN111951321A (zh) * 2020-08-21 2020-11-17 上海西门子医疗器械有限公司 Method for processing computed tomography images and computed tomography device
CN112950652B (zh) * 2021-02-08 2024-01-19 深圳市优必选科技股份有限公司 Robot and hand image segmentation method and apparatus therefor

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118479A (zh) * 2018-07-26 2019-01-01 中睿能源(北京)有限公司 Insulator defect identification and localization apparatus and method based on a capsule network
CN109344833A (zh) * 2018-09-04 2019-02-15 中国科学院深圳先进技术研究院 Medical image segmentation method, segmentation system and computer-readable storage medium
US20190080456A1 (en) * 2017-09-12 2019-03-14 Shenzhen Keya Medical Technology Corporation Method and system for performing segmentation of image having a sparsely distributed object
CN109711411A (zh) * 2018-12-10 2019-05-03 浙江大学 Image segmentation and recognition method based on capsule neurons

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090080738A1 (en) * 2007-05-01 2009-03-26 Dror Zur Edge detection in ultrasound images
CN105261006B (zh) * 2015-09-11 2017-12-19 浙江工商大学 Medical image segmentation algorithm based on the Fourier transform
CN108629774A (zh) * 2018-05-11 2018-10-09 电子科技大学 Ring-shaped object counting method based on the Hough circle transform
CN109840560B (zh) * 2019-01-25 2023-07-04 西安电子科技大学 Image classification method based on clustering integrated into a capsule network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190080456A1 (en) * 2017-09-12 2019-03-14 Shenzhen Keya Medical Technology Corporation Method and system for performing segmentation of image having a sparsely distributed object
CN109118479A (zh) * 2018-07-26 2019-01-01 中睿能源(北京)有限公司 Insulator defect identification and localization apparatus and method based on a capsule network
CN109344833A (zh) * 2018-09-04 2019-02-15 中国科学院深圳先进技术研究院 Medical image segmentation method, segmentation system and computer-readable storage medium
CN109711411A (zh) * 2018-12-10 2019-05-03 浙江大学 Image segmentation and recognition method based on capsule neurons

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065480A (zh) * 2021-04-09 2021-07-02 暨南大学 Method and apparatus for recognizing the style of calligraphic works, electronic apparatus and storage medium
CN113065480B (zh) * 2021-04-09 2023-07-07 暨南大学 Method and apparatus for recognizing the style of calligraphic works, electronic apparatus and storage medium

Also Published As

Publication number Publication date
CN110570394A (zh) 2019-12-13
CN110570394B (zh) 2023-04-28

Similar Documents

Publication Publication Date Title
US11182896B2 (en) Automated segmentation of organ chambers using deep learning methods from medical imaging
WO2021017168A1 (zh) Image segmentation method, apparatus, device and storage medium
CN108776969B (zh) Breast ultrasound image tumor segmentation method based on a fully convolutional network
Mahapatra et al. Image super-resolution using progressive generative adversarial networks for medical image analysis
Chakravarty et al. RACE-net: a recurrent neural network for biomedical image segmentation
WO2020001217A1 (zh) Method for segmenting a dissected aorta in CT images based on a convolutional neural network
CN110337669B (zh) Pipeline method for multi-label segmentation of anatomical structures in medical images
JP7433297B2 (ja) Deep-learning-based co-registration
WO2021244661A1 (zh) Method and system for determining blood vessel information in an image
JP2020510463A (ja) Automated segmentation using fully convolutional networks
CN111557020A (zh) Cardiac CTA anatomical structure segmentation system based on a fully convolutional neural network
CN107492071A (zh) Medical image processing method and device
US20220012890A1 (en) Model-Based Deep Learning for Globally Optimal Surface Segmentation
WO2021136368A1 (zh) Method and apparatus for automatically detecting the pectoralis major region in mammography images
Habijan et al. Whole heart segmentation from CT images using 3D U-net architecture
Mahapatra et al. Progressive generative adversarial networks for medical image super resolution
WO2024021523A1 (zh) Fully automatic cerebral cortical surface segmentation method and system based on graph networks
US20230394670A1 (en) Anatomically-informed deep learning on contrast-enhanced cardiac mri for scar segmentation and clinical feature extraction
CN113298742A (zh) Multimodal retinal image fusion method and system based on image registration
CN108898578B (zh) Medical image processing method, apparatus and computer storage medium
JP2020109614A (ja) Image processing apparatus, image processing system, image processing method, and program
Liu et al. Left atrium segmentation in CT volumes with fully convolutional networks
He et al. Automatic left ventricle segmentation from cardiac magnetic resonance images using a capsule network
Pang et al. A modified scheme for liver tumor segmentation based on cascaded FCNs
CN112164447B (zh) Image processing method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939188

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.02.2023)
