CN112419271A - Image segmentation method and device and computer readable storage medium - Google Patents

Image segmentation method and device and computer readable storage medium

Info

Publication number
CN112419271A
CN112419271A
Authority
CN
China
Prior art keywords
image segmentation
blood vessel
image
training
vessel image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011325510.8A
Other languages
Chinese (zh)
Other versions
CN112419271B (en)
Inventor
袁懿伦
高扬
周凌霄
张崇磊
宋伟
袁小聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shenguangsu Technology Co ltd
Original Assignee
Shenzhen Shenguangsu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shenguangsu Technology Co ltd filed Critical Shenzhen Shenguangsu Technology Co ltd
Publication of CN112419271A
Application granted
Publication of CN112419271B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method, an image segmentation device, and a computer-readable storage medium. A training sample set is constructed based on a target blood vessel image data set and a corresponding label set; a hybrid deep learning network comprising a fully convolutional network and U-net is trained on the training sample set to obtain an image segmentation model; and the blood vessel image to be segmented is input into the image segmentation model for segmentation. By performing blood vessel image segmentation with the hybrid deep learning network, the method emphasizes the global features of the image and effectively improves vessel segmentation accuracy and robustness.

Description

Image segmentation method and device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image segmentation method and apparatus, and a computer-readable storage medium.
Background
Photoacoustic imaging technology offers molecular-specific contrast with lateral resolution down to the cellular level within the optical diffraction limit, and has found widespread application in vascular imaging. Blood vessel images carry basic medical information and can provide effective guidance for professional diagnosis.
Blood vessel image segmentation is an important task in biomedical image analysis, and modern image processing techniques have contributed substantially to it. At present, the prior art generally segments blood vessel images with threshold segmentation, region growing, maximum entropy, or k-means clustering methods, all of which suffer from limited segmentation accuracy.
Disclosure of Invention
Embodiments of the present invention provide an image segmentation method, an image segmentation apparatus, and a computer-readable storage medium, which can at least solve the problem in the related art that when a blood vessel image is segmented, the segmentation accuracy is relatively limited.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides an image segmentation method, including:
constructing a training sample set based on the target blood vessel image data set and the corresponding label set; the target blood vessel image data set comprises a plurality of blood vessel image samples, and the label set comprises classification labels corresponding to the blood vessel image samples;
training a preset hybrid deep learning network by adopting the training sample set to obtain an image segmentation model; the hybrid deep learning network comprises a first fully convolutional neural network and a second fully convolutional neural network, wherein the first fully convolutional neural network performs two upsampling operations with a stride of 2 and one upsampling operation with a stride of 8 during deconvolution, the convolutional layers of the second fully convolutional neural network form a U-shaped structure, and the second fully convolutional neural network performs four downsampling operations and four upsampling operations, each with a stride of 2;
and inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
In order to achieve the above object, a second aspect of embodiments of the present invention provides an image segmentation apparatus, including:
the construction module is used for constructing a training sample set based on the target blood vessel image data set and the corresponding label set; the target blood vessel image data set comprises a plurality of blood vessel image samples, and the label set comprises classification labels corresponding to the blood vessel image samples;
the training module is used for training a preset hybrid deep learning network by adopting the training sample set to obtain an image segmentation model; the hybrid deep learning network comprises a first fully convolutional neural network and a second fully convolutional neural network, wherein the first fully convolutional neural network performs two upsampling operations with a stride of 2 and one upsampling operation with a stride of 8 during deconvolution, the convolutional layers of the second fully convolutional neural network form a U-shaped structure, and the second fully convolutional neural network performs four downsampling operations and four upsampling operations, each with a stride of 2;
and the segmentation module is used for inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
To achieve the above object, a third aspect of embodiments of the present invention provides an electronic apparatus, including: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of any of the image segmentation methods described above.
To achieve the above object, a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of any one of the image segmentation methods described above.
According to the image segmentation method, the image segmentation device, and the computer-readable storage medium described above, a training sample set is constructed based on a target blood vessel image data set and a corresponding label set; a hybrid deep learning network comprising a fully convolutional network and U-net is trained on the training sample set to obtain an image segmentation model; and the blood vessel image to be segmented is input into the image segmentation model for segmentation. By performing blood vessel image segmentation with the hybrid deep learning network, the method emphasizes the global features of the image and effectively improves vessel segmentation accuracy and robustness.
Other features and corresponding effects of the present invention are set forth in later portions of the specification, and it should be understood that at least some of these effects will become apparent from the description.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic basic flowchart of an image segmentation method according to a first embodiment of the present invention;
fig. 2 is a schematic diagram of a network structure of an FCN according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of a network structure of U-net according to a first embodiment of the present invention;
fig. 4 is a schematic structural diagram of a hybrid deep learning network according to a first embodiment of the present invention;
fig. 5 is a schematic visualization of the results of conventional image segmentation methods according to the first embodiment of the present invention;
fig. 6 is a box plot of evaluation indexes of the deep learning methods according to the first embodiment of the present invention;
fig. 7 is a schematic visualization diagram of an image segmentation method based on a deep learning network according to a first embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an image segmentation apparatus according to a second embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to a third embodiment of the invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment:
First, this embodiment reviews the conventional image segmentation algorithms, which mainly include the following:
the threshold segmentation method is to select an appropriate threshold pixel intensity as a segmentation line. Thus, a clear classification between foreground and background can be observed. Two major drawbacks of the thresholding method are the high sensitivity of thresholding and the lack of morphological information considerations.
The region growing (RG) method groups pixels or sub-regions into larger regions according to a predefined criterion. The basic idea is to start from a set of manually selected seed points, which may be single pixels or small regions. The first step merges adjacent pixels or regions with similar attributes into new growing seed points; the next step repeats this process until the region converges (no further seed points can be found). Clearly, a key issue with RG is that there is no principled way to choose the initial growth points, which must be determined empirically.
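A minimal sketch of the grouping process described above, using 4-connected growth from a single seed; the intensity-tolerance criterion and the toy image are assumptions for illustration:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    seen = np.zeros((h, w), dtype=bool)
    region = np.zeros((h, w), dtype=np.uint8)
    q = deque([seed])
    seen[seed] = True
    seed_val = float(image[seed])
    while q:
        y, x = q.popleft()
        if abs(float(image[y, x]) - seed_val) <= tol:
            region[y, x] = 1
            # Only accepted pixels propagate growth to their neighbours.
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return region

# Toy image: dark region (values near 10) next to a bright one (near 200).
img = np.array([[10, 12, 200],
                [11, 13, 210],
                [205, 14, 220]], dtype=float)
mask = region_grow(img, seed=(0, 0), tol=10)
```

The result depends entirely on the seed at (0, 0); seeding inside the bright region would instead grow that region, which is exactly the empirical-choice problem noted above.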
The maximum entropy method uses entropy to describe the degree of uncertainty of information. The essence of the maximum entropy principle is that the probability of an event in the system satisfies all known constraints while assuming nothing about unknown information; in other words, unknown information is treated as equiprobable. In maximum entropy image segmentation, the total entropy of the image is computed for each candidate threshold, the maximum is found, and the threshold corresponding to the maximum entropy is taken as the final threshold. Pixels whose gray level exceeds this threshold are classified as foreground, and the rest as background.
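The per-threshold entropy computation can be sketched as follows (a Kapur-style formulation; the toy two-level image is an assumption):

```python
import numpy as np

def max_entropy_threshold(image, levels=256):
    """Maximum-entropy threshold: pick the t that maximises the sum of
    the Shannon entropies of the background and foreground distributions."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue  # one class empty: entropy undefined, skip
        qb = p[:t][p[:t] > 0] / pb          # normalised background distribution
        qf = p[t:][p[t:] > 0] / pf          # normalised foreground distribution
        h = -np.sum(qb * np.log(qb)) - np.sum(qf * np.log(qf))
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Toy image: 90 dark pixels (level 30) and 10 bright pixels (level 220).
img = np.concatenate([np.full(90, 30), np.full(10, 220)]).reshape(10, 10)
t = max_entropy_threshold(img)
mask = (img >= t).astype(np.uint8)
```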
The K-means clustering method is an iterative algorithm consisting mainly of four steps: a) randomly select K initial centroids; b) label each sample according to its distance to each cluster center; c) compute and update a new centroid for each class; d) repeat steps b) and c) until the centers converge.
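Steps a) to d) can be sketched on raw pixel intensities (1-D k-means; the sample values are hypothetical):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal k-means on pixel intensities, following steps a)-d) above."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(values, size=k, replace=False).astype(float)   # a)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # b) label each sample by its nearest cluster centre
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        # c) recompute each centroid (keep the old one if a cluster empties)
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):                                   # d)
            break
        centroids = new
    return labels, centroids

# Hypothetical pixel intensities: a dark cluster and a bright cluster.
pix = np.array([10., 12., 11., 200., 205., 198.])
labels, cents = kmeans_1d(pix, k=2)
```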
However, the conventional segmentation algorithms above all focus on local features of the image, disregard its spatial information, and yield only suboptimal segmentation results.
In order to solve the technical problem in the related art that segmentation accuracy is relatively limited when an image segmentation algorithm segments blood vessel images, this embodiment provides an image segmentation method. As shown in fig. 1, a schematic basic flowchart of the method, the image segmentation method provided by this embodiment includes the following steps:
step 101, constructing a training sample set based on the target blood vessel image data set and the corresponding label set.
Specifically, the target blood vessel image data set of this embodiment includes a plurality of blood vessel image samples, and the label set includes a classification label corresponding to each blood vessel image sample.
In this embodiment, in vivo blood vessel images may be acquired from the ear of a Swiss Webster mouse using an OR-PAM system that employs a surface plasmon resonance sensor as the ultrasound detector. The maximum amplitude of each PA A-line is projected along the depth direction to reconstruct a maximum amplitude projection (MAP) image. The lateral resolution of the system is around 4.5 μm, which allows the blood vessels to be visualized. The surface plasmon resonance sensor responds to ultrasound over a wide bandwidth, giving the OR-PAM system a depth resolution of around 7.6 μm, and capturing a 512 × 512 pixel blood vessel image takes about 10 minutes.
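The MAP reconstruction described above amounts to a maximum projection along the depth axis; a toy sketch with an assumed (depth, height, width) volume layout:

```python
import numpy as np

# Toy PA volume of A-line amplitudes, laid out as (depth, height, width).
volume = np.zeros((4, 2, 2))
volume[1, 0, 0] = 5.0    # strongest echo along the (0, 0) A-line
volume[3, 1, 1] = 2.0    # strongest echo along the (1, 1) A-line

# Maximum amplitude projection (MAP): take the maximum of each A-line
# along the depth axis, collapsing the volume to a 2-D image.
map_image = volume.max(axis=0)
```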
In addition, the label sets corresponding to all data set images can be obtained through manual annotation with Labelme, a graphical image annotation tool developed by the Massachusetts Institute of Technology.
In an optional implementation manner of this embodiment, before constructing the training sample set based on the target blood vessel image data set and the corresponding label set, the method further includes: obtaining effective blood vessel image samples with image quality meeting preset quality requirements from a limited number of blood vessel image samples; and performing data enhancement processing on the effective blood vessel image samples, and constructing a target blood vessel image data set with the sample quantity meeting the preset quantity requirement.
In practice, the images obtained with the OR-PAM system are limited in number, and some must be discarded due to quality problems such as noise, breakpoints, or discontinuities. Because the PA system therefore provides too few images, this embodiment expands the set of effective blood vessel images with data enhancement methods such as cropping, flipping, and mirroring, so as to avoid overfitting and low training accuracy during subsequent model training. In addition, the data set images may be cropped to 256 × 256 pixels to speed up training. From the final data set, a portion may be randomly selected as the test set, with the remaining images randomly assigned to the training and validation sets.
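A sketch of the kind of augmentation described, assuming simple crops, flips, and mirrors (the exact transforms used in the patent are not specified beyond these categories):

```python
import numpy as np

def augment(image):
    """Yield simple augmented variants of one vessel image:
    corner crops, a vertical flip, and a horizontal mirror."""
    h, w = image.shape[:2]
    yield image[: h // 2, : w // 2]   # top-left corner crop
    yield image[h // 2:, w // 2:]     # bottom-right corner crop
    yield np.flipud(image)            # vertical flip
    yield np.fliplr(image)            # horizontal mirror

img = np.arange(16).reshape(4, 4)
variants = list(augment(img))         # each original yields four extra samples
```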
And 102, training a preset hybrid deep learning network by adopting a training sample set to obtain an image segmentation model.
Specifically, convolutional neural networks (CNNs) are a powerful class of visual models that produce hierarchies of features, and CNN-based semantic segmentation has advanced the state of the art. Although earlier models such as GoogLeNet, VGG, and AlexNet perform well, none of them can be trained end to end for dense prediction, because the fully connected layers before the network output require labels of fixed size. Moreover, a fully connected layer flattens the extracted features into a one-dimensional vector, discarding the spatial information of the feature maps. The fully convolutional network (FCN) replaces the fully connected layers with convolutional layers, which avoids image pre- and post-processing and thereby preserves spatial information.
The hybrid deep learning network of this embodiment includes a first fully convolutional neural network (FCN) and a second fully convolutional neural network (U-net), both built from 3 × 3 convolution kernels. During deconvolution, the first network performs two upsampling operations with a stride of 2 and one upsampling operation with a stride of 8; the convolutional layers of the second network form a U-shaped structure that performs four downsampling operations and four upsampling operations, each with a stride of 2.
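As a sanity check on the decoder strides, assuming the FCN encoder has a total downsampling factor of 32 (typical for VGG-style FCN-8s backbones; the patent does not state it), the strides 2, 2, and 8 multiply back to the resolution of the 256 × 256 training crops:

```python
# Check that the decoder strides undo the encoder's total downsampling.
input_size = 256                    # training images are cropped to 256 x 256
encoder_downsampling = 32           # assumed total stride of the FCN encoder
feature_size = input_size // encoder_downsampling   # 8 x 8 feature map

size = feature_size
for stride in (2, 2, 8):            # two stride-2 upsamplings, one stride-8
    size *= stride                  # 2 * 2 * 8 = 32, cancelling the encoder
```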
As shown in fig. 2, a schematic diagram of the FCN network structure provided in this embodiment, every convolution kernel except those in the last layer is followed by a rectified linear unit (ReLU), and there is no significant difference here between upsampling and deconvolution. The network therefore adopts upsampling to reduce the number of training parameters, and uses two convolution operations and a dropout block to prevent overfitting during the convolution-to-deconvolution transition.
Fig. 3 is a schematic diagram of the U-net network structure provided in this embodiment. U-net is a model developed from the FCN; it is highly robust and widely applied in both academia and industry. Although both networks are fully convolutional, they differ subtly in their connection layers: U-net combines low-level features from the encoding path with high-level features from the decoding path, which effectively avoids the feature loss caused by pooling layers. In addition, the network uses concatenation layers instead of addition layers, fusing low-level and high-level features rather than simply adding corresponding pixels, thereby expanding the channel capacity.
Fig. 4 is a schematic structural diagram of the hybrid deep learning network provided in this embodiment. It should be further explained that the hybrid network Hy-Net, based on the FCN and U-Net, combines the results of the two with a concatenation block (concatenate) and an activation block (sigmoid). The final probability map, i.e., the network output, is processed through the sigmoid function with a default threshold of 0.5: map entries greater than 0.5 are classified as foreground and the remaining entries as background.
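A sketch of this fusion head; how the concatenated channels are collapsed before the sigmoid is not specified in the text, so the channel mean below stands in for a learned 1 × 1 convolution:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hy_net_fuse(logits_fcn, logits_unet, thresh=0.5):
    """Concatenate the two branch outputs along a channel axis, collapse
    them (mean as a stand-in for a learned 1x1 convolution), apply sigmoid,
    and binarise at `thresh` (default 0.5, as in the text)."""
    stacked = np.stack([logits_fcn, logits_unet], axis=-1)  # "concatenate" block
    fused = stacked.mean(axis=-1)                           # assumed collapse step
    prob = sigmoid(fused)                                   # "sigmoid" block
    return (prob > thresh).astype(np.uint8), prob

# Hypothetical pre-activation outputs of the two branches.
fcn = np.array([[2.0, -3.0], [0.5, -0.5]])
unet = np.array([[1.0, -2.0], [1.5, -1.5]])
mask, prob = hy_net_fuse(fcn, unet)
```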
In an optional implementation of this embodiment, training the preset hybrid deep learning network with the training sample set to obtain an image segmentation model includes: setting the initial learning rate to 0.0001 and the mini-batch size to 2, and iteratively training the network on the training sample set with a stochastic gradient descent algorithm; when the loss function value obtained in iterative training converges to a preset value, the network model of the current iteration is determined to be the trained image segmentation model.
Specifically, in this embodiment the training process is repeated over many iterations. In each iteration, a loss function (for example, cross-entropy loss) is computed between the output predicted by the neural network and the classification label annotated on the sample. The trainable parameters in the network are then updated by stochastic gradient descent, adjusting parameters such as the network weights so as to reduce the loss value of the next iteration. When the loss value meets the preset criterion, the model is judged to have converged and training of the whole neural network model is complete; otherwise, the next training iteration continues until the convergence condition is met.
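The loop described above can be sketched with a toy logistic model in place of the network (the learning rate 0.0001 and batch size 2 come from the text; cross-entropy is the suggested loss):

```python
import numpy as np

def bce(pred, label, eps=1e-7):
    """Binary cross-entropy loss over a mini-batch."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred))

# Toy stand-in for the segmentation network: a single logistic unit.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 16))           # "mini-batch" of 2, matching the text
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)
w = np.zeros((16, 1))
lr = 0.0001                            # initial learning rate from the text

losses = []
for step in range(200):
    p = 1 / (1 + np.exp(-x @ w))       # forward pass (sigmoid output)
    losses.append(bce(p, y))           # loss vs. the annotated labels
    grad = x.T @ (p - y) / len(x)      # gradient of BCE w.r.t. w
    w -= lr * grad                     # stochastic gradient descent update
```

In the real training loop this update runs until the loss converges to the preset value, at which point the current weights define the trained model.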
And 103, inputting the blood vessel image to be segmented into an image segmentation model for image segmentation.
Specifically, the Hy-Net of this embodiment refines the results of the two models by combining the feature outputs of the FCN and U-Net, effectively avoiding dependence on the output of a single model. Compared with conventional methods, thanks to the convolution kernels (feature descriptors) and their parameter-sharing property, the image segmentation model of this embodiment fully considers the global features of the image and achieves higher accuracy and robustness when segmenting blood vessel images.
In an optional implementation of this embodiment, before inputting the blood vessel image to be segmented into the image segmentation model, the method further includes: inputting a preset test sample set into the image segmentation model to obtain test-output classification labels, and computing the correlation between these labels and the annotated classification labels of the test sample set. Correspondingly, when the correlation is greater than a preset correlation threshold, the image segmentation model is determined to be valid, and the step of inputting the blood vessel image to be segmented into the image segmentation model for segmentation is then executed.
Specifically, after the image segmentation model is trained, its validity is verified with the test sample set: the test samples are input into the trained model, and the correlation between the output classification labels and the original labels of the test set is compared to judge the model's validity. When the correlation between the test output and the original data exceeds the preset threshold, the trained model is judged valid and correct, and the valid model can then be used to segment the images to be segmented; otherwise, the model is insufficiently trained and needs further optimization to ensure segmentation accuracy in actual use.
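A sketch of such a validity check; the patent does not specify the correlation measure, so Pearson correlation between flattened label maps is an assumption:

```python
import numpy as np

def label_correlation(pred_labels, true_labels):
    """Pearson correlation between predicted and annotated label maps,
    used here as an assumed form of the model-validity check."""
    return np.corrcoef(pred_labels.ravel(), true_labels.ravel())[0, 1]

# Hypothetical test-set output vs. annotation (one pixel disagrees).
pred = np.array([[1, 0], [1, 1]], dtype=float)
true = np.array([[1, 0], [1, 0]], dtype=float)
r = label_correlation(pred, true)
valid = r > 0.5   # compare against a preset correlation threshold
```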
It should be noted that four indexes, namely the Dice coefficient (DC), intersection over union (IoU), sensitivity (Sen), and accuracy (Acc), are applied in every test experiment of this embodiment to quantify the performance of the various segmentation methods.
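The four indexes have standard definitions in terms of true/false positives and negatives, which can be sketched as:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice coefficient (DC), intersection over union (IoU),
    sensitivity (Sen), and accuracy (Acc) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)       # foreground correctly predicted
    fp = np.sum(pred & ~gt)      # background predicted as foreground
    fn = np.sum(~pred & gt)      # foreground missed
    tn = np.sum(~pred & ~gt)     # background correctly predicted
    dc = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    sen = tp / (tp + fn)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return dc, iou, sen, acc

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
dc, iou, sen, acc = seg_metrics(pred, gt)
```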
In this embodiment, the DC, IoU, Sen, and Acc of four conventional, non-deep-learning methods are compared. The threshold segmentation method, using a pixel intensity of 100 as the threshold, reaches a segmentation accuracy of 97.40%; the remaining three indexes are poor, with mean values of 70.98%, 56.09%, and 61.64%, respectively.
In addition, the region growing method requires selection of an initial seed point. The image pixels are therefore sorted in ascending order of curvature, and the pixel with the minimum curvature is taken as the initial seed point. This selection ensures that the algorithm starts from the smoothest region of the image and reduces the number of segmentation passes. The threshold (the maximum density distance among the 8 pixels around the centroid) is set to 0.8. The evaluation scores of the region growing method for DC, IoU, Sen, and Acc are 64.30%, 49.70%, 51.96%, and 97.26%, respectively.
The maximum entropy method achieves 50.77%, 35.33%, 95.95% and 91.02% for DC, IoU, Sen and Acc, respectively.
K-means clustering segmentation is implemented in MATLAB using the imseg function, obtaining DC, IoU, Sen, and Acc of 75.21%, 60.93%, 70.92%, and 97.59%, respectively.
Fig. 5 visualizes the results of the conventional image segmentation methods to better illustrate their main differences. The raw test images, shown in fig. 5(a) in RGB channels, represent the raw data captured by the PA imaging system. Fig. 5(b) shows the manually annotated segmentation, and fig. 5(c) to (f) show the results of the threshold segmentation, region growing, maximum entropy, and K-means clustering methods, respectively. As columns 3, 4, and 5 of fig. 5 make clear, the conventional methods perform well on bright images with sharp contour boundaries but poorly on images with unclear boundaries (columns 1, 2, and 6). Evidently, the four conventional segmentation methods lack robustness and generalization.
In addition, this embodiment further compares the DC, IoU, Sen, and Acc of the three deep learning methods (namely FCN, U-Net, and Hy-Net). Fig. 6 is a box plot of the evaluation indexes of the deep learning methods, in which panels (a) to (d) of fig. 6 correspond to DC (labeled DICE in fig. 6), IoU, Sen, and Acc, respectively. The minimum values of the FCN on DC, IoU, Sen, and Acc are 60.31%, 43.17%, 53.23%, and 92.82%, and the maximum values are 84.07%, 72.52%, 87.43%, and 99.71%, respectively. The minimum values of U-net on DC, IoU, Sen, and Acc are 66.38%, 49.68%, 52.20%, and 96.03%, and the maximum values are 96.77%, 93.75%, 98.29%, and 99.94%, respectively. The minimum values of Hy-Net on DC, IoU, Sen, and Acc are 69.83%, 53.65%, 75.47%, and 95.32%, and the maximum values are 94.67%, 89.87%, 97.49%, and 99.90%, respectively. The median Dice scores of FCN, U-Net, and Hy-Net are 66.32%, 83.79%, and 85.13%; the median IoU values of the three deep learning methods are 49.61%, 72.10%, and 74.11%; the median Sen values are 69.57%, 83.36%, and 90.62%; and the median Acc values are 96.38%, 98.11%, and 98.18%, respectively.
Among the deep learning methods, the FCN performs worst, U-Net second, and Hy-Net best. Specifically, U-net scores 13.71%, 17.97%, 13.37%, and 1.46% higher than the FCN; Hy-Net scores 15.34%, 20.05%, 18.62%, and 1.55% higher than the FCN; and Hy-Net scores 1.63%, 2.08%, 5.25%, and 0.09% higher than U-Net.
FIG. 7 is a visual schematic diagram of the image segmentation methods based on deep learning networks; fig. 7(a) to (c) show the image segmentation results of FCN, U-Net, and Hy-Net, respectively. The visualization shows that Hy-Net achieves a high degree of overlap with the labels for both large and small vessels.
Therefore, both the quantitative and the visualization results show that Hy-Net outperforms FCN and U-Net and exhibits good stability and robustness, chiefly because FCN and U-net each show segmentation deficiencies on their own. This can be explained from two aspects: first, regardless of hyper-parameter tuning, increased iteration counts, or a larger training set, FCN and U-net are limited by the characteristics of their models; second, Hy-Net refines the results of the two models by combining the feature outputs of the FCN and U-Net, effectively avoiding dependence on the output of a single model.
It should be noted that uncertainty in the choice of the network binarization threshold may lead to under- or over-segmentation. In this embodiment a set of thresholds was tested, and the following configuration (FCN: 80, U-Net: 100, Hy-Net: 150) yields excellent results.
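A sketch of such a threshold sweep; the 0-255 scale of the probability map and Dice as the selection criterion are assumptions, while the candidate values 80, 100, and 150 come from the text:

```python
import numpy as np

def best_threshold(prob_map, gt, candidates):
    """Sweep binarisation thresholds over a (here assumed 0-255) network
    output map and keep the one with the highest Dice score vs. the labels."""
    best_t, best_dice = None, -1.0
    for t in candidates:
        pred = prob_map > t
        tp = np.sum(pred & gt)
        dice = 2 * tp / (pred.sum() + gt.sum() + 1e-9)
        if dice > best_dice:
            best_t, best_dice = t, dice
    return best_t, best_dice

gt = np.array([[1, 0], [1, 0]], dtype=bool)
prob = np.array([[180, 60], [120, 90]])   # hypothetical 0-255 network output
t, d = best_threshold(prob, gt, candidates=(80, 100, 150))
```

Too low a threshold over-segments (extra foreground), too high a threshold under-segments (missed vessels); the sweep picks the middle candidate here.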
In conclusion, for the deep learning network (Hy-Net) for PA image vessel segmentation provided by this embodiment, the above evaluation results show that the method achieves higher accuracy and robustness than the conventional methods, and that Hy-Net significantly outperforms FCN and U-Net on all four evaluation indexes.
According to the image segmentation method provided by the embodiment of the invention, a training sample set is constructed based on a target blood vessel image data set and a corresponding label set; a hybrid deep learning network comprising a fully convolutional neural network and a U-Net is trained on the training sample set to obtain an image segmentation model; and the blood vessel image to be segmented is input into the image segmentation model for image segmentation. By implementing this method, blood vessel image segmentation is performed with a hybrid deep learning network that emphasizes the overall characteristics of the image, which effectively improves vessel segmentation precision and robustness.
Second embodiment:
In order to solve the technical problem that segmentation accuracy is relatively limited when image segmentation optimization algorithms in the related art are applied to blood vessel images, this embodiment provides an image segmentation apparatus. Referring to FIG. 8, the image segmentation apparatus of this embodiment includes:
a constructing module 801, configured to construct a training sample set based on a target blood vessel image data set and a corresponding label set; the target blood vessel image data set comprises a plurality of blood vessel image samples, and the label set comprises classification labels corresponding to the blood vessel image samples;
the training module 802 is configured to train a preset hybrid deep learning network by using the training sample set to obtain an image segmentation model; the hybrid deep learning network comprises a first fully convolutional neural network and a second fully convolutional neural network, wherein in its deconvolution process the first fully convolutional neural network performs two up-sampling operations with a step size of 2 and one up-sampling operation with a step size of 8, the convolutional layers of the second fully convolutional neural network form a U-shaped structure, and the second fully convolutional neural network performs four up-sampling operations and four down-sampling operations, each with a step size of 2;
and the segmentation module 803 is used for inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
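As a sanity check on the sampling geometry described for the two branches, the stated step sizes multiply out consistently. This is pure arithmetic over the step sizes given above and makes no claim about the actual layer implementation:

```python
from math import prod

# Up-sampling step sizes described for the first (FCN-style) branch:
fcn_upsample_steps = [2, 2, 8]
# Down- and up-sampling step sizes described for the second (U-shaped) branch:
unet_down_steps = [2, 2, 2, 2]
unet_up_steps = [2, 2, 2, 2]

# The FCN branch restores a feature map reduced 32x overall (as in FCN-8s) ...
print(prod(fcn_upsample_steps))  # 32
# ... while the U-shaped branch is symmetric: 16x down, then 16x back up.
print(prod(unet_down_steps), prod(unet_up_steps))  # 16 16
```

The symmetric 16x/16x factors are what give the second branch its U-shaped structure, while the asymmetric 2-2-8 schedule lets the first branch recover full resolution in fewer deconvolution stages.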
In some implementations of this embodiment, the constructing module 801 is further configured to: before the training sample set is constructed based on the target blood vessel image data set and the corresponding label set, obtain, from a limited number of blood vessel image samples, effective blood vessel image samples whose image quality meets a preset quality requirement; and perform data enhancement processing on the effective blood vessel image samples to construct the target blood vessel image data set, whose number of samples meets a preset quantity requirement.
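The data enhancement step can be sketched as follows. Flips and a rotation are common choices for enlarging a small vessel-image data set, though the embodiment does not specify which transforms are used; the `enhance` helper and the sample data are illustrative assumptions.

```python
import numpy as np

def enhance(sample):
    """Yield the original sample plus simple geometric variants
    (horizontal flip, vertical flip, and a 90-degree rotation)."""
    yield sample
    yield np.fliplr(sample)
    yield np.flipud(sample)
    yield np.rot90(sample)

# Stand-in for the "effective" samples that passed the quality check.
valid_samples = [np.arange(9).reshape(3, 3)]
dataset = [aug for s in valid_samples for aug in enhance(s)]
print(len(dataset))  # 4: each effective sample yields four training samples
```

Because the label maps must stay aligned with the images, the same transform would be applied to each sample's classification label in practice.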
In some implementations of this embodiment, the training module 802 is specifically configured to: set the initial learning rate to 0.0001 and the mini-batch size to 2, and iteratively train the preset hybrid deep learning network on the training sample set according to a stochastic gradient descent algorithm; and when the loss function value obtained by iterative training converges to a preset function value, determine the network model obtained by the current iteration as the trained image segmentation model.
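A minimal sketch of the training procedure above, using the stated hyper-parameters on a toy one-parameter quadratic loss in place of the real network; the data, the loss target standing in for the "preset function value", and the iteration cap are illustrative assumptions.

```python
import numpy as np

LEARNING_RATE = 1e-4   # initial learning rate from the embodiment
BATCH_SIZE = 2         # mini-batch size from the embodiment
LOSS_TARGET = 1e-3     # stand-in for the "preset function value"

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.01, size=64)  # toy regression targets
w = 5.0                                          # toy model parameter

for step in range(200_000):                   # cap iterations as a safeguard
    batch = rng.choice(data, size=BATCH_SIZE)  # stochastic mini-batch
    grad = np.mean(2.0 * (w - batch))          # gradient of (w - x)^2
    w -= LEARNING_RATE * grad                  # SGD update
    loss = np.mean((w - data) ** 2)            # loss over the whole set
    if loss <= LOSS_TARGET:                    # "converges to preset value"
        break

print(round(w, 1))  # approaches the data mean, 2.0
```

The structure mirrors the embodiment: stochastic mini-batches of size 2, an update scaled by the 0.0001 learning rate, and a stopping rule that fixes the current parameters once the loss reaches the preset value.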
In some implementations of this embodiment, the image segmentation apparatus further comprises a test module, configured to: before the blood vessel image to be segmented is input into the image segmentation model for image segmentation, input a preset test sample set into the image segmentation model to obtain test-output classification labels; calculate the correlation between the test-output classification labels and the classification labels annotated in the test sample set; and when the correlation is greater than a preset correlation threshold, determine that the image segmentation model is valid. Correspondingly, the segmentation module 803 is specifically configured to input the blood vessel image to be segmented into the image segmentation model for image segmentation only when the image segmentation model is valid.
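The validation step can be sketched as follows. Pearson correlation is used here as one plausible choice of correlation measure, and the threshold value is an illustrative assumption; the embodiment fixes neither.

```python
import numpy as np

CORRELATION_THRESHOLD = 0.8  # illustrative stand-in for the preset threshold

def model_is_valid(predicted_labels, annotated_labels,
                   threshold=CORRELATION_THRESHOLD):
    """Validate the model by correlating the test-output labels with the
    test set's annotated labels (Pearson correlation assumed here)."""
    r = np.corrcoef(np.ravel(predicted_labels),
                    np.ravel(annotated_labels))[0, 1]
    return bool(r > threshold)

truth = np.tile([0, 1], 10)   # 20 annotated pixel labels
pred = truth.copy()
pred[-1] = 0                  # the model mislabels a single pixel
print(model_is_valid(pred, truth))  # True: correlation ~0.90 > 0.8
```

Only when this check passes would the segmentation module feed the blood vessel image to be segmented into the model.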
It should be noted that the image segmentation methods in the foregoing embodiments can be implemented based on the image segmentation apparatus provided in this embodiment. As will be clear to those skilled in the art, for convenience and brevity of description, the specific working process of the image segmentation apparatus described in this embodiment may refer to the corresponding process in the foregoing method embodiments and is not repeated here.
By adopting the image segmentation apparatus provided by this embodiment, a training sample set is constructed based on a target blood vessel image data set and a corresponding label set; a hybrid deep learning network comprising a fully convolutional neural network and a U-Net is trained on the training sample set to obtain an image segmentation model; and the blood vessel image to be segmented is input into the image segmentation model for image segmentation. In this way, blood vessel image segmentation is performed with a hybrid deep learning network that emphasizes the overall characteristics of the image, which effectively improves vessel segmentation precision and robustness.
Third embodiment:
The present embodiment provides an electronic device, as shown in FIG. 9, which includes a processor 901, a memory 902, and a communication bus 903, where: the communication bus 903 is used for realizing connection communication between the processor 901 and the memory 902; the processor 901 is configured to execute one or more computer programs stored in the memory 902 to implement at least one step of the image segmentation method in the first embodiment.
The present embodiments also provide a computer-readable storage medium including volatile or non-volatile, removable or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, computer program modules or other data. Computer-readable storage media include, but are not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other Memory technology, CD-ROM (Compact disk Read-Only Memory), Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
The computer-readable storage medium in this embodiment may be used for storing one or more computer programs, and the stored one or more computer programs may be executed by a processor to implement at least one step of the method in the first embodiment.
The present embodiment also provides a computer program, which can be distributed on a computer readable medium and executed by a computing device to implement at least one step of the method in the first embodiment; and in some cases at least one of the steps shown or described may be performed in an order different than that described in the embodiments above.
The present embodiments also provide a computer program product comprising a computer readable means on which a computer program as shown above is stored. The computer readable means in this embodiment may include a computer readable storage medium as shown above.
It will be apparent to those skilled in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software (which may be implemented in computer program code executable by a computing device), firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
In addition, communication media typically embodies computer readable instructions, data structures, computer program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to one of ordinary skill in the art. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing is a more detailed description of embodiments of the present invention, and the present invention is not to be considered limited to such descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. An image segmentation method, comprising:
constructing a training sample set based on the target blood vessel image data set and the corresponding label set; the target blood vessel image data set comprises a plurality of blood vessel image samples, and the label set comprises classification labels corresponding to the blood vessel image samples;
training a preset hybrid deep learning network by adopting the training sample set to obtain an image segmentation model; wherein the hybrid deep learning network comprises a first fully convolutional neural network and a second fully convolutional neural network, the first fully convolutional neural network performs, in its deconvolution process, two up-sampling operations with a step size of 2 and one up-sampling operation with a step size of 8, the convolutional layers of the second fully convolutional neural network form a U-shaped structure, and the second fully convolutional neural network performs four up-sampling operations and four down-sampling operations, each with a step size of 2;
and inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
2. The image segmentation method of claim 1, wherein before constructing the training sample set based on the target vessel image dataset and the corresponding label set, further comprising:
obtaining effective blood vessel image samples with image quality meeting preset quality requirements from a limited number of blood vessel image samples;
and performing data enhancement processing on the effective blood vessel image samples to construct and obtain the target blood vessel image data set with the sample number meeting the preset number requirement.
3. The image segmentation method of claim 1, wherein the training a preset hybrid deep learning network with the training sample set to obtain an image segmentation model comprises:
setting the initial learning rate to 0.0001 and the mini-batch size to 2, and iteratively training the preset hybrid deep learning network on the training sample set according to a stochastic gradient descent algorithm;
and when the loss function value obtained by iterative training converges to a preset function value, determining the network model obtained by current iterative training as the trained image segmentation model.
4. The image segmentation method according to any one of claims 1 to 3, wherein before inputting the blood vessel image to be segmented into the image segmentation model for image segmentation, the method further comprises:
inputting a preset test sample set into the image segmentation model to obtain a classification label of test output;
performing correlation calculation on the classification label output by the test and the classification label marked by the test sample set;
and when the correlation degree is greater than a preset correlation degree threshold value, determining that the image segmentation model is effective, and then executing the step of inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
5. An image segmentation apparatus, comprising:
the construction module is used for constructing a training sample set based on the target blood vessel image data set and the corresponding label set; the target blood vessel image data set comprises a plurality of blood vessel image samples, and the label set comprises classification labels corresponding to the blood vessel image samples;
the training module is used for training a preset hybrid deep learning network by adopting the training sample set to obtain an image segmentation model; the hybrid deep learning network comprises a first fully convolutional neural network and a second fully convolutional neural network, wherein the first fully convolutional neural network performs, in its deconvolution process, two up-sampling operations with a step size of 2 and one up-sampling operation with a step size of 8, the convolutional layers of the second fully convolutional neural network form a U-shaped structure, and the second fully convolutional neural network performs four up-sampling operations and four down-sampling operations, each with a step size of 2;
and the segmentation module is used for inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
6. The image segmentation apparatus of claim 5, wherein the construction module is further configured to: before a training sample set is constructed based on a target blood vessel image data set and a corresponding label set, obtain effective blood vessel image samples with image quality meeting preset quality requirements from a limited number of blood vessel image samples; and perform data enhancement processing on the effective blood vessel image samples to construct and obtain the target blood vessel image data set with the sample number meeting the preset number requirement.
7. The image segmentation apparatus of claim 5, wherein the training module is specifically configured to: set the initial learning rate to 0.0001 and the mini-batch size to 2, and iteratively train the preset hybrid deep learning network on the training sample set according to a stochastic gradient descent algorithm; and when the loss function value obtained by iterative training converges to a preset function value, determine the network model obtained by the current iterative training as the trained image segmentation model.
8. The image segmentation apparatus according to any one of claims 5 to 7, further comprising: a test module;
the test module is used for inputting a preset test sample set into the image segmentation model before inputting the blood vessel image to be segmented into the image segmentation model for image segmentation so as to obtain a classification label of test output; performing correlation calculation on the classification label output by the test and the classification label marked by the test sample set; when the correlation degree is larger than a preset correlation degree threshold value, determining that the image segmentation model is valid;
the segmentation module is specifically configured to: and when the image segmentation model is effective, inputting the blood vessel image to be segmented into the image segmentation model for image segmentation.
9. An electronic device, comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the image segmentation method according to any one of claims 1 to 4.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the image segmentation method according to any one of claims 1 to 4.
CN202011325510.8A 2020-10-27 2020-11-24 Image segmentation method, device and computer readable storage medium Active CN112419271B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/124169 WO2022087853A1 (en) 2020-10-27 2020-10-27 Image segmentation method and apparatus, and computer-readable storage medium
CNPCT/CN2020/124169 2020-10-27

Publications (2)

Publication Number Publication Date
CN112419271A true CN112419271A (en) 2021-02-26
CN112419271B CN112419271B (en) 2023-12-01

Family

ID=74777028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011325510.8A Active CN112419271B (en) 2020-10-27 2020-11-24 Image segmentation method, device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112419271B (en)
WO (1) WO2022087853A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190934A (en) * 2021-06-10 2021-07-30 北京三一智造科技有限公司 Optimization method and device of pick barrel drill and electronic equipment
CN114818839A (en) * 2022-07-01 2022-07-29 之江实验室 Deep learning-based optical fiber sensing underwater acoustic signal identification method and device
CN115170912A (en) * 2022-09-08 2022-10-11 北京鹰瞳科技发展股份有限公司 Method for training image processing model, method for generating image and related product

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN117237260A (en) * 2022-06-02 2023-12-15 北京阅影科技有限公司 Training method of image processing model, image processing method and device
CN115359057B (en) * 2022-10-20 2023-03-28 中国科学院自动化研究所 Deep learning-based freezing electron microscope particle selection method and device and electronic equipment
CN115631301B (en) * 2022-10-24 2023-07-28 东华理工大学 Soil-stone mixture image three-dimensional reconstruction method based on improved full convolution neural network
CN116503607B (en) * 2023-06-28 2023-09-19 天津市中西医结合医院(天津市南开医院) CT image segmentation method and system based on deep learning

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109584254A (en) * 2019-01-07 2019-04-05 浙江大学 A kind of heart left ventricle's dividing method based on the full convolutional neural networks of deep layer
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
US20190244357A1 (en) * 2018-02-07 2019-08-08 International Business Machines Corporation System for Segmentation of Anatomical Structures in Cardiac CTA Using Fully Convolutional Neural Networks
CN110660046A (en) * 2019-08-30 2020-01-07 太原科技大学 Industrial product defect image classification method based on lightweight deep neural network
CN111028217A (en) * 2019-12-10 2020-04-17 南京航空航天大学 Image crack segmentation method based on full convolution neural network
CN111127447A (en) * 2019-12-26 2020-05-08 河南工业大学 Blood vessel segmentation network and method based on generative confrontation network

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP6965343B2 (en) * 2016-10-31 2021-11-10 コニカ ミノルタ ラボラトリー ユー.エス.エー.,インコーポレイテッド Image segmentation methods and systems with control feedback
CN107016681B (en) * 2017-03-29 2023-08-25 浙江师范大学 Brain MRI tumor segmentation method based on full convolution network
CN108876805B (en) * 2018-06-20 2021-07-27 长安大学 End-to-end unsupervised scene passable area cognition and understanding method
CN109886307A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image detecting method and system based on convolutional neural networks
CN111583262A (en) * 2020-04-23 2020-08-25 北京小白世纪网络科技有限公司 Blood vessel segmentation method and system

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20190244357A1 (en) * 2018-02-07 2019-08-08 International Business Machines Corporation System for Segmentation of Anatomical Structures in Cardiac CTA Using Fully Convolutional Neural Networks
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
CN109584254A (en) * 2019-01-07 2019-04-05 浙江大学 A kind of heart left ventricle's dividing method based on the full convolutional neural networks of deep layer
CN110660046A (en) * 2019-08-30 2020-01-07 太原科技大学 Industrial product defect image classification method based on lightweight deep neural network
CN111028217A (en) * 2019-12-10 2020-04-17 南京航空航天大学 Image crack segmentation method based on full convolution neural network
CN111127447A (en) * 2019-12-26 2020-05-08 河南工业大学 Blood vessel segmentation network and method based on generative confrontation network

Non-Patent Citations (1)

Title
WANG Na; FU Yinghua; JIANG Nianping: "Supervised retinal vessel segmentation based on a fully convolutional neural network", 软件导刊 (Software Guide) *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN113190934A (en) * 2021-06-10 2021-07-30 北京三一智造科技有限公司 Optimization method and device of pick barrel drill and electronic equipment
CN114818839A (en) * 2022-07-01 2022-07-29 之江实验室 Deep learning-based optical fiber sensing underwater acoustic signal identification method and device
CN114818839B (en) * 2022-07-01 2022-09-16 之江实验室 Deep learning-based optical fiber sensing underwater acoustic signal identification method and device
CN115170912A (en) * 2022-09-08 2022-10-11 北京鹰瞳科技发展股份有限公司 Method for training image processing model, method for generating image and related product

Also Published As

Publication number Publication date
WO2022087853A1 (en) 2022-05-05
CN112419271B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN112419271B (en) Image segmentation method, device and computer readable storage medium
CN110706246B (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
CN111899245B (en) Image segmentation method, image segmentation device, model training method, model training device, electronic equipment and storage medium
KR101856584B1 (en) Method and device for identifying traffic signs
CN106940816B (en) CT image pulmonary nodule detection system based on 3D full convolution neural network
CN110599500B (en) Tumor region segmentation method and system of liver CT image based on cascaded full convolution network
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN110070531B (en) Model training method for detecting fundus picture, and fundus picture detection method and device
CN110599505A (en) Organ image segmentation method and device, electronic equipment and storage medium
WO2021136368A1 (en) Method and apparatus for automatically detecting pectoralis major region in molybdenum target image
CN111179295B (en) Improved two-dimensional Otsu threshold image segmentation method and system
CN112602114A (en) Image processing method and device, neural network and training method, and storage medium
CN115965750B (en) Vascular reconstruction method, vascular reconstruction device, vascular reconstruction computer device, and vascular reconstruction program
CN113256670A (en) Image processing method and device, and network model training method and device
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
CN114332133A (en) New coronary pneumonia CT image infected area segmentation method and system based on improved CE-Net
CN114758137A (en) Ultrasonic image segmentation method and device and computer readable storage medium
CN113920109A (en) Medical image recognition model training method, recognition method, device and equipment
Choudhary et al. Mathematical modeling and simulation of multi-focus image fusion techniques using the effect of image enhancement criteria: A systematic review and performance evaluation
Bhuvaneswari et al. Contrast enhancement of retinal images using green plan masking and whale optimization algorithm
CN116778486A (en) Point cloud segmentation method, device, equipment and medium of angiography image
CN116129417A (en) Digital instrument reading detection method based on low-quality image
CN113269788B (en) Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image
CN112862785B (en) CTA image data identification method, device and storage medium
CN112862786B (en) CTA image data processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant