CN112102259A - Image segmentation algorithm based on boundary-guided deep learning


Info

Publication number
CN112102259A
Authority
CN
China
Prior art keywords
boundary
convolution
network
target
convolution module
Prior art date
2020-08-27
Legal status
Pending
Application number
CN202010875313.7A
Other languages
Chinese (zh)
Inventor
王雷
沈梅晓
常倩
施策
陈浩
Current Assignee
Eye Hospital of Wenzhou Medical University
Original Assignee
Eye Hospital of Wenzhou Medical University
Priority date
2020-08-27
Filing date
2020-08-27
Publication date
2020-12-18
Application filed by Eye Hospital of Wenzhou Medical University
Priority to CN202010875313.7A
Publication of CN112102259A

Classifications

    • G06T7/0012 Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06N3/045 Combinations of networks (G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08 Learning methods (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks)
    • G06T7/12 Edge-based segmentation (G06T7/10 Segmentation; Edge detection)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30041 Eye; Retina; Ophthalmic (G06T2207/30004 Biomedical image processing)

Abstract

The invention relates to an image segmentation algorithm based on boundary-guided deep learning. By designing three different convolution modules and two different sub-networks, it improves the classical U-Net network, thereby constructing an image segmentation network with boundary-guidance characteristics that extracts the targets of interest and their boundaries from an image simultaneously.

Description

Image segmentation algorithm based on boundary-guided deep learning
Technical Field
The invention relates to the technical field of image segmentation and processing, and in particular to an image segmentation algorithm based on boundary-guided deep learning.
Background
Image segmentation is a processing technique for detecting and extracting targets of interest from an image: by analyzing the gray-level distribution, tissue contrast, and inter-tissue correlations of different image regions, the targets of interest are segmented effectively. The technique can assist tasks such as image understanding and analysis, lesion detection and localization, and morphological measurement of lesion regions, and therefore has great clinical diagnostic value and academic research significance. To perform image segmentation accurately, a large number of segmentation algorithms have been developed; they fall roughly into unsupervised and supervised algorithms. (a) Unsupervised segmentation algorithms detect and extract targets of interest mainly through conventional computer-vision techniques such as morphological operations or thresholding, based on the gray-level distribution and tissue contrast of the image. These algorithms are generally computationally efficient and can segment well-imaged data effectively; however, they cannot effectively process images with poor tissue contrast, severe imaging artifacts, or noise. In particular, they often involve many empirically assigned parameters and cannot process images acquired under different imaging conditions at scale, which limits their practical value. (b) Supervised segmentation algorithms generally require manual intervention and segment regions of interest by selecting appropriate feature information or by manual labeling. Because they exploit feature information, these algorithms can reduce the influence of adverse imaging conditions such as artifacts or noise to some extent and thus outperform unsupervised algorithms; however, acquiring feature information requires substantial background knowledge, and the acquired features may correlate only weakly with the target of interest, degrading segmentation performance. To detect the diverse feature information of an image automatically, supervised segmentation algorithms based on deep learning have received wide attention in recent years. Among them, U-Net is a popular segmentation network that is widely applied to the segmentation of various medical images and achieves high segmentation performance. However, such a U-shaped segmentation network cannot effectively highlight the feature information closely related to the target of interest and easily loses key image information, so its segmentation accuracy in the boundary region of the target is relatively low. To improve the performance of the U-Net network in boundary regions, suitable convolution modules and a suitable network architecture must be designed to segment the target and its boundary accurately and simultaneously.
As a classical segmentation network, U-Net achieves good segmentation performance but has limited boundary-detection accuracy in some cases. This is mainly because (a) the U-Net network decodes the encoded convolution features indiscriminately, which not only makes it susceptible to interference from irrelevant redundant features but also gives the key target features low weight during segmentation; and (b) the network downsamples the image many times, greatly reducing its resolution, losing a large amount of structural texture, and blurring the target boundary. These deficiencies prevent the U-Net network from handling the boundary region of the target effectively, especially when the input image has poor tissue contrast, severe imaging artifacts, or noise. To overcome them, a suitable deep-learning network must be designed that processes the feature information between adjacent convolution layers so as to highlight the boundary information closely related to the target of interest, improve the segmentation network's detection sensitivity to the target boundary, and segment the target and its boundary accurately and simultaneously.
Disclosure of Invention
To solve the technical defects of the prior art, in particular the segmentation of images with low tissue contrast, severe imaging artifacts, or noise, the invention provides an image segmentation algorithm based on boundary-guided deep learning.
The technical solution adopted by the invention is as follows: an image segmentation algorithm based on boundary-guided deep learning, comprising the following steps:
(1) following a traditional boundary detection algorithm, construct three different convolution modules, namely an encoding convolution module, a decoding convolution module, and a boundary-aware convolution module, to realize the detection of the target convolution features and boundary convolution features in an input image;
(2) integrate the three different convolution modules into a classical U-shaped network to construct two different network branches, namely a target extraction sub-network and a boundary detection sub-network, and use the two sub-networks to accurately extract the target of interest and its boundary;
(3) effectively integrate the convolution features of the two sub-networks to realize image segmentation guided by the boundary convolution features.
Step (1) is specifically as follows: based on a traditional boundary detection algorithm, the encoding convolution module and the decoding convolution module are integrated into a classical U-Net network to replace the single convolution module that network uses during image encoding and feature decoding, thereby constructing an object extraction sub-network (OES); this sub-network keeps the same architecture as U-Net and realizes the separate extraction of the target and boundary convolution features and the segmentation of the target of interest; the boundary convolution features acquired from the target extraction sub-network are then input into the boundary-aware convolution module, and a new boundary detection sub-network (EDS) is constructed, likewise as a U-shaped architecture, to extract the boundary corresponding to the target of interest.
The boundary convolution features acquired from the target extraction sub-network are input into the boundary-aware convolution module, and the new boundary detection sub-network is constructed, likewise as a U-shaped architecture, as follows: following the structure of the decoding convolution module in the U-Net network, the different encoding convolution features are first concatenated, and the concatenated result is then processed with several convolution layers and a difference operation, yielding the decoding convolution module for extracting the target region and the boundary-aware convolution module for detecting the target boundary.
Step (2) is specifically as follows: integrating the encoding convolution module and the decoding convolution module into a U-shaped network structure constructs the target extraction sub-network, which extracts the specified target region; integrating the encoding convolution module and the boundary-aware convolution module into a U-shaped network structure constructs the boundary detection sub-network, which detects the boundary corresponding to the target region. The two sub-networks decode the target and boundary convolution features respectively, so that the input image is segmented into the target of interest and its corresponding boundary.
Step (3) is specifically as follows: four kinds of feature information, namely the target and boundary convolution features detected by the encoding convolution module, the decoded convolution features output by the boundary-aware convolution module, and the output of the upsampling operation, are input into the decoding convolution module; the different kinds of feature information are integrated through a channel-wise concatenation, after which the feature information is decoded using three identical convolution layers (Conv3 × 3 → BN → ReLU), one element-wise subtraction operation, and one channel-wise concatenation. Performing this integration and decoding between the outputs of the two sub-networks layer by layer constructs the boundary-guided image segmentation network.
The beneficial effects of the invention are as follows: the invention provides an image segmentation algorithm based on boundary-guided deep learning that improves the classical U-Net network by designing three different convolution modules and two different sub-networks, thereby constructing an image segmentation network with boundary-guidance characteristics that extracts the targets of interest and their boundaries in an image simultaneously. It is applicable to image segmentation tasks, attains relatively high segmentation accuracy, and provides important theoretical support for target spatial localization, lesion detection, and morphological quantification.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is the boundary-guided image segmentation network designed in the present invention.
Fig. 3 shows the three different convolution modules designed on the basis of the traditional boundary detection algorithm; from top to bottom they are the encoding convolution module, the decoding convolution module, and the boundary-aware convolution module.
FIG. 4 shows the results of the invention after segmenting corneal OCT images: columns 1-2 are the original images to be segmented and the corresponding manual labels; columns 3-6 are the segmentation results of U-Net, M-Net, Deeplabv3, and the proposed network, respectively.
FIG. 5 shows the target boundaries obtained by the proposed segmentation network, wherein columns 1 and 3 are the manually labeled target boundaries and columns 2 and 4 are the target boundaries obtained by the network.
Detailed Description
The design idea is as follows:
design of convolution module (1) the existing deep learning-based image segmentation network (such as U-Net) generally uses a simpler convolution module to realize extraction of various convolution features in an input image. These convolution modules are usually composed of two identical convolution layers (convolutional layers), each of which is composed of three basic operation modules in the deep learning domain, namely, 3 × 3convolution operation (3 × 3convolutional operation, Conv3 × 3), batch regularization (BN) and modified linear activation function (ReLU). The sequential superposition of these basic operations (i.e., Conv3 × 3 → BN → ReLU) makes the convolution module generally have relatively limited feature detectability, and cannot effectively highlight the convolution features (such as features of object boundary or position) closely related to the object of interest, thereby leading to a certain performance defect in image segmentation of the deep learning network. In order to enhance key feature information in an image, a convolution module is reasonably designed to effectively distinguish different convolution features in the image, so that the different convolution features have different action weights in the deep learning process, and further the detection sensitivity and the segmentation accuracy of an interest target are improved. Therefore, the invention is inspired by the traditional boundary detection algorithm (namely the difference between an image and the smooth filtering result thereof can highlight various boundary textures in the image to a certain extent), and three different convolution modules are introduced to carry out the coding detection and decoding extraction of different convolution characteristics. The convolution modules capture the form change of an interest target on different convolution characteristic graphs (feature maps) by calculating the information difference of the corresponding positions of the output results of two adjacent convolution layers, so that the detection sensitivity of a segmentation network to the target boundary is improved, the boundary information is ensured to have larger action weight in the segmentation network, and the segmentation performance of the image is improved;
the convolution module commonly used at present is simply formed by overlapping two identical convolution layers (i.e., Conv3 × 3 → BN → ReLU), and there is no other operation between the output results of the convolution layers. In order to detect the information difference between the output results of different convolution layers in the convolution module, the invention designs three different convolution modules by using the traditional boundary detection algorithm. The traditional boundary detection algorithm can be represented as follows:
E = I − G(σ) ⊗ I

where I denotes the input grayscale image, ⊗ denotes the convolution operator, and G(σ) denotes a Gaussian kernel with standard deviation σ. E is the difference image between the original image and its Gaussian-filtered result; it highlights the various boundary structures more clearly than the original image does, and this information is usually closely related to the target boundary, so it can provide key guidance for segmenting the target region.
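For reference, a small NumPy/SciPy sketch of this difference operation; sigma = 2.0 is an arbitrary illustrative value, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def boundary_map(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Traditional boundary detection: E = I - G(sigma) convolved with I.
    The difference between an image and its Gaussian-smoothed version
    highlights boundary textures."""
    image = image.astype(np.float32)
    return image - gaussian_filter(image, sigma=sigma)
```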
On the basis of the traditional boundary detection algorithm, the conventional convolution module can be modified so that the various boundary textures in the image are detected and enhanced by computing the change in feature information before and after a designated convolution layer. The modules to be designed are shown in Fig. 3: (a) the encoding convolution module encodes the various information of the input image, converting pixel gray levels into convolution features; (b) the decoding convolution module decodes the various convolution features, removing redundant feature information and selecting and enhancing the key features; (c) the boundary-aware convolution module serves the decoding of the boundary convolution features and functions similarly to the decoding convolution module. Through element-wise subtraction these modules convert the image information into target convolution features and boundary convolution features respectively, resolving and enhancing the different kinds of feature information.
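A hedged Keras reading of the encoding convolution module, reusing conv_unit from the sketch above: two adjacent layers whose element-wise difference plays the role of E at the feature level. The exact wiring inside the module of Fig. 3 may differ from this sketch.

```python
from tensorflow.keras import layers

def encoding_module(x, filters):
    """Sketch of the encoding convolution module: the second layer's output is
    taken as the target features, and the element-wise difference between the
    outputs of the two adjacent layers is taken as the boundary features."""
    f1 = conv_unit(x, filters)    # first Conv3x3 -> BN -> ReLU layer
    f2 = conv_unit(f1, filters)   # second, adjacent layer
    boundary = layers.Subtract()([f1, f2])   # pixel-wise difference, as in E
    return f2, boundary                      # (target features, boundary features)
```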
(2) Design of the network branches based on the convolution modules. The classical U-Net network is an end-to-end deep architecture with a simple structure; it is widely used in the segmentation of various medical images and attains high segmentation performance. However, such networks are not sensitive enough to some important features closely related to the target of interest, because (a) the various convolution features are processed indiscriminately during image segmentation, and (b) the repeated use of image downsampling (MaxPooling2 × 2) loses a large amount of texture detail and spatial-position information, leaving the target boundary region with low segmentation accuracy. To overcome these two defects, the three convolution modules introduced above are placed into a U-shaped network structure, and two different U-shaped network branches are designed to extract the target of interest and its corresponding boundary respectively. Both sub-networks use a U-shaped structure and thus inherit the advantages of the U-Net network to a certain extent. Moreover, because the target and its boundary are strongly related anatomically, the features of different convolution layers of the two sub-networks can be integrated, so that target segmentation and boundary detection promote each other and jointly improve the extraction of the target region.
the encoding convolution module and the decoding convolution module are integrated into a classical U-Net network and used for replacing a single convolution module used by the network in the image encoding and feature decoding processes, so that an object extraction sub-network (OES) is constructed. The network and the U-Net are kept consistent on the network architecture, and the respective extraction of target and boundary convolution characteristics and the segmentation of an interest target are mainly realized. In order to overcome the defects of U-Net, the boundary convolution characteristics acquired from the target extraction sub-network are input into a boundary awareness convolution module, a new boundary detection sub-network (EDS) is constructed in the form of a U-type network architecture, and the extraction of the corresponding boundary of the interest target is executed. Since the target extraction sub-network can simultaneously acquire the target and boundary convolution features, the two sub-networks share one image information encoding operation but use different feature decoding operations.
The advantages of this design strategy mainly include: (a) the input image information is converted into different convolution features that carry different weights in image segmentation, which improves the segmentation network's sensitivity to the target of interest and effectively raises the training efficiency and segmentation performance of the model; (b) the boundary convolution features are generally closely related to the boundary of the target of interest, and the associated maps enhance the target's boundary features to a certain extent and reduce the segmentation error of the boundary region; (c) segmenting the target of interest and its boundary simultaneously partially compensates for the information lost through repeated image downsampling and reduces the interference of redundant, irrelevant features with segmentation performance.
(3) The boundary-guided segmentation network. The two designed network branches extract the target of interest and its corresponding boundary respectively. In view of the anatomical relation between the target of interest and its boundary, combining the corresponding convolution features of the two branches improves segmentation accuracy to a certain extent. To this end, the decoding features of corresponding convolution levels in the two branches are integrated by the decoding convolution module, which removes redundant convolution features and enhances the key target features through channel-wise concatenation and element-wise subtraction. Applying the decoding convolution module layer by layer converts the input image into a probability map of the same size; thresholding this map with a suitable probability value yields the required segmentation result. To obtain the probability map of an input image, the segmentation network must first be trained on labeled images with a suitable cost function (loss function) and optimization algorithm. After training is finished, the trained model can be applied to segment the specified target of interest rapidly and accurately in large numbers of images.
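The patent leaves the cost function, optimizer, and threshold unspecified ("suitable"). The sketch below shows the generic Keras train-then-threshold pattern with placeholder choices (Adam, binary cross-entropy, a 0.5 threshold) and a deliberately trivial stand-in model so the snippet runs end to end; the real model would be the boundary-guided network described here.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Trivial stand-in model; the real network is the boundary-guided architecture.
inp = layers.Input(shape=(256, 256, 1))
hid = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
out = layers.Conv2D(1, 1, activation="sigmoid")(hid)   # per-pixel probability map
model = Model(inp, out)

model.compile(optimizer="adam", loss="binary_crossentropy")  # placeholder choices

images = np.random.rand(8, 256, 256, 1).astype("float32")    # placeholder data
masks = (np.random.rand(8, 256, 256, 1) > 0.5).astype("float32")
model.fit(images, masks, batch_size=4, epochs=1)

prob_map = model.predict(images)                  # same spatial size as the input
segmentation = (prob_map > 0.5).astype("uint8")   # threshold the probability map
```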
In view of the relation between the target of interest and its boundary, the convolution features of the target extraction sub-network and the boundary detection sub-network are integrated and the target is decoded layer by layer. Specifically, four kinds of feature information, namely the target and boundary convolution features detected by the encoding convolution module, the decoded convolution features output by the boundary-aware convolution module, and the output of the upsampling operation (UpSampling2 × 2), are input into the decoding convolution module; the different kinds of feature information are integrated through a channel-wise concatenation and then decoded using three identical convolution layers (Conv3 × 3 → BN → ReLU), one element-wise subtraction, and one channel-wise concatenation. Performing this integration and decoding between the outputs of the two sub-networks layer by layer constructs the boundary-guided image segmentation network.
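One plausible Keras wiring of this decoding convolution module, reusing conv_unit from the earlier sketch; exactly where the element-wise subtraction and the final concatenation sit relative to the three convolution layers is an assumption about Fig. 3.

```python
from tensorflow.keras import layers

def decoding_module(target_skip, boundary_skip, boundary_decoded, upsampled, filters):
    """Sketch of the decoding convolution module: integrate the four feature
    inputs by channel-wise concatenation, decode with three identical
    Conv3x3 -> BN -> ReLU layers, apply one element-wise subtraction, and
    concatenate the results channel-wise."""
    x = layers.Concatenate()([target_skip, boundary_skip,
                              boundary_decoded, upsampled])
    d1 = conv_unit(x, filters)
    d2 = conv_unit(d1, filters)
    d3 = conv_unit(d2, filters)
    diff = layers.Subtract()([d2, d3])        # element-wise subtraction
    return layers.Concatenate()([d3, diff])   # final channel-wise concatenation
```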
The boundary-guided segmentation network is developed from the U-Net network but has unique characteristics in its architecture: (a) inspired by the traditional boundary detection algorithm, it uses element-wise subtraction extensively, which effectively raises the segmentation network's detection sensitivity to the target boundary and improves segmentation accuracy in the boundary region; (b) it uses the classical U-shaped structure several times, preserving that structure's advantages while improving the detection of the target of interest and its boundary; (c) the two sub-networks share one encoding pass that converts the image information into convolution features, and their decoding results are integrated, so that target extraction and boundary detection promote each other and jointly improve the accurate segmentation of the target region.
The boundary-guided image segmentation deep-learning network is further described below with reference to the accompanying drawings.
referring to fig. 1, the invention relates to an image segmentation deep learning network based on boundary guidance, comprising the following steps:
step 1, constructing a convolution module with boundary sensitivity characteristics based on a traditional boundary detection algorithm, and realizing coding and decoding processing of target and boundary convolution characteristics.
(1a) In the classical U-Net network, the encoding and decoding of image information are performed simply and sequentially by a convolution module formed by stacking two convolution layers; the module performs no comparative evaluation of the information difference between the outputs of the two layers, so the segmentation network cannot strike a proper balance between removing redundant features and enhancing target information, which causes relatively large boundary segmentation errors. To improve the detection performance of the convolution module, an element-wise subtraction is applied to the outputs of two adjacent convolution layers to detect how the structural texture changes across different feature maps. This process resembles the traditional boundary detection algorithm: it detects a large amount of feature information closely related to the target boundary and can therefore be used to guide the segmentation of the target region. With the subtraction operation introduced and the outputs of the different convolution layers properly integrated, the desired encoding convolution module is obtained, realizing the detection of the target and boundary convolution features.
(1b) To process the target and boundary convolution features detected by the encoding convolution module effectively, two further convolution modules (namely the decoding convolution module and the boundary-aware convolution module) are designed for decoding the two kinds of convolution features respectively. Following the structure of the decoding convolution module in the U-Net network, the different encoding convolution features are concatenated, and the concatenated result is then processed with several convolution layers and a difference operation, yielding the decoding convolution module for extracting the target region and the boundary-aware convolution module for detecting the target boundary.
Step 2, network branch design based on the convolution modules
Integrating the encoding convolution module and the decoding convolution module into a U-shaped network structure constructs the object extraction sub-network (OES), which extracts the specified target region. Integrating the encoding convolution module and the boundary-aware convolution module into a U-shaped network structure constructs the edge detection sub-network (EDS), which detects the boundary corresponding to the target region. The two sub-networks decode the target and boundary convolution features respectively, segmenting the input image into the target of interest and its corresponding boundary.
Step 3, the boundary-guided segmentation network
Because the encoding convolution module detects the target and boundary convolution features of the input image simultaneously, the two designed sub-networks can be integrated to share one image-encoding pass. At the same time, because the target region and its boundary are strongly related anatomically, the decoding outputs of the two sub-networks can be integrated effectively, so that target-region extraction and target-boundary detection promote each other and jointly improve the segmentation of the target of interest. The network formed by coupling the two sub-networks is the boundary-guided image segmentation network designed by the invention; a toy assembly of this coupling is sketched below.
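To show how the sketched pieces could couple, here is a toy one-level assembly reusing conv_unit, encoding_module, and decoding_module from the sketches above. The depth, filter counts, and the single-conv stand-in for the boundary-aware module are hypothetical; the network of Fig. 2 has several resolution levels and also supervises a boundary output.

```python
from tensorflow.keras import layers, Model

def build_boundary_guided_net(input_shape=(256, 256, 1), base_filters=16):
    """Toy one-level coupling of the two sub-networks around a shared encoder."""
    inp = layers.Input(shape=input_shape)

    # Shared encoding pass: each encoding module emits target and boundary features.
    t0, b0 = encoding_module(inp, base_filters)
    t1, b1 = encoding_module(layers.MaxPooling2D(2)(t0), base_filters * 2)

    # Boundary detection sub-network (EDS): decode the boundary features
    # (a single conv_unit stands in for the boundary-aware convolution module).
    b_up = layers.UpSampling2D(2)(b1)
    b_dec = conv_unit(layers.Concatenate()([b0, b_up]), base_filters)

    # Object extraction sub-network (OES), guided by the boundary branch.
    t_up = layers.UpSampling2D(2)(t1)
    fused = decoding_module(t0, b0, b_dec, t_up, base_filters)

    out = layers.Conv2D(1, 1, activation="sigmoid")(fused)  # probability map
    return Model(inp, out)
```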
1. Simulation conditions:
the invention carries out segmentation experiments on Keras deep learning software on a Windows 1064 bit Intel (R) Xeon (R) Gold 5120 CPU @2.20GHz 2.19GHz RAM 64GB platform, and experimental data is a cornea OCT image data set which is collected and manually labeled from an eye vision hospital affiliated to Wenzhou medical university.
2. Simulation content and results
The simulation experiment uses the corneal OCT images to train and independently validate the proposed boundary-guided segmentation network, tests the effectiveness of the algorithm, and compares its performance with three existing segmentation networks (U-Net, M-Net, and Deeplabv3); the experimental results are presented in Figs. 4 and 5:
in fig. 4, columns 1-2 are the original image to be segmented and the corresponding manual labeling result thereof, and columns 3-6 are the segmentation results corresponding to U-Net, M-Net, depllabv 3, and the pseudo-design network, respectively. As can be seen from the segmentation result, the segmentation network to be designed has better segmentation precision and smoother target boundary than other segmentation networks.
In Fig. 5, columns 1 and 3 are the manually labeled target boundaries, and columns 2 and 4 are the target boundaries obtained by the proposed network. The results show that the network can extract the target boundary successfully even when the target is small. However, because corneal OCT images have extremely low tissue contrast, the network cannot effectively detect the target boundary in some regions.
Comparing the segmentation results of the four networks shows that the proposed network accurately detects targets of interest of different sizes and has the best segmentation performance, whereas the other three networks cannot accommodate targets of different sizes simultaneously during image segmentation, so their segmentation performance is relatively limited.
In the description of the present invention, it should be noted that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: the connection may be fixed, removable, or integral; mechanical or electrical; and direct, indirect through an intermediate medium, or internal between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
The skilled person should understand that: although the invention has been described in terms of the above specific embodiments, the inventive concept is not limited thereto and any modification applying the inventive concept is intended to be included within the scope of the patent claims.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (5)

1. An image segmentation algorithm based on boundary-guided deep learning, characterized by comprising the following steps:
(1) following a traditional boundary detection algorithm, constructing three different convolution modules, namely an encoding convolution module, a decoding convolution module, and a boundary-aware convolution module, to realize the detection of the target convolution features and boundary convolution features in an input image;
(2) integrating the three different convolution modules into a classical U-Net network to construct two different network branches, namely a target extraction sub-network and a boundary detection sub-network, and using the two sub-networks to accurately extract the target of interest and its boundary;
(3) effectively integrating the convolution features of the two sub-networks to realize image segmentation guided by the boundary convolution features.
2. The image segmentation algorithm based on boundary-guided deep learning according to claim 1, wherein step (1) is specifically: based on a traditional boundary detection algorithm, the encoding convolution module and the decoding convolution module are integrated into a classical U-Net network to replace the single convolution module that network uses during image encoding and feature decoding, thereby constructing an object extraction sub-network (OES); the sub-network keeps the same architecture as U-Net and realizes the separate extraction of the target and boundary convolution features and the segmentation of the target of interest; the boundary convolution features acquired from the target extraction sub-network are input into the boundary-aware convolution module, and a new boundary detection sub-network (EDS) is constructed, likewise as a U-shaped architecture, to extract the boundary corresponding to the target of interest.
3. The image segmentation algorithm based on boundary-guided deep learning according to claim 2, wherein inputting the boundary convolution features acquired from the target extraction sub-network into the boundary-aware convolution module and constructing the new boundary detection sub-network, likewise as a U-shaped architecture, comprises: following the structure of the decoding convolution module in the U-Net network, concatenating the different encoding convolution features, and then processing the concatenated result with several convolution layers and a difference operation, thereby obtaining the decoding convolution module for extracting the target region and the boundary-aware convolution module for detecting the target boundary.
4. The image segmentation algorithm based on boundary-guided deep learning according to claim 1, wherein step (2) is specifically: integrating the encoding convolution module and the decoding convolution module into a U-shaped network structure to construct the target extraction sub-network for extracting the specified target region; integrating the encoding convolution module and the boundary-aware convolution module into a U-shaped network structure to construct the boundary detection sub-network for detecting the boundary corresponding to the target region; and decoding the target and boundary convolution features with the two sub-networks respectively, so as to segment the input image into the target of interest and its corresponding boundary.
5. The image segmentation algorithm based on boundary-guided deep learning according to claim 1, wherein step (3) is specifically: inputting four kinds of feature information, namely the target and boundary convolution features detected by the encoding convolution module, the decoded convolution features output by the boundary-aware convolution module, and the output of the upsampling operation on the input image, into the decoding convolution module; integrating the different kinds of feature information through a channel-wise concatenation; then decoding the feature information using three identical convolution layers (Conv3 × 3 → BN → ReLU), one element-wise subtraction operation, and one channel-wise concatenation; and performing this integration and decoding between the outputs of the two sub-networks layer by layer to construct the boundary-guided image segmentation network.
CN202010875313.7A 2020-08-27 2020-08-27 Image segmentation algorithm based on boundary-guided deep learning Pending CN112102259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010875313.7A CN112102259A (en) 2020-08-27 2020-08-27 Image segmentation algorithm based on boundary-guided deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010875313.7A CN112102259A (en) 2020-08-27 2020-08-27 Image segmentation algorithm based on boundary-guided deep learning

Publications (1)

Publication Number Publication Date
CN112102259A 2020-12-18

Family

ID=73757848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010875313.7A Pending CN112102259A (en) Image segmentation algorithm based on boundary-guided deep learning

Country Status (1)

Country Link
CN (1) CN112102259A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160261A (en) * 2021-03-09 2021-07-23 温州医科大学附属眼视光医院 Boundary enhancement convolution neural network for OCT image corneal layer segmentation
CN113192089A (en) * 2021-04-12 2021-07-30 温州医科大学附属眼视光医院 Bidirectional cross-connected convolutional neural network for image segmentation
CN113587946A (en) * 2021-07-06 2021-11-02 安徽农业大学 Visual navigation system and method for field agricultural machine
CN115082502A (en) * 2022-06-30 2022-09-20 温州医科大学 Image segmentation method based on distance-guided deep learning strategy

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205758A1 (en) * 2016-12-30 2019-07-04 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks
CN110276402A (en) * 2019-06-25 2019-09-24 北京工业大学 A kind of salt body recognition methods based on the enhancing of deep learning semanteme boundary
CN110930416A (en) * 2019-11-25 2020-03-27 宁波大学 MRI image prostate segmentation method based on U-shaped network
CN111008986A (en) * 2019-11-20 2020-04-14 天津大学 Remote sensing image segmentation method based on multitask semi-convolution
CN111340816A (en) * 2020-03-23 2020-06-26 沈阳航空航天大学 Image segmentation method based on double-U-shaped network framework

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205758A1 (en) * 2016-12-30 2019-07-04 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks
CN110276402A (en) * 2019-06-25 2019-09-24 北京工业大学 A kind of salt body recognition methods based on the enhancing of deep learning semanteme boundary
CN111008986A (en) * 2019-11-20 2020-04-14 天津大学 Remote sensing image segmentation method based on multitask semi-convolution
CN110930416A (en) * 2019-11-25 2020-03-27 宁波大学 MRI image prostate segmentation method based on U-shaped network
CN111340816A (en) * 2020-03-23 2020-06-26 沈阳航空航天大学 Image segmentation method based on double-U-shaped network framework

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HAN LIU, LEI WANG: "SDFN: Segmentation-based deep fusion network for thoracic disease classification in chest X-ray images", 《COMPUTERIZED MEDICAL IMAGING AND GRAPHICS》 *
SHUJUN WANG, LEQUAN YU, ET AL.: "Boundary and Entropy-Driven Adversarial Learning for Fundus Image Segmentation", 《MICCAI 2019: MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION》 *
WMULEI: "CornealSegmentation", 《HTTPS://GITHUB.COM/WMULEI/CORNEALSEGMENTATION/BLOB/MASTER/MODEL.PY》 *
XIAO FENG WANG, DE SHUANG HUANG, HUAN XU: "An efficient local Chan–Vese model for image segmentation", 《PATTERN RECOGNITION》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160261A (en) * 2021-03-09 2021-07-23 温州医科大学附属眼视光医院 Boundary enhancement convolution neural network for OCT image corneal layer segmentation
CN113160261B (en) * 2021-03-09 2022-11-18 温州医科大学附属眼视光医院 Boundary enhancement convolution neural network for OCT image corneal layer segmentation
CN113192089A (en) * 2021-04-12 2021-07-30 温州医科大学附属眼视光医院 Bidirectional cross-connected convolutional neural network for image segmentation
CN113192089B (en) * 2021-04-12 2022-07-19 温州医科大学附属眼视光医院 Bidirectional cross-connection convolutional neural network for image segmentation
CN113587946A (en) * 2021-07-06 2021-11-02 安徽农业大学 Visual navigation system and method for field agricultural machine
CN115082502A (en) * 2022-06-30 2022-09-20 温州医科大学 Image segmentation method based on distance-guided deep learning strategy
CN115082502B (en) * 2022-06-30 2024-05-10 温州医科大学 Image segmentation method based on distance guidance deep learning strategy

Similar Documents

Publication Publication Date Title
CN111145170B (en) Medical image segmentation method based on deep learning
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN112102259A (en) Image segmentation algorithm based on boundary guide depth learning
WO2021003821A1 (en) Cell detection method and apparatus for a glomerular pathological section image, and device
CN112001928B (en) Retina blood vessel segmentation method and system
CN112862830B (en) Multi-mode image segmentation method, system, terminal and readable storage medium
US20220383661A1 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN115546605A (en) Training method and device based on image labeling and segmentation model
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN115937158A (en) Stomach cancer focus region segmentation method based on layered attention mechanism
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN113160261B (en) Boundary enhancement convolution neural network for OCT image corneal layer segmentation
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN113192089B (en) Bidirectional cross-connection convolutional neural network for image segmentation
CN114445419A (en) Lung segment segmentation method, device and system based on bronchial topological structure
CN110570417B (en) Pulmonary nodule classification device and image processing equipment
CN116012283B (en) Full-automatic ultrasonic image measurement method, equipment and storage medium
CN116258717B (en) Lesion recognition method, device, apparatus and storage medium
CN116597041B (en) Nuclear magnetic image definition optimization method and system for cerebrovascular diseases and electronic equipment
CN115359082A (en) Boundary enhancement-based OCT image choroid layer structure segmentation method and storage medium
US20230206438A1 (en) Multi arm machine learning models with attention for lesion segmentation
CN113160240A (en) Cyclic hopping deep learning network
Zhang et al. Automated segmentation of skin lesion based on multi-scale feature extraction and attention mechanism
CN117237341A (en) Human body peripheral blood sample detection method and system based on hyperspectral image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2020-12-18