CN112070722A - Fluorescence in situ hybridization cell nucleus segmentation method and system - Google Patents


Info

Publication number
CN112070722A
CN112070722A (application CN202010815601.3A)
Authority
CN
China
Prior art keywords
module
fluorescence
situ hybridization
input end
cell
Prior art date
Legal status
Pending
Application number
CN202010815601.3A
Other languages
Chinese (zh)
Inventor
刘剑飞
Current Assignee
Xiamen Miaoke Code Biotechnology Co., Ltd.
Original Assignee
Xiamen Miaoke Code Biotechnology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Xiamen Miaoke Code Biotechnology Co., Ltd.
Priority to CN202010815601.3A
Publication of CN112070722A
Legal status: Pending

Classifications

    • G06T 7/0012 — Image analysis; biomedical image inspection
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/084 — Learning methods; backpropagation, e.g. using gradient descent
    • G06T 7/11 — Segmentation; region-based segmentation
    • G06T 2207/10064 — Image acquisition modality; fluorescence image
    • G06T 2207/20081 — Special algorithmic details; training, learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20152 — Watershed segmentation
    • G06T 2207/30024 — Cell structures in vitro; tissue sections in vitro

Abstract

The invention discloses a fluorescence in situ hybridization (FISH) cell nucleus segmentation method and system. A trained LinkNet network segments the FISH cell image, extracting high-level semantic features that abstract the image and fusing the corresponding shallow information to capture detail features while the decoder restores the resolution of the feature maps, so that the target cell foreground is extracted more accurately.

Description

Fluorescence in situ hybridization cell nucleus segmentation method and system
Technical Field
The invention relates to the technical field of cell segmentation, and in particular to a fluorescence in situ hybridization cell nucleus segmentation method and system.
Background
Fluorescence In Situ Hybridization (FISH) is a molecular cytogenetic technique that provides reliable imaging biomarkers for the diagnosis of cancer and genetic diseases. Segmentation of FISH cells is a prerequisite for quantitative analysis of these imaging biomarkers. However, adhesion between cells is common in these images, which makes it difficult for many automated segmentation algorithms to accurately segment foreground cells from FISH cell images.
Overcoming cell adhesion so that foreground cells can be accurately segmented from FISH cell images has therefore become an urgent technical problem.
Disclosure of Invention
The invention aims to provide a fluorescence in situ hybridization cell nucleus segmentation method and system for overcoming cell adhesion and thereby accurately segmenting foreground cells from FISH cell images.
In order to achieve the purpose, the invention provides the following scheme:
A fluorescence in situ hybridization cell nucleus segmentation method comprises the following steps:
segmenting the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented fluorescence in situ hybridization cell image;
identifying cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network to obtain a cell center point probability map;
determining, from the segmented fluorescence in situ hybridization cell image and the cell center point probability map, that every independent region of the segmented image containing more than one cell center point is an adhesion region;
and performing secondary segmentation on each adhesion region with a watershed algorithm according to the number and distribution of its cell center points in the cell center point probability map.
Optionally, the trained LinkNet network comprises an encoder and a decoder.
The encoder comprises two convolution layers and three residual modules: the two convolution layers are a first convolution layer and a second convolution layer, the three residual modules are a first residual module, a second residual module and a third residual module, and the first convolution layer, the second convolution layer, the first residual module, the second residual module and the third residual module are connected in sequence.
The decoder comprises three up-sampling modules and three convolution layers: the three up-sampling modules are a first up-sampling module, a second up-sampling module and a third up-sampling module, and the three convolution layers are a third convolution layer, a fourth convolution layer and a fifth convolution layer.
The input end of the first up-sampling module is connected with the output end of the third residual module, and the output end of the first up-sampling module is connected with the first input end of the second up-sampling module; the first up-sampling module performs a convolution operation on the output image of the third residual module. The second input end of the second up-sampling module is connected with the output end of the second residual module, and the output end of the second up-sampling module is connected with the first input end of the third up-sampling module; the second up-sampling module adds the output image of the second residual module to the output image of the first up-sampling module and performs a convolution operation on the sum. The second input end of the third up-sampling module is connected with the output end of the first residual module, and the output end of the third up-sampling module is connected with the input end of the third convolution layer; the third up-sampling module adds the output image of the first residual module to the output image of the second up-sampling module and performs a convolution operation on the sum. The output end of the third convolution layer is connected with the input end of the fourth convolution layer, and the output end of the fourth convolution layer is connected with the input end of the fifth convolution layer.
Optionally, the residual module comprises a sixth convolution layer, a seventh convolution layer and an adder.
The output end of the sixth convolution layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the first input end of the adder, and the input end of the sixth convolution layer is connected with the second input end of the adder; the input end of the sixth convolution layer serves as the input end of the residual module, and the output end of the adder serves as the output end of the residual module.
Optionally, before segmenting the fluorescence in situ hybridization cell image with the trained LinkNet network, the method further comprises:
acquiring a plurality of fluorescence in situ hybridization cell image samples and manually segmenting them, to establish a first training sample set whose training samples comprise the image samples and their manual segmentation results;
initializing the LinkNet network with Xavier initialization;
and, taking the BCE loss function as the target loss function of the LinkNet network, training the initialized network with the first training sample set until the value of the target loss function falls below a first preset threshold, then outputting the trained LinkNet network.
Optionally, before identifying cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network, the method further comprises:
acquiring a plurality of fluorescence in situ hybridization cell image samples and manually marking their cell center points, to establish a second training sample set whose training samples comprise the image samples and the manual marking results;
and, taking the Dice loss function as the target loss function of the U-Net network, training the U-Net network with the second training sample set until the value of the target loss function falls below a second preset threshold, then outputting the trained U-Net network.
A fluorescence in situ hybridization cell nucleus segmentation system comprises:
an image segmentation module, configured to segment the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented fluorescence in situ hybridization cell image;
a center point identification module, configured to identify cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network to obtain a cell center point probability map;
an adhesion region determination module, configured to determine, from the segmented fluorescence in situ hybridization cell image and the cell center point probability map, that every independent region of the segmented image containing more than one cell center point is an adhesion region;
and a secondary segmentation module, configured to perform secondary segmentation on each adhesion region with a watershed algorithm according to the number and distribution of its cell center points in the cell center point probability map.
Optionally, the trained LinkNet network comprises an encoder and a decoder.
The encoder comprises two convolution layers and three residual modules: the two convolution layers are a first convolution layer and a second convolution layer, the three residual modules are a first residual module, a second residual module and a third residual module, and the first convolution layer, the second convolution layer, the first residual module, the second residual module and the third residual module are connected in sequence.
The decoder comprises three up-sampling modules and three convolution layers: the three up-sampling modules are a first up-sampling module, a second up-sampling module and a third up-sampling module, and the three convolution layers are a third convolution layer, a fourth convolution layer and a fifth convolution layer.
The input end of the first up-sampling module is connected with the output end of the third residual module, and the output end of the first up-sampling module is connected with the first input end of the second up-sampling module; the first up-sampling module performs a convolution operation on the output image of the third residual module. The second input end of the second up-sampling module is connected with the output end of the second residual module, and the output end of the second up-sampling module is connected with the first input end of the third up-sampling module; the second up-sampling module adds the output image of the second residual module to the output image of the first up-sampling module and performs a convolution operation on the sum. The second input end of the third up-sampling module is connected with the output end of the first residual module, and the output end of the third up-sampling module is connected with the input end of the third convolution layer; the third up-sampling module adds the output image of the first residual module to the output image of the second up-sampling module and performs a convolution operation on the sum. The output end of the third convolution layer is connected with the input end of the fourth convolution layer, and the output end of the fourth convolution layer is connected with the input end of the fifth convolution layer.
Optionally, the residual module comprises a sixth convolution layer, a seventh convolution layer and an adder.
The output end of the sixth convolution layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the first input end of the adder, and the input end of the sixth convolution layer is connected with the second input end of the adder; the input end of the sixth convolution layer serves as the input end of the residual module, and the output end of the adder serves as the output end of the residual module.
Optionally, the segmentation system further comprises:
a first training sample set establishing module, configured to acquire a plurality of fluorescence in situ hybridization cell image samples, manually segment them, and establish a first training sample set whose training samples comprise the image samples and their manual segmentation results;
a LinkNet network initialization module, configured to initialize the LinkNet network with Xavier initialization;
and a LinkNet network training module, configured to take the BCE loss function as the target loss function of the LinkNet network, train the initialized network with the first training sample set until the value of the target loss function falls below a first preset threshold, and output the trained LinkNet network.
Optionally, the segmentation system further comprises:
a second training sample set establishing module, configured to acquire a plurality of fluorescence in situ hybridization cell image samples, manually mark their cell center points, and establish a second training sample set whose training samples comprise the image samples and the manual marking results;
and a U-Net network training module, configured to take the Dice loss function as the target loss function of the U-Net network, train the U-Net network with the second training sample set until the value of the target loss function falls below a second preset threshold, and output the trained U-Net network.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a fluorescence in situ hybridization cell nucleus segmentation method and system. The segmentation method first segments the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented image; it then identifies cell center points in the segmented image with the trained U-Net network to obtain a cell center point probability map; next, from the segmented image and the probability map, every independent region containing more than one cell center point is determined to be an adhesion region; finally, each adhesion region is segmented a second time with a watershed algorithm according to the number and distribution of its cell center points in the probability map. Segmenting with the trained LinkNet network extracts high-level semantic features that abstract the image and fuses the corresponding shallow information to capture detail features while the decoder restores the resolution of the feature maps, so that the target cell foreground is extracted more accurately.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the fluorescence in situ hybridization cell nucleus segmentation method provided by the present invention;
FIG. 2 is a schematic diagram of the fluorescence in situ hybridization cell nucleus segmentation method provided by the present invention;
FIG. 3 is a diagram of the LinkNet network architecture provided by the present invention;
FIG. 4 is a schematic diagram of the residual modules provided by the present invention, wherein FIG. 4(a) shows the first residual module, FIG. 4(b) the second residual module, and FIG. 4(c) the third residual module;
FIG. 5 is a diagram of the U-Net network architecture provided by the present invention;
FIG. 6 is a schematic diagram of the convolution operation provided by the present invention;
FIG. 7 is a schematic diagram of the deconvolution operation provided by the present invention;
FIG. 8 is a comparison of segmentation results, wherein FIG. 8(a) shows the fluorescence in situ hybridization cell image to be segmented, FIG. 8(b) shows the watershed segmentation result guided by prior information as proposed by the invention, and FIG. 8(c) shows the watershed segmentation result without prior information.
Detailed Description
The invention aims to provide a fluorescence in situ hybridization cell nucleus segmentation method and system for overcoming cell adhesion and thereby accurately segmenting foreground cells from FISH cell images.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
To address the problems of effectively separating foreground cells and of cells adhering to one another, the invention provides a LinkNet-based algorithm. Its network structure is similar to that of U-Net: a symmetric encoder-decoder topology. Unlike the original network, however, a ResNet-style encoder is used to extract high-level semantic features that abstract the image, and the corresponding shallow information is fused to capture detail features when the decoder restores the resolution of the feature maps, so that the required target cell foreground is extracted more accurately. In addition, to resolve the mutual adhesion between cells, the invention trains a deep model to detect cell center points and thereby separate individual cells; center identification locates the core of each cell. Besides using a different loss function, the center point network employs deconvolutions whose weights the network can learn, in order to improve the dense feature response of the output feature map. Finally, feeding the cell center points into the watershed algorithm greatly improves the accuracy of cell segmentation. Experimental results show that the proposed method effectively separates cells that adhere to each other and improves segmentation accuracy.
In order to achieve the purpose, the invention provides the following scheme:
the present invention provides a fluorescent in situ hybridization cell nucleus segmentation method as shown in fig. 1 and 2, which comprises the following steps:
and 101, segmenting the fluorescence in-situ hybrid cell image by using the trained linknet network to obtain the segmented fluorescence in-situ hybrid cell image.
Inspired by the U-Net and ResNet network structures, the invention uses a network topology called LinkNet, which segments the FISH cell images better; the network is shown schematically in FIG. 3.
Each convolution in FIG. 3 is a 3 × 3 convolution operation. By combining the up-sampling operation with the addition operation indicated by the gray arrows, more low-level information is fused when the resolution of the corresponding feature map is restored, so that the network finally outputs a more refined result map. In the encoding part, module 1 is the first residual module, module 2 the second, and module 3 the third; their structures are shown in FIGS. 4(a), 4(b) and 4(c) respectively.
The choice of activation function is also very important for convolutional neural networks. The nonlinear unit chosen in the present invention is the ELU activation function, given by equation (1):
\mathrm{ELU}(x) = \begin{cases} x, & x > 0 \\ \alpha \left( e^{x} - 1 \right), & x \le 0 \end{cases} \qquad (1)
where α = 1 is a constant. The ELU activation function effectively alleviates gradient vanishing during backpropagation.
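As a quick illustration of equation (1), the following minimal PyTorch sketch (the sample inputs are arbitrary) shows the ELU behavior with α = 1:

```python
import torch
import torch.nn as nn

# ELU with alpha = 1, as stated above: identity for x > 0,
# alpha * (exp(x) - 1) for x <= 0.
elu = nn.ELU(alpha=1.0)

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(elu(x))  # tensor([-0.8647, -0.3935,  0.0000,  0.5000,  2.0000])
```

Because negative inputs saturate smoothly toward −α instead of being clamped to zero, gradients keep flowing for negative activations, which is the property relied on here.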
In the LinkNet network structure, the chosen objective is the binary cross-entropy (BCE) loss, defined in equation (2):
L_{\mathrm{BCE}} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_{i} \log x_{i} + (1 - y_{i}) \log (1 - x_{i}) \right] \qquad (2)
where x_i is the finally output probability that pixel i belongs to a cell, y_i is the corresponding label, and N is the total number of pixels in the input images of each batch. Iteratively updating LinkNet with this objective drives the distribution of the network output toward the distribution of the label images.
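A minimal sketch of the objective in equation (2), assuming the network output has already passed through a sigmoid; the batch size and image shapes are illustrative:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()  # averages -[y*log(x) + (1-y)*log(1-x)] over all N pixels

x = torch.rand(4, 1, 128, 128)                   # predicted probability maps
y = (torch.rand(4, 1, 128, 128) > 0.5).float()   # binary label maps
print(bce(x, y))
```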
In convolutional neural networks, good weight initialization is important; otherwise, parts of the network with excessively large weights may dominate and prevent other parts from contributing effectively. In our LinkNet model, Xavier initialization is applied to the parameters of each layer.
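A minimal sketch of layer-wise Xavier initialization in PyTorch; `model` stands for any LinkNet-style module and is a placeholder, not a name from the patent:

```python
import torch.nn as nn

def init_weights(m):
    # Apply Xavier (Glorot) initialization to every layer that has weights.
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# model.apply(init_weights)  # .apply() visits every submodule recursively
```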
The trained LinkNet network comprises an encoder and a decoder.
The encoder comprises two convolution layers and three residual modules: the two convolution layers are a first convolution layer and a second convolution layer, the three residual modules are a first residual module, a second residual module and a third residual module, and the first convolution layer, the second convolution layer, the first residual module, the second residual module and the third residual module are connected in sequence.
The decoder comprises three up-sampling modules and three convolution layers: the three up-sampling modules are a first up-sampling module, a second up-sampling module and a third up-sampling module, and the three convolution layers are a third convolution layer, a fourth convolution layer and a fifth convolution layer.
The input end of the first up-sampling module is connected with the output end of the third residual module, and the output end of the first up-sampling module is connected with the first input end of the second up-sampling module; the first up-sampling module performs a convolution operation on the output image of the third residual module. The second input end of the second up-sampling module is connected with the output end of the second residual module, and the output end of the second up-sampling module is connected with the first input end of the third up-sampling module; the second up-sampling module adds the output image of the second residual module to the output image of the first up-sampling module and performs a convolution operation on the sum. The second input end of the third up-sampling module is connected with the output end of the first residual module, and the output end of the third up-sampling module is connected with the input end of the third convolution layer; the third up-sampling module adds the output image of the first residual module to the output image of the second up-sampling module and performs a convolution operation on the sum. The output end of the third convolution layer is connected with the input end of the fourth convolution layer, and the output end of the fourth convolution layer is connected with the input end of the fifth convolution layer.
The residual module comprises a sixth convolution layer, a seventh convolution layer and an adder.
The output end of the sixth convolution layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the first input end of the adder, and the input end of the sixth convolution layer is connected with the second input end of the adder; the input end of the sixth convolution layer serves as the input end of the residual module, and the output end of the adder serves as the output end of the residual module.
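The wiring above can be summarized in a PyTorch sketch. The module inventory and the additive skip connections follow the description; the channel widths, strides, and the strided transition convolutions between residual modules are assumptions, since the text does not specify them:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, stride=1):
    # 3x3 convolution followed by batch normalization and ELU activation.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1),
                         nn.BatchNorm2d(cout), nn.ELU())

class ResidualModule(nn.Module):
    # Sixth/seventh convolution layers plus an adder: the input is added
    # to the output of two stacked convolutions (identity shortcut).
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(conv_block(c, c), conv_block(c, c))

    def forward(self, x):
        return x + self.body(x)

class UpModule(nn.Module):
    # Add the encoder skip map (if any) to the incoming features, then
    # up-sample with a transposed convolution and refine with a 3x3 conv.
    def __init__(self, cin, cout):
        super().__init__()
        self.up = nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2)
        self.conv = conv_block(cout, cout)

    def forward(self, x, skip=None):
        if skip is not None:
            x = x + skip          # additive fusion of shallow information
        return self.conv(self.up(x))

class LinkNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two convolution layers, then three residual modules.
        self.conv1 = conv_block(1, 32)
        self.conv2 = conv_block(32, 64, stride=2)
        self.res1 = ResidualModule(64)
        self.trans1 = conv_block(64, 128, stride=2)   # assumed transition
        self.res2 = ResidualModule(128)
        self.trans2 = conv_block(128, 256, stride=2)  # assumed transition
        self.res3 = ResidualModule(256)
        # Decoder: three up-sampling modules and three convolution layers.
        self.up1 = UpModule(256, 128)
        self.up2 = UpModule(128, 64)
        self.up3 = UpModule(64, 32)
        self.conv3 = conv_block(32, 32)
        self.conv4 = conv_block(32, 16)
        self.conv5 = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        r1 = self.res1(self.conv2(self.conv1(x)))
        r2 = self.res2(self.trans1(r1))
        r3 = self.res3(self.trans2(r2))
        d = self.up1(r3)          # first up-sampling module: no skip input
        d = self.up2(d, r2)       # adds the second residual module's output
        d = self.up3(d, r1)       # adds the first residual module's output
        return torch.sigmoid(self.conv5(self.conv4(self.conv3(d))))
```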
Before the trained LinkNet network is used for segmentation, it is trained as follows: acquire a plurality of fluorescence in situ hybridization cell image samples and manually segment them to establish a first training sample set whose training samples comprise the image samples and their manual segmentation results; initialize the LinkNet network with Xavier initialization; and, taking the BCE loss function as the target loss function, train the initialized network with the first training sample set until the value of the target loss function falls below a first preset threshold, then output the trained LinkNet network.
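A minimal training-loop sketch of this procedure, assuming the LinkNetSketch model above and a data `loader` yielding (image, mask) pairs; the threshold and learning rate values are illustrative assumptions:

```python
import torch
import torch.nn as nn

def train_linknet(model, loader, threshold=0.05, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()             # target loss function of the network
    while True:                          # iterate until the loss threshold
        for image, mask in loader:
            opt.zero_grad()
            loss = criterion(model(image), mask)
            loss.backward()
            opt.step()
            if loss.item() < threshold:  # first preset threshold reached
                return model
```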
Step 102: identify cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network to obtain a cell center point probability map.
In the cell center identification model, the invention uses the U-Net network structure shown in FIG. 5. The network has an encoding-decoding topology that repeatedly applies 3 × 3 convolution operations during encoding, each followed by a nonlinear unit (ReLU) and a batch normalization operation. At each max-pooling step the number of feature map channels is doubled: since the spatial size of the feature maps shrinks, increasing the number of convolution channels effectively mitigates the loss of information. The decoding stage of U-Net can be regarded as a shape generator whose main function is to recover the required center point region of the target cells. In the decoding stage we do not simply apply linear interpolation to enlarge the feature map; instead, the feature map of the previous layer is up-sampled with a learnable convolution (transposed convolution) whose output is an enlarged and dense activation map, operating in the manner shown in FIG. 7. Meanwhile, to locate the cell center points more precisely, the network combines the high-resolution feature maps from the encoding stage when restoring the feature map size, acquiring detail information from the low-level features, and then refines the output with 3 × 3 convolutions based on this information. The final output of the network is a probability map of the same size as the input image, giving the probability that each pixel belongs to a cell center point.
FIG. 5 shows the U-Net architecture. Each gray box corresponds to a multi-channel feature map, with the number of channels marked at the top of the box and the feature map size at its lower-left edge; the boxes without gray shading represent feature map information copied from the encoding part, and the arrows denote the different operations. The convolution operation is illustrated in FIG. 6 and the deconvolution operation in FIG. 7.
Furthermore, deep convolutional neural networks are often difficult to optimize because of internal covariate shift: the input distribution of each layer keeps changing during training as the parameters of the preceding layers are updated, and these distribution changes are amplified as they propagate through the layers, which hampers the optimization of deeper networks. We therefore apply batch normalization after each activation function, reducing internal covariate shift by normalizing the input distribution of each layer toward a standard Gaussian distribution. We observe that batch normalization plays an important role in optimizing our network structure.
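One decoding step of the described U-Net could look like the sketch below: a learnable transposed convolution enlarges the feature map, the copied high-resolution encoder features are merged in, and 3 × 3 convolutions, each followed by ReLU and batch normalization, refine the result. The channel counts and the use of concatenation as the merge are assumptions based on the standard U-Net:

```python
import torch
import torch.nn as nn

class UNetUpStep(nn.Module):
    def __init__(self, cin, cskip, cout):
        super().__init__()
        # Learnable up-sampling: the transposed convolution doubles the
        # spatial size and produces a dense, enlarged activation map.
        self.up = nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2)
        self.refine = nn.Sequential(
            nn.Conv2d(cout + cskip, cout, 3, padding=1),
            nn.ReLU(), nn.BatchNorm2d(cout),
            nn.Conv2d(cout, cout, 3, padding=1),
            nn.ReLU(), nn.BatchNorm2d(cout))

    def forward(self, x, skip):
        x = self.up(x)
        # Merge the copied high-resolution encoder features, then refine.
        return self.refine(torch.cat([x, skip], dim=1))
```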
In the center point region identification model, the invention uses the Dice loss function to optimize the model, defined in equation (3):
L_{\mathrm{Dice}} = 1 - \frac{2 \sum_{j=1}^{N} P_{j} Y_{j}}{\sum_{j=1}^{N} P_{j} + \sum_{j=1}^{N} Y_{j}} \qquad (3)
where the sum runs over the N pixels of the images fed in each batch, Y_j is the true label of pixel j, and P_j is the corresponding value of the final predicted probability map.
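A minimal sketch of equation (3); the smoothing constant `eps`, which avoids division by zero, is an assumption and does not appear in the text:

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # pred: P_j, the predicted probability map; target: Y_j, the true labels.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = torch.rand(4, 1, 128, 128)
target = (torch.rand(4, 1, 128, 128) > 0.5).float()
print(dice_loss(pred, target))
```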
The training process is as follows: acquire a plurality of fluorescence in situ hybridization cell image samples and manually mark their cell center points to establish a second training sample set whose training samples comprise the image samples and the manual marking results; then, taking the Dice loss function as the target loss function, train the U-Net network with the second training sample set until the value of the target loss function falls below a second preset threshold, and output the trained U-Net network.
Step 103: from the segmented fluorescence in situ hybridization cell image and the cell center point probability map, determine that every independent region of the segmented image containing more than one cell center point is an adhesion region.
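A sketch of this step using SciPy connected-component labeling; `mask` stands for the binarized LinkNet output, `center_prob` for the U-Net probability map, and the 0.5 thresholds are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def find_adhesion_regions(mask, center_prob):
    labels, n = ndimage.label(mask > 0.5)          # independent cell regions
    centers, _ = ndimage.label(center_prob > 0.5)  # detected center blobs
    adhesion_ids = []
    for region_id in range(1, n + 1):
        # Count the distinct center blobs falling inside this region.
        inside = centers[(labels == region_id) & (centers > 0)]
        if np.unique(inside).size > 1:             # more than one nucleus
            adhesion_ids.append(region_id)
    return labels, centers, adhesion_ids
```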
Step 104: perform secondary segmentation on each adhesion region with a watershed algorithm, guided by the number and distribution of its cell center points in the cell center point probability map.
The watershed algorithm is often used to separate regions that are stuck together, but a plain watershed algorithm is easily affected by small image noise: the presence of multiple extreme points within a connected region causes over-segmentation, so its practical usefulness is low. To solve this problem, the invention takes the cell center regions obtained from the center identification model and uses them as prior information to guide a marker-based watershed algorithm that separates mutually adherent cell regions in the segmentation result. The effect is shown in FIG. 8: FIG. 8(a) shows the cell image to be segmented, FIG. 8(b) shows that the proposed post-processing accurately separates the mutually adherent cell regions, and in FIG. 8(c), for lack of prior information, the plain watershed algorithm cannot find stable flooding seeds and produces many over-segmentations.
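A sketch of the marker-guided watershed, reusing the outputs of `find_adhesion_regions` above. Flooding the negated distance transform is a common choice for splitting touching nuclei and is an assumption here; the text specifies only that the center regions serve as prior markers:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_adhesion_regions(labels, centers, adhesion_ids):
    result = labels.copy()
    next_id = labels.max() + 1
    for region_id in adhesion_ids:
        region = labels == region_id
        # One marker per nucleus: center blobs restricted to this region.
        markers, _ = ndimage.label(centers * region)
        dist = ndimage.distance_transform_edt(region)
        pieces = watershed(-dist, markers, mask=region)
        for piece in np.unique(pieces[pieces > 0]):
            result[pieces == piece] = next_id       # relabel each nucleus
            next_id += 1
    return result
```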
The invention also provides a fluorescence in situ hybridization cell nucleus segmentation system, which comprises the following modules.
An image segmentation module, configured to segment the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented fluorescence in situ hybridization cell image.
The trained LinkNet network comprises an encoder and a decoder. The encoder comprises two convolution layers and three residual modules: the two convolution layers are a first convolution layer and a second convolution layer, the three residual modules are a first residual module, a second residual module and a third residual module, and the first convolution layer, the second convolution layer, the first residual module, the second residual module and the third residual module are connected in sequence. The decoder comprises three up-sampling modules and three convolution layers: the three up-sampling modules are a first up-sampling module, a second up-sampling module and a third up-sampling module, and the three convolution layers are a third convolution layer, a fourth convolution layer and a fifth convolution layer. The input end of the first up-sampling module is connected with the output end of the third residual module, and the output end of the first up-sampling module is connected with the first input end of the second up-sampling module; the first up-sampling module performs a convolution operation on the output image of the third residual module. The second input end of the second up-sampling module is connected with the output end of the second residual module, and the output end of the second up-sampling module is connected with the first input end of the third up-sampling module; the second up-sampling module adds the output image of the second residual module to the output image of the first up-sampling module and performs a convolution operation on the sum. The second input end of the third up-sampling module is connected with the output end of the first residual module, and the output end of the third up-sampling module is connected with the input end of the third convolution layer; the third up-sampling module adds the output image of the first residual module to the output image of the second up-sampling module and performs a convolution operation on the sum. The output end of the third convolution layer is connected with the input end of the fourth convolution layer, and the output end of the fourth convolution layer is connected with the input end of the fifth convolution layer.
The residual module comprises a sixth convolution layer, a seventh convolution layer and an adder. The output end of the sixth convolution layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the first input end of the adder, and the input end of the sixth convolution layer is connected with the second input end of the adder; the input end of the sixth convolution layer serves as the input end of the residual module, and the output end of the adder serves as the output end of the residual module.
To train the LinkNet network, the segmentation system further comprises: a first training sample set establishing module, configured to acquire a plurality of fluorescence in situ hybridization cell image samples, manually segment them, and establish a first training sample set whose training samples comprise the image samples and their manual segmentation results; a LinkNet network initialization module, configured to initialize the LinkNet network with Xavier initialization; and a LinkNet network training module, configured to take the BCE loss function as the target loss function, train the initialized network with the first training sample set until the value of the target loss function falls below a first preset threshold, and output the trained LinkNet network.
A center point identification module, configured to identify cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network to obtain a cell center point probability map.
To train the U-Net network, the segmentation system further comprises: a second training sample set establishing module, configured to acquire a plurality of fluorescence in situ hybridization cell image samples, manually mark their cell center points, and establish a second training sample set whose training samples comprise the image samples and the manual marking results; and a U-Net network training module, configured to take the Dice loss function as the target loss function, train the U-Net network with the second training sample set until the value of the target loss function falls below a second preset threshold, and output the trained U-Net network.
An adhesion region determination module, configured to determine, from the segmented fluorescence in situ hybridization cell image and the cell center point probability map, that every independent region of the segmented image containing more than one cell center point is an adhesion region.
And a secondary segmentation module, configured to perform secondary segmentation on each adhesion region with a watershed algorithm according to the number and distribution of its cell center points in the cell center point probability map.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a fluorescence in situ hybridization cell nucleus segmentation method and system. The segmentation method first segments the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented image; it then identifies cell center points in the segmented image with the trained U-Net network to obtain a cell center point probability map; next, from the segmented image and the probability map, every independent region containing more than one cell center point is determined to be an adhesion region; finally, each adhesion region is segmented a second time with a watershed algorithm according to the number and distribution of its cell center points in the probability map. Segmenting with the trained LinkNet network extracts high-level semantic features that abstract the image and fuses the corresponding shallow information to capture detail features while the decoder restores the resolution of the feature maps, so that the target cell foreground is extracted more accurately.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principle and implementation of the present invention have been explained here with specific examples. The above description of the embodiments is intended only to help in understanding the method of the present invention and its core idea. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

Claims (10)

1. A fluorescence in situ hybridization cell nucleus segmentation method, characterized by comprising the following steps:
segmenting the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented fluorescence in situ hybridization cell image;
identifying cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network to obtain a cell center point probability map;
determining, from the segmented fluorescence in situ hybridization cell image and the cell center point probability map, that every independent region of the segmented image containing more than one cell center point is an adhesion region;
and performing secondary segmentation on each adhesion region with a watershed algorithm according to the number and distribution of its cell center points in the cell center point probability map.
2. The fluorescence in situ hybridization cell nucleus segmentation method according to claim 1, characterized in that the trained LinkNet network comprises an encoder and a decoder;
the encoder comprises two convolution layers and three residual modules: the two convolution layers are a first convolution layer and a second convolution layer, the three residual modules are a first residual module, a second residual module and a third residual module, and the first convolution layer, the second convolution layer, the first residual module, the second residual module and the third residual module are connected in sequence;
the decoder comprises three up-sampling modules and three convolution layers: the three up-sampling modules are a first up-sampling module, a second up-sampling module and a third up-sampling module, and the three convolution layers are a third convolution layer, a fourth convolution layer and a fifth convolution layer;
the input end of the first up-sampling module is connected with the output end of the third residual module, and the output end of the first up-sampling module is connected with the first input end of the second up-sampling module; the first up-sampling module performs a convolution operation on the output image of the third residual module. The second input end of the second up-sampling module is connected with the output end of the second residual module, and the output end of the second up-sampling module is connected with the first input end of the third up-sampling module; the second up-sampling module adds the output image of the second residual module to the output image of the first up-sampling module and performs a convolution operation on the sum. The second input end of the third up-sampling module is connected with the output end of the first residual module, and the output end of the third up-sampling module is connected with the input end of the third convolution layer; the third up-sampling module adds the output image of the first residual module to the output image of the second up-sampling module and performs a convolution operation on the sum. The output end of the third convolution layer is connected with the input end of the fourth convolution layer, and the output end of the fourth convolution layer is connected with the input end of the fifth convolution layer.
3. The fluorescence in situ hybridization cell nucleus segmentation method according to claim 2, characterized in that the residual module comprises a sixth convolution layer, a seventh convolution layer and an adder;
the output end of the sixth convolution layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the first input end of the adder, and the input end of the sixth convolution layer is connected with the second input end of the adder; the input end of the sixth convolution layer serves as the input end of the residual module, and the output end of the adder serves as the output end of the residual module.
4. The fluorescence in situ hybridization cell nucleus segmentation method according to claim 2, characterized in that, before segmenting the fluorescence in situ hybridization cell image with the trained LinkNet network, the method further comprises:
acquiring a plurality of fluorescence in situ hybridization cell image samples and manually segmenting them, to establish a first training sample set whose training samples comprise the image samples and their manual segmentation results;
initializing the LinkNet network with Xavier initialization;
and, taking the BCE loss function as the target loss function of the LinkNet network, training the initialized network with the first training sample set until the value of the target loss function falls below a first preset threshold, then outputting the trained LinkNet network.
5. The fluorescence in situ hybridization cell nucleus segmentation method according to claim 1, characterized in that, before identifying cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network, the method further comprises:
acquiring a plurality of fluorescence in situ hybridization cell image samples and manually marking their cell center points, to establish a second training sample set whose training samples comprise the image samples and the manual marking results;
and, taking the Dice loss function as the target loss function of the U-Net network, training the U-Net network with the second training sample set until the value of the target loss function falls below a second preset threshold, then outputting the trained U-Net network.
6. A fluorescence in situ hybridization cell nucleus segmentation system, characterized by comprising:
an image segmentation module, configured to segment the fluorescence in situ hybridization cell image with the trained LinkNet network to obtain a segmented fluorescence in situ hybridization cell image;
a center point identification module, configured to identify cell center points in the segmented fluorescence in situ hybridization cell image with the trained U-Net network to obtain a cell center point probability map;
an adhesion region determination module, configured to determine, from the segmented fluorescence in situ hybridization cell image and the cell center point probability map, that every independent region of the segmented image containing more than one cell center point is an adhesion region;
and a secondary segmentation module, configured to perform secondary segmentation on each adhesion region with a watershed algorithm according to the number and distribution of its cell center points in the cell center point probability map.
7. The fluorescence in situ hybridization cell nucleus segmentation system according to claim 6, characterized in that the trained LinkNet network comprises an encoder and a decoder;
the encoder comprises two convolution layers and three residual modules: the two convolution layers are a first convolution layer and a second convolution layer, the three residual modules are a first residual module, a second residual module and a third residual module, and the first convolution layer, the second convolution layer, the first residual module, the second residual module and the third residual module are connected in sequence;
the decoder comprises three up-sampling modules and three convolution layers: the three up-sampling modules are a first up-sampling module, a second up-sampling module and a third up-sampling module, and the three convolution layers are a third convolution layer, a fourth convolution layer and a fifth convolution layer;
the input end of the first up-sampling module is connected with the output end of the third residual module, and the output end of the first up-sampling module is connected with the first input end of the second up-sampling module; the first up-sampling module performs a convolution operation on the output image of the third residual module. The second input end of the second up-sampling module is connected with the output end of the second residual module, and the output end of the second up-sampling module is connected with the first input end of the third up-sampling module; the second up-sampling module adds the output image of the second residual module to the output image of the first up-sampling module and performs a convolution operation on the sum. The second input end of the third up-sampling module is connected with the output end of the first residual module, and the output end of the third up-sampling module is connected with the input end of the third convolution layer; the third up-sampling module adds the output image of the first residual module to the output image of the second up-sampling module and performs a convolution operation on the sum. The output end of the third convolution layer is connected with the input end of the fourth convolution layer, and the output end of the fourth convolution layer is connected with the input end of the fifth convolution layer.
8. The fluorescence in situ hybridization cell nucleus segmentation system according to claim 7, characterized in that the residual module comprises a sixth convolution layer, a seventh convolution layer and an adder;
the output end of the sixth convolution layer is connected with the input end of the seventh convolution layer, the output end of the seventh convolution layer is connected with the first input end of the adder, and the input end of the sixth convolution layer is connected with the second input end of the adder; the input end of the sixth convolution layer serves as the input end of the residual module, and the output end of the adder serves as the output end of the residual module.
9. The fluorescence in situ hybridization cell nucleus segmentation system according to claim 6, further comprising:
the system comprises a first training sample set establishing module, a second training sample set establishing module and a third training sample set establishing module, wherein the first training sample set establishing module is used for acquiring a plurality of fluorescence in situ hybrid cell image samples, carrying out manual segmentation on the fluorescence in situ hybrid cell image samples and establishing a first training sample set of training samples comprising the fluorescence in situ hybrid cell image samples and manual segmentation results;
the LinkNet network initialization module is used for initializing the LinkNet network by using Xavier initialization;
and the LinkNet network training module is used for taking the binary cross-entropy (BCE) loss function as the target loss function of the LinkNet network, training the initialized LinkNet network by using the first training sample set until the value of the target loss function of the LinkNet network is smaller than a first preset threshold, and outputting the trained LinkNet network.
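A hedged sketch of the claimed training procedure: Xavier initialization, BCE as the target loss, and stopping once the loss falls below the first preset threshold. The optimizer, learning rate, epoch cap, and the `loader` yielding (image, manual-mask) pairs from the first training sample set are assumptions the claim leaves open:

```python
import torch
import torch.nn as nn

def xavier_init(module):
    # Xavier initialization for every conv/linear weight in the network
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def train_linknet(net, loader, threshold=0.05, max_epochs=100):
    net.apply(xavier_init)
    criterion = nn.BCELoss()  # target loss; expects sigmoid probabilities
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for image, mask in loader:
            optimizer.zero_grad()
            loss = criterion(net(image), mask)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < threshold:  # first preset threshold
            break
    return net
```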
10. The fluorescence in situ hybridization cell nucleus segmentation system according to claim 6, further comprising:
the second training sample set establishing module is used for acquiring a plurality of fluorescence in situ hybridization cell image samples, manually identifying and marking the cell center points of the fluorescence in situ hybridization cell image samples, and establishing a second training sample set whose training samples comprise the fluorescence in situ hybridization cell image samples and the manual identification results;
and the U-net network training module is used for taking the Dice loss function as the target loss function of the U-net network, training the U-net network by using the second training sample set until the value of the target loss function of the U-net network is smaller than a second preset threshold, and outputting the trained U-net network.
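The U-net branch differs only in its target: manually marked cell center points and the Dice loss. A minimal sketch of a soft Dice loss (the smoothing constant is an assumption); training would mirror the BCE loop above with this loss substituted:

```python
import torch

def dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss: pred holds predicted probabilities, target the
    manually marked center-point mask; smooth guards empty masks."""
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + smooth) / (union + smooth)).mean()
```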
CN202010815601.3A 2020-08-14 2020-08-14 Fluorescence in situ hybridization cell nucleus segmentation method and system Pending CN112070722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010815601.3A CN112070722A (en) 2020-08-14 2020-08-14 Fluorescence in situ hybridization cell nucleus segmentation method and system

Publications (1)

Publication Number Publication Date
CN112070722A true CN112070722A (en) 2020-12-11

Family

ID=73661416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010815601.3A Pending CN112070722A (en) 2020-08-14 2020-08-14 Fluorescence in situ hybridization cell nucleus segmentation method and system

Country Status (1)

Country Link
CN (1) CN112070722A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682305A (en) * 2012-04-25 2012-09-19 深圳市迈科龙医疗设备有限公司 Automatic screening system and method using the thin-prep cytology test
CN103745210A (en) * 2014-01-28 2014-04-23 爱威科技股份有限公司 Method and device for classifying white blood cells
CN104392460A (en) * 2014-12-12 2015-03-04 山东大学 Adherent white blood cell segmentation method based on nucleus-marked watershed transformation
CN108182674A (en) * 2017-12-14 2018-06-19 合肥金星机电科技发展有限公司 Particle-size detection and analysis method based on a U-Net deep learning network
CN108447062A (en) * 2018-02-01 2018-08-24 浙江大学 Method for segmenting atypical cells in pathological sections based on a multi-scale hybrid segmentation model
CN110097552A (en) * 2018-06-21 2019-08-06 北京大学 Automatic segmentation method for two-photon fluorescence images of mouse prefrontal neurons
CN109102498A (en) * 2018-07-13 2018-12-28 华南理工大学 Method for segmenting clustered cell nuclei in cervical smear images
WO2020093042A1 (en) * 2018-11-02 2020-05-07 Deep Lens, Inc. Neural networks for biomedical image analysis
CN109472761A (en) * 2018-11-23 2019-03-15 军事科学院系统工程研究院卫勤保障技术研究所 Cell counting method and system based on fluorescence images
US20200211189A1 (en) * 2018-12-31 2020-07-02 Tempus Labs, Inc. Artificial intelligence segmentation of tissue images
CN110136154A (en) * 2019-05-16 2019-08-16 西安电子科技大学 Semantic segmentation method for remote sensing images based on a fully convolutional network and morphological processing
CN110472616A (en) * 2019-08-22 2019-11-19 腾讯科技(深圳)有限公司 Image recognition method and apparatus, computer device, and storage medium
CN110610490A (en) * 2019-09-11 2019-12-24 哈尔滨理工大学 Method for locating white blood cells in lesion cell images
CN110675411A (en) * 2019-09-26 2020-01-10 重庆大学 Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN111145209A (en) * 2019-12-26 2020-05-12 北京推想科技有限公司 Medical image segmentation method, apparatus, device and storage medium
CN111179273A (en) * 2019-12-30 2020-05-19 山东师范大学 Method and system for automatic deep-learning-based segmentation of leukocyte nucleus and cytoplasm
CN111353987A (en) * 2020-03-02 2020-06-30 中国科学技术大学 Cell nucleus segmentation method and device
CN111402267A (en) * 2020-03-13 2020-07-10 中山大学孙逸仙纪念医院 Method, apparatus and terminal for segmenting epithelial cell nuclei in prostate cancer pathology images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ABHISHEK CHAURASIA et al.: "LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation", arXiv, 17 January 2017 (2017-01-17), pages 1-5 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309280A (en) * 2022-12-16 2023-06-23 上海药明康德新药开发有限公司 Lymphocyte labeling method and system

Similar Documents

Publication Publication Date Title
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN111047551B (en) Remote sensing image change detection method and system based on U-net improved algorithm
CN115049936B (en) High-resolution remote sensing image-oriented boundary enhanced semantic segmentation method
CN113807355B (en) Image semantic segmentation method based on coding and decoding structure
CN112017191A (en) Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
CN111028217A (en) Image crack segmentation method based on a fully convolutional neural network
CN110111334B (en) Crack segmentation method and device, electronic equipment and storage medium
CN111079847B (en) Remote sensing image automatic labeling method based on deep learning
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN111145209A (en) Medical image segmentation method, device, equipment and storage medium
CN112927253B (en) Rock core FIB-SEM image segmentation method based on convolutional neural network
CN113569865A (en) Single sample image segmentation method based on class prototype learning
CN115512103A (en) Multi-scale fusion remote sensing image semantic segmentation method and system
CN111784711A (en) Lung pathology image classification and segmentation method based on deep learning
CN114693924A (en) Road scene semantic segmentation method based on multi-model fusion
CN111914654A (en) Text layout analysis method, device, equipment and medium
CN115439493A (en) Method and device for segmenting cancerous region of breast tissue section
CN113223011B (en) Small sample image segmentation method based on guide network and full-connection conditional random field
CN115546466A (en) Weakly supervised object localization method based on multi-scale salient feature fusion
CN113313669B (en) Method for enhancing semantic features of top layer of surface defect image of subway tunnel
CN112419352B (en) Small sample semantic segmentation method based on contour
CN113313668B (en) Method for extracting surface defect features of subway tunnels
CN114581789A (en) Hyperspectral image classification method and system
CN114332122A (en) Cell counting method based on attention mechanism segmentation and regression
CN112634289B (en) Rapid drivable-area segmentation method based on asymmetric dilated convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination