CN109493346B - Stomach cancer pathological section image segmentation method and device based on multiple losses - Google Patents


Info

Publication number
CN109493346B
Authority
CN
China
Prior art keywords
module
image
loss
segmentation
pathological section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811285894.8A
Other languages
Chinese (zh)
Other versions
CN109493346A (en)
Inventor
吴健 (Wu Jian)
胡荷萍 (Hu Heping)
王彦杰 (Wang Yanjie)
舒景东 (Shu Jingdong)
王文哲 (Wang Wenzhe)
陆逸飞 (Lu Yifei)
吴边 (Wu Bian)
吴福理 (Wu Fuli)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201811285894.8A priority Critical patent/CN109493346B/en
Publication of CN109493346A publication Critical patent/CN109493346A/en
Application granted granted Critical
Publication of CN109493346B publication Critical patent/CN109493346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology


Abstract

The invention discloses a multiple-loss-based gastric cancer pathological section image segmentation method and device, belonging to the technical field of medical image processing. From a number of whole-slide samples, patch pathological images of size 2048 × 2048 and corresponding label maps are obtained, each slide yielding roughly 200 valid patches. The valid patches are assembled into training samples and testing samples and fed into a CLGCN network for training. After the model converges, a test pathological image is input and its corresponding probability density map is predicted; any pixel whose probability value exceeds a given level is predicted to be a lesion point. The segmented patch maps are finally obtained and stitched over the whole slide, giving a complete pathological section segmentation prediction map. In the process of training the segmentation model, 5 classification modules and a set of multiple losses at the far end are added; during training, the 4 classification modules on the left must first be pre-trained to obtain the initial parameters of all classification modules, after which the multiple losses are trained jointly with weighted combination on the basis of those initial parameters.

Description

Stomach cancer pathological section image segmentation method and device based on multiple losses
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method and a device for segmenting a gastric cancer pathological section image based on multiple losses.
Background
In the slide-examination mode currently adopted by hospitals, lesions are observed under a microscope with the physical pathological section as the basis; the field of view must be moved and switched continually before the sample conditions of the whole slide can be observed, which is an essentially time-consuming and laborious task.
With the development of pathological section digitalization, digital slides are gradually entering hospitals, and the demand for computer assistance in helping doctors locate regions of interest within digital slides grows ever stronger. Image segmentation, as a way of processing digital slides, can outline regions of interest and target regions; if it can be applied well to the examination of gastric cancer pathological sections, the workload of doctors in this respect will be greatly reduced.
Image segmentation is a major direction for deep neural networks, whose development was deeply shaped by the perceptron models of the 1940s, by the first convolutional neural network (LeNet) proposed at the end of the 1990s, and by the 2012 eruption of deep learning in object recognition. For image segmentation, the goal is to obtain a category for every pixel in the image. The traditional method classifies, with a neural network, the block formed around each pixel and traverses the whole image to obtain its segmentation map, at a very large computational cost. The fully convolutional network (FCN) effectively changed this situation by forming an end-to-end encoder-decoder segmentation network; current improvements (U-Net, GCN, etc.) build on this basic framework and are comparatively fast. As the technology advances, application scenarios keep expanding, but each scenario still presents its own difficulties to be overcome.
Since a whole digital pathological slide is very large, it cannot be input to the model at once and must be cut into patches of 2048 × 2048 or another size for segmentation, with the results pieced back together. In practice, in the segmentation prediction panorama of a whole slide, the GCN image segmentation method can obtain good contour segmentation for most lesion areas, but at the same time many fine false-positive pixel blocks appear, widely distributed over the normal parts of the slide, and these greatly harm the visual impression. How to make better use of the diverse forms in which the labels exist when training the model, and how to change the form of the loss function to encourage better segmentation prediction, are therefore the problems to be solved by the invention.
Disclosure of Invention
The invention aims to provide a multiple-loss-based method and device for segmenting gastric cancer pathological section images, solving the problems that, in practical application, segmentation prediction of gastric cancer pathological sections generates too many false positives and the prediction is inaccurate.
In order to achieve the above object, the present invention provides a method for segmenting an image of a pathological section of gastric cancer based on multiple losses, comprising the steps of:
1) scanning the gastric cancer pathological section to obtain an original digital pathological section image, cutting the original digital pathological section image into original cut images, and dividing the cut images into a training set and a testing set;
2) delineating the lesion region in each original cut image of the training set, marking the pixel values of the lesion region in the image as 1 and those of non-lesion regions as 0; additionally annotating whether the original cut image contains a lesion, marking images containing a lesion region as 1 and the others as 0; meanwhile converting the per-pixel label from one dimension to two, i.e., converting pixel values marked 0 into (1, 0) and pixel values marked 1 into (0, 1), to form a three-dimensional label (a label-preparation sketch in code follows these steps);
3) performing data-set augmentation on the labeled cut images and inputting them into a GCN network, wherein the GCN network comprises four paths; along the four paths, the second, third, fourth and fifth convolution operation modules and the first BR module of the last path are selected in turn and each connected to a CB module, and the result output from the Score Map module is input into an LB module;
4) pre-training the respective CB modules with the feature maps output by the second, third, fourth and fifth convolution operation modules in the four paths of the GCN network, so as to determine the initial parameters of those four CB modules and the four convolution operation modules, wherein the initial parameters of the CB module connected to the first BR module of the last path of the GCN network, i.e., the last CB module, are determined from the four CB modules;
5) calculating the total loss by utilizing an LB module, and updating the parameters of the GCN according to the total loss;
6) repeating step 5); after each iteration over the training samples, testing the gastric cancer pathological section image segmentation model with the original cut images of the test set, until convergence, so as to generate the gastric cancer pathological section image segmentation model;
7) inputting the digital pathological image to be detected into the gastric cancer pathological section image segmentation model to obtain a segmentation prediction image, and splicing all the segmentation prediction images to form a full-section segmentation prediction image.
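As an illustration of the labeling in step 2), the following is a minimal sketch in Python/NumPy; the function name and the assumption that the lesion mask arrives as a binary 2048 × 2048 array are illustrative, not part of the claims:

```python
import numpy as np

def make_labels(lesion_mask: np.ndarray):
    """Build the three kinds of labels of step 2) from one binary lesion mask."""
    seg_label = lesion_mask.astype(np.int64)      # per-pixel segmentation label: 0/1
    cls_label = int(seg_label.any())              # patch-level label: 1 if any lesion pixel
    # One-hot conversion: 0 -> (1, 0), 1 -> (0, 1), yielding an H x W x 2 label
    one_hot = np.stack([1 - seg_label, seg_label], axis=-1)
    return seg_label, cls_label, one_hot
```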
In this technical scheme, a digital pathological image produced by scanning and a number of whole-slide samples annotated by doctors are first used to obtain patch pathological images and label maps of size 2048 × 2048 (which may be converted to 512 × 512 for the segmentation processing in actual use); each slide yields roughly 200 valid patches, which are assembled into training and testing samples and fed into the CLGCN network for training. After the model converges, a test pathological image is input and the corresponding probability density map is predicted; pixels whose probability value exceeds a given level are predicted to be lesion points. The segmented patch maps are finally obtained and stitched into a whole-slide prediction map, giving a complete pathological section segmentation prediction map. In the process of training the segmentation model, 5 classification CB modules and a set of multiple losses at the far end are added. During training, the 4 classification CB modules on the left must be pre-trained to obtain the initial parameters of themselves and of the last classification CB module; the initial parameters of the fig. 4 module contained in the LB module are generated from random numbers; weighted joint training of the multiple losses is then performed on the basis of these initial parameters.
Preferably, the CB module comprises two convolutional layers, one global pooling layer and two fully-connected layers connected in sequence. The loss ratio of the CB module is 0.3, and the loss ratio of the LB module is 0.7; the upper limit of the number of iterations is 200.
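A minimal PyTorch sketch of a CB module with the stated layer sequence (two convolutional layers, one global pooling layer, two fully-connected layers); the channel widths and kernel sizes are assumptions, as the patent does not specify them:

```python
import torch
import torch.nn as nn

class CBModule(nn.Module):
    """Classification-branch (CB) module: two conv layers, one global pooling
    layer and two fully-connected layers, connected in sequence."""
    def __init__(self, in_channels: int, hidden: int = 128, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),        # the global pooling layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(hidden, hidden),      # first fully-connected layer
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes), # second fully-connected layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```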
In step 6), if the test results on the test set decline on 5 consecutive evaluations, the current training is exited. Five consecutive declines indicate either that the model's parameters have been trained to their best or that the parameters are wrong, and exit is confirmed.
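The 5-in-a-row exit rule can be expressed as a small helper; a sketch, assuming the validation scores are collected in a list:

```python
def should_stop(val_scores, patience: int = 5) -> bool:
    """True when the validation score fell on each of the last `patience` evaluations."""
    if len(val_scores) <= patience:
        return False
    recent = val_scores[-(patience + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))
```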
The calculation formula of the total loss in the step 5) is as follows:
L = x·L_1 + β·Σ_k W_k·L_AAF^(k) + α·L_C

wherein L_1 is the basic segmentation loss; L_AAF is the multi-scale regional loss of a pixel, with W denoting the weights of the different scales within the multi-scale regional loss; L_C is the sum of the classification loss of the last CB module and the classification loss calculated by outputting the Score Map to the LB module; x, β and α respectively represent the weight proportions of the three kinds of loss; and L represents the total loss value.
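Read as code, the weighted combination looks as follows; a sketch only, since apart from the 0.3/0.7 CB/LB split the weight values are not disclosed:

```python
import torch

def combined_loss(l1, l_aaf_per_scale, l_c, x, beta, alpha, w):
    """L = x*L1 + beta*sum_k(W_k * L_AAF_k) + alpha*L_C, per the formula above.
    `l_aaf_per_scale` and `w` are the per-scale loss terms and their weights."""
    l_aaf = sum(wk * lk for wk, lk in zip(w, l_aaf_per_scale))
    return x * l1 + beta * l_aaf + alpha * l_c
```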
In step 7), the digital pathological image to be detected is input into the gastric cancer pathological section image segmentation model to obtain a predicted probability map; pixels whose probability exceeds 0.5 are set as lesion points, yielding a segmentation effect map, and the maps are combined to form the segmentation prediction map of the full slide.
The invention also provides a multiple-loss-based gastric cancer pathological section image segmentation device, comprising: a memory storing computer-executable instructions and the data used or produced when executing the computer-executable instructions; and a processor communicatively coupled to the memory and configured to execute the computer-executable instructions stored by the memory, the computer-executable instructions, when executed, implementing the multiple-loss-based gastric cancer pathological section image segmentation method described above.
Compared with the prior art, the invention has the following beneficial effects:
1) the segmentation labels are used in multiple ways, which increases their utilization and reduces false-positive predictions on the slides;
2) the adaptive affinity field lets the GCN network capture more accurate neighborhood pixel relations, forming a regional loss and producing finer segmentation;
3) the workload of pathologists is reduced.
Drawings
FIG. 1 is a block flow diagram of a multi-loss based gastric cancer pathological section image segmentation model in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a CB module in the practice of the present invention;
fig. 3 is a schematic diagram of the adaptive affinity field (AAF) in an implementation of the present invention, in which (1) is the actual label map, (2) is the probability prediction map after an iteration, and (3) represents the losses of regions of different scale ranges;
FIG. 4 shows the classification module connected after the Score Map output of the LB module; it outputs a two-dimensional probability value of the form (x, y), classified as class 0 if the former is larger and as class 1 otherwise.
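A minimal PyTorch sketch of the fig. 4 head: the Score Map is flattened into a vector and passed through two fully-connected layers to a 2-dimensional output. The hidden width and the 512 × 512 input side are assumptions consistent with the embodiment below:

```python
import torch
import torch.nn as nn

class ScoreMapClassifier(nn.Module):
    """Two fully-connected layers over the flattened Score Map, giving a
    2-d output (x, y): class 0 if x is larger, class 1 otherwise."""
    def __init__(self, side: int = 512, hidden: int = 256):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),                      # 512 x 512 probability map -> vector
            nn.Linear(side * side, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),
        )

    def forward(self, score_map: torch.Tensor) -> torch.Tensor:
        return self.fc(score_map)
```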
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the following embodiments and accompanying drawings.
Examples
Referring to fig. 1 to 3, the method for segmenting the pathological section image of gastric cancer based on multiple losses of the present embodiment includes the following steps:
s101 generating a data set
Pathological sections are scanned to obtain original digital pathological section images, and the lesion areas are annotated on them, i.e., outlined with curves; on this basis a lesion data set is extracted and divided, giving the segmentation label corresponding to each pathological image. The multiple labels refer to:
a classification label derived from the segmentation label: if the whole patch contains no lesion point the classification label is 0, otherwise it is 1;
a segmentation One-hot label: for the label of a given pixel, if the segmentation label is 0 the One-hot label is (1, 0), otherwise it is (0, 1).
The annotated whole-slide sample is magnified to 20× (the field of view is clear and the sample size acceptable); partially overlapping pathological images and their corresponding label maps are cut out, and glass-only parts are removed. On this basis the data set is divided so that the ratio of positive to negative images is approximately 1:2, with a training-to-testing ratio of 8:2. The training data undergo augmentations such as rotation, projection, noise addition, random cropping and normalization, while the test set only receives the conventional normalization operation.
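A sketch of such an augmentation pipeline with torchvision; the parameter values (angles, noise scale, normalization statistics) are assumptions, not taken from the patent:

```python
import torch
from torchvision import transforms

mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]  # assumed ImageNet statistics

train_tf = transforms.Compose([
    transforms.RandomRotation(degrees=90),                        # rotation
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),    # "projection"
    transforms.RandomCrop(512),                                   # random cropping
    transforms.ToTensor(),
    transforms.Lambda(lambda t: t + 0.01 * torch.randn_like(t)),  # noise addition
    transforms.Normalize(mean, std),                              # normalization
])

test_tf = transforms.Compose([                                    # test set: normalization only
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])
```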
S102 maintaining positive and negative sample equalization
When imbalance in the training sample set seriously affects the training effect of the model, a balanced training data set must be extracted manually so that the positive and negative samples reach a 1:1 state.
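A minimal sketch of such manual balancing by downsampling the larger class; `pos` and `neg` are illustrative lists of sample paths:

```python
import random

def balance_samples(pos, neg, seed: int = 0):
    """Downsample the larger class so positives and negatives reach 1:1."""
    rng = random.Random(seed)
    n = min(len(pos), len(neg))
    return rng.sample(pos, n) + rng.sample(neg, n)
```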
S103 model training
The CLGCN (i.e., the gastric cancer pathological section image segmentation model) is trained on the training sample set obtained in S101; the loss weight of the CB modules is 0.3 and that of the end losses is 0.7, the upper limit on the number of iterations is set to 200, an effect prediction on the test set (in fact a validation set) is run once per training iteration, and if the test results decline for 5 consecutive evaluations the current training is exited.
The structure of the gastric cancer pathological section image segmentation model is shown in fig. 1: five CB modules and one LB module are added on the basis of a GCN network, and each CB module comprises two convolutional layers, a global pooling layer and two fully-connected layers connected in sequence. The LB module includes three loss modules: the first is L1, the basic segmentation loss module; the second is LAAF, the multi-scale regional loss of a pixel; the third is similar to the last CB module but differently structured, and computes a classification loss in the specific form shown in fig. 4 (consisting of 2 fc layers). In fig. 1 the Score Map output is a 512 × 512 × 1 probability map; the matrix is drawn out into a vector and input to the structure of fig. 4, so that the fig. 4 output is a 2-dimensional vector, on which the classification loss is computed. The GCN network is based on resnet101 and resnet50.
The first step is to train the weights of the first 4 CB modules: after the feature maps are obtained from the base structure (4 res modules), class judgments are obtained through the 4 CB modules respectively and trained against the classification labels; when this training finishes, the initial weights of the res blocks and CB modules are obtained. The initial weight of the last CB module is then obtained from the weights of the first 4 CB modules, and the whole network except the first 4 CB modules is trained: res-2, res-3, res-4 and res-5 are each connected to a GCN module and the BR module behind it; through upsampling, these outputs are merged with the upper-layer features from bottom to top, and sequential merging yields the top-level feature map, the Score Map; the loss function is obtained from the segmentation labels, the classification labels and the AAF, the parameters are updated by back-propagation, and training proceeds in this order until complete.
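A sketch of stage one of this schedule in PyTorch; `backbone` and the loader interface are assumptions (the backbone is assumed to return the four res feature maps), and the optimizer settings are illustrative:

```python
import torch
import torch.nn.functional as F

def pretrain_cb(backbone, cb_modules, loader, epochs: int = 10, lr: float = 1e-3):
    """Stage one: train the four left-hand CB modules (and the res blocks
    feeding them) against the patch-level classification labels."""
    params = list(backbone.parameters())
    for cb in cb_modules:
        params += list(cb.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, cls_label in loader:
            feats = backbone(x)  # assumed to return [res-2, res-3, res-4, res-5]
            loss = sum(F.cross_entropy(cb(f), cls_label)
                       for cb, f in zip(cb_modules, feats))
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Stage two then reuses these weights and trains the rest of the network against the weighted multi-loss; the patent does not detail exactly how the last CB module's initial weights are derived from the first four.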
In the process of training the segmentation model, 5 CB classification modules and a set of multiple losses at the far end (i.e., the LB module) are added; referring to fig. 1, during training the 4 CB modules on the left must be pre-trained to obtain the initial parameters of all CB modules, and then, on the basis of these initial parameters, weighted joint training of the multiple losses of the LB module and the last CB module is performed on all the training data.
The multiple loss functions here refer to: the classification loss (L_C) formed by the last CB module together with the fig. 4 classifier applied to the Score Map output, the basic segmentation loss (L_1), and the multi-scale regional loss corresponding to the adaptive affinity field (L_AAF). The set of end losses is their weighted sum:

L = x·L_1 + β·Σ_k W_k·L_AAF^(k) + α·L_C
The multi-scale regional loss L_AAF is defined for a single pixel, and the total loss is the sum of the loss values of all pixels; L_1 is the conventional basic segmentation loss; L_C is added in the hope that, given the features currently available to the network, the negative or positive status of a patch can be judged correctly, reducing the generation of false-positive patches. The multi-scale regional loss means that, for a pixel, the degree of similarity between its predicted distribution and those of the other pixels in the several regions shown in fig. 3 (multiple ranges around the pixel) is calculated: pairs of pixels with the same label are encouraged to be as similar as possible, i.e., to obtain a smaller loss, while predictions for different labels are pushed apart as far as possible. In the prediction map of fig. 3, the middle pixel and the pixel joined by the black connecting line at the lower left have a higher distribution similarity, i.e., the AAF exerts an attracting force along the black line, while the grey line indicates a repelling force between the middle pixel and the upper-right pixel owing to their differing labels. Increasing the AAF loss therefore lets the network learn the distribution relations between the pixels of the segmentation labels, making the segmented image finer and its boundaries clearer. As shown in panel (1) of fig. 3, assume a 3 × 3 image in which 0 and 1 represent the labels of the pixels; after one iteration the predicted probability value map of the corresponding pixels, panel (2), is obtained; relative to the labels of panel (1), same-label neighbours are indicated by the black arrows in panel (2) and different-label neighbours otherwise, i.e., the pixels in the neighbourhood of the centre point are divided into same-label and different-label kinds. In panel (3), for a 4 × 4 image with the lower-left "× point" as reference, arrows link pixel pairs at different distances, representing different regions, i.e., a multi-scale metric. For other pixels, the boundary problem shown in fig. 3 arises, i.e., pixels inside and outside the same label block must be given different weights:
L_AAF = Σ_k Σ_c W_k · (L_b^(kc) + L_b̄^(kc))

wherein k refers to the different scale ranges, c refers to the different class channels (i.e., the channels of the One-hot label), and the subscripts b and b̄ denote pixel pairs exterior to and interior to the same label region, respectively. L_b refers to the loss of a given pixel within one channel and one size range; the regional loss is formed by summing, at that point, the losses of all pixel pairs in the region, and the loss of a pixel pair is expressed by the KL divergence.
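The pixel-pair term can be sketched as follows in PyTorch; the offsets standing in for the multi-scale neighbourhoods of fig. 3 and the margin form of the repelling term are assumptions, since the patent gives only the definitions above:

```python
import torch
import torch.nn.functional as F

def aaf_loss(prob, one_hot, offsets=((0, 1), (1, 0), (0, 2), (2, 0)), margin=3.0):
    """Pairwise regional loss: `prob` is the (N, 2, H, W) predicted distribution,
    `one_hot` the (N, 2, H, W) label. Same-label pairs are pulled together
    (small KL divergence); pairs across a label boundary are pushed apart
    up to a margin."""
    log_p = torch.log(prob.clamp_min(1e-8))
    labels = one_hot.argmax(dim=1)
    total = prob.new_tensor(0.0)
    for dy, dx in offsets:
        q = torch.roll(prob, shifts=(dy, dx), dims=(2, 3))
        kl = (q * (torch.log(q.clamp_min(1e-8)) - log_p)).sum(dim=1)  # KL(q || p) per pixel
        same = (labels == torch.roll(labels, shifts=(dy, dx), dims=(1, 2))).float()
        total = total + (same * kl + (1.0 - same) * F.relu(margin - kl)).mean()
    return total / len(offsets)
```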
In practice, when generating the prediction segmentation map of a full slide, patches of several different sizes are used to accommodate the complexity of pathological sections, and the final value for a pixel is obtained by taking the mean of its different prediction probability values. For data augmentation of the training set, random-size cropping is additionally performed on this basis to obtain more training samples, while test samples receive only basic data normalization. To handle the overlapping edges between different patches, overlapped patches are taken and predicted multiple times, so that the edges come out smoother.
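A sketch of averaging overlapping patch predictions into one slide-level probability map; the generator interface `(y, x, prob)` is illustrative:

```python
import numpy as np

def stitch_probabilities(slide_shape, patch_predictions):
    """Average overlapping patch predictions into one full-slide probability map.
    `patch_predictions` yields (y, x, prob) with `prob` a 2-D probability array."""
    acc = np.zeros(slide_shape, dtype=np.float32)
    cnt = np.zeros(slide_shape, dtype=np.float32)
    for y, x, prob in patch_predictions:
        h, w = prob.shape
        acc[y:y + h, x:x + w] += prob
        cnt[y:y + h, x:x + w] += 1.0
    return acc / np.maximum(cnt, 1.0)
```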
S104 slice segmentation prediction
For a complete full-slide digital image, patches are cut out in order from left to right and then top to bottom, segmented and predicted in real time, and filled into the corresponding positions of the full slide; for a given patch, after its probability density map is predicted, pixels with probability greater than 0.5 take the label 1 and the rest 0, and the results are finally stitched into the full-slide segmentation result.
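The final binarisation of the stitched probability map can be written as a one-line helper; a sketch:

```python
import numpy as np

def to_segmentation(prob_map: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """Label pixels with probability above 0.5 as lesion (1) and the rest as 0."""
    return (prob_map > thr).astype(np.uint8)
```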

Claims (4)

1. A gastric cancer pathological section image segmentation method based on multiple losses is characterized by comprising the following steps:
1) scanning the gastric cancer pathological section to obtain an original digital pathological section image, cutting the original digital pathological section image into original cut images, and dividing the cut images into a training set and a testing set;
2) delineating the lesion region in each original cut image of the training set, marking the pixel values of the lesion region in the image as 1 and those of non-lesion regions as 0; additionally annotating whether the original cut image contains a lesion, marking images containing a lesion region as 1 and the others as 0; meanwhile converting the per-pixel label from one dimension to two, i.e., converting pixel values marked 0 into (1, 0) and pixel values marked 1 into (0, 1), to form a three-dimensional label;
3) performing data-set augmentation on the labeled cut images and inputting them into a GCN network, wherein the GCN network comprises four paths; along the four paths, the second, third, fourth and fifth convolution operation modules and the first BR module of the last path are selected in turn and each connected to a CB module, and the result output from the Score Map module is input into an LB module; the CB module comprises two convolutional layers, a global pooling layer and two fully-connected layers connected in sequence; the LB module comprises three loss modules;
4) pre-training the respective CB modules with the feature maps output by the second, third, fourth and fifth convolution operation modules in the four paths of the GCN network, so as to determine the initial parameters of those four CB modules and the four convolution operation modules, wherein the initial parameters of the CB module connected to the first BR module of the last path of the GCN network, i.e., the last CB module, are determined from the four CB modules;
5) calculating the total loss by means of the LB module, and updating the parameters of the GCN network according to the total loss; the total loss is composed of the loss of the LB module and the loss of the last CB module, the loss weight of the CB module being 0.3 and that of the LB module 0.7; the upper limit on the number of iterations is 200;
the total loss is calculated as:
L = x·L_1 + β·Σ_k W_k·L_AAF^(k) + α·L_C

wherein L_1 is the basic segmentation loss; L_AAF is the multi-scale regional loss of a pixel; L_C is the sum of the classification loss of the last CB module and the classification loss calculated by outputting the Score Map to the LB module; x, β and α respectively represent the weight proportions of the three kinds of loss; L represents the total loss value; and W represents the weights of the different scales within the multi-scale regional loss;
6) repeating step 5); after each iteration over the training samples, testing the gastric cancer pathological section image segmentation model with the original cut images of the test set, until convergence, so as to generate the gastric cancer pathological section image segmentation model;
7) inputting the digital pathological image to be detected into the gastric cancer pathological section image segmentation model to obtain a segmentation prediction image, and splicing all the segmentation prediction images to form a full-section segmentation prediction image.
2. The method for segmenting gastric cancer pathological section images according to claim 1, wherein in step 6), if the test results on the test set decline on each of 5 consecutive evaluations, the current training is exited.
3. The method for segmenting gastric cancer pathological section images according to claim 1, wherein in step 7), the digital pathological image to be detected is input into the gastric cancer pathological section image segmentation model to obtain a predicted probability map; pixels whose probability exceeds 0.5 are set as lesion points, yielding a segmentation effect map, and the maps are combined to form the segmentation prediction map of the full slide.
4. A multiple-loss-based gastric cancer pathological section image segmentation device, comprising: a memory storing computer-executable instructions and the data used or produced when executing the computer-executable instructions; and a processor communicatively coupled to the memory and configured to execute the computer-executable instructions stored by the memory, wherein the computer-executable instructions, when executed, implement the multiple-loss-based gastric cancer pathological section image segmentation method as claimed in any one of claims 1 to 3.
CN201811285894.8A 2018-10-31 2018-10-31 Stomach cancer pathological section image segmentation method and device based on multiple losses Active CN109493346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811285894.8A CN109493346B (en) 2018-10-31 2018-10-31 Stomach cancer pathological section image segmentation method and device based on multiple losses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811285894.8A CN109493346B (en) 2018-10-31 2018-10-31 Stomach cancer pathological section image segmentation method and device based on multiple losses

Publications (2)

Publication Number Publication Date
CN109493346A CN109493346A (en) 2019-03-19
CN109493346B true CN109493346B (en) 2021-09-07

Family

ID=65691950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811285894.8A Active CN109493346B (en) 2018-10-31 2018-10-31 Stomach cancer pathological section image segmentation method and device based on multiple losses

Country Status (1)

Country Link
CN (1) CN109493346B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109979546A (en) * 2019-04-04 2019-07-05 成都大学 Network model analysis platform and construction method based on artificial intelligence number pathology
CN110245551B (en) * 2019-04-22 2022-12-06 中国科学院深圳先进技术研究院 Identification method of field crops under multi-grass working condition
CN110310253B (en) * 2019-05-09 2021-10-12 杭州迪英加科技有限公司 Digital slice classification method and device
CN110110799B (en) * 2019-05-13 2021-11-16 广州锟元方青医疗科技有限公司 Cell sorting method, cell sorting device, computer equipment and storage medium
CN110197493B (en) * 2019-05-24 2021-04-23 清华大学深圳研究生院 Fundus image blood vessel segmentation method
CN110415230B (en) * 2019-07-25 2022-12-06 东北大学 CT slice image semantic segmentation system and method based on deep learning
CN110838100A (en) * 2019-10-11 2020-02-25 浙江大学 Colonoscope pathological section screening and segmenting system based on sliding window
CN111047559B (en) * 2019-11-21 2023-04-18 万达信息股份有限公司 Method for rapidly detecting abnormal area of digital pathological section
CN111047606B (en) * 2019-12-05 2022-10-04 北京航空航天大学 Pathological full-section image segmentation algorithm based on cascade thought
CN111127471B (en) * 2019-12-27 2023-08-29 之江实验室 Gastric cancer pathological section image segmentation method and system based on double-label loss
CN111369515A (en) * 2020-02-29 2020-07-03 上海交通大学 Tunnel water stain detection system and method based on computer vision
CN111311613B (en) * 2020-03-03 2021-09-07 推想医疗科技股份有限公司 Image segmentation model training method, image segmentation method and device
CN111340128A (en) * 2020-03-05 2020-06-26 上海市肺科医院(上海市职业病防治院) Lung cancer metastatic lymph node pathological image recognition system and method
CN111710394A (en) * 2020-06-05 2020-09-25 沈阳智朗科技有限公司 Artificial intelligence assisted early gastric cancer screening system
CN111724371B (en) * 2020-06-19 2023-05-23 联想(北京)有限公司 Data processing method and device and electronic equipment
CN111951274A (en) * 2020-07-24 2020-11-17 上海联影智能医疗科技有限公司 Image segmentation method, system, readable storage medium and device
CN112070725B (en) * 2020-08-17 2024-09-17 清华大学 Deep learning-based grape embryo slice image processing method and device
CN112541919B (en) * 2020-12-29 2024-09-17 申建常 Picture segmentation processing method and processing system
CN113034462B (en) * 2021-03-22 2022-09-23 福州大学 Method and system for processing gastric cancer pathological section image based on graph convolution
CN113077698A (en) * 2021-04-13 2021-07-06 浙江省肿瘤医院 Teaching aid is used in stomach cancer pathology teaching
CN115359325B (en) * 2022-10-19 2023-01-10 腾讯科技(深圳)有限公司 Training method, device, equipment and medium for image recognition model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610146A (en) * 2017-09-29 2018-01-19 北京奇虎科技有限公司 Image scene segmentation method, apparatus, computing device and computer-readable storage medium
CN108509976A (en) * 2018-02-12 2018-09-07 北京佳格天地科技有限公司 The identification device and method of animal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019657B2 (en) * 2015-05-28 2018-07-10 Adobe Systems Incorporated Joint depth estimation and semantic segmentation from a single image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610146A (en) * 2017-09-29 2018-01-19 北京奇虎科技有限公司 Image scene segmentation method, apparatus, computing device and computer-readable storage medium
CN108509976A (en) * 2018-02-12 2018-09-07 北京佳格天地科技有限公司 The identification device and method of animal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chao Peng et al.; "Large Kernel Matters—Improve Semantic Segmentation by Global Convolutional Network"; 2017 IEEE Conference on Computer Vision and Pattern Recognition; 2017-11-09; pp. 1743-1751 *

Also Published As

Publication number Publication date
CN109493346A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109493346B (en) Stomach cancer pathological section image segmentation method and device based on multiple losses
CN108319972B (en) End-to-end difference network learning method for image semantic segmentation
CN109145939B (en) Semantic segmentation method for small-target sensitive dual-channel convolutional neural network
CN111523521B (en) Remote sensing image classification method for double-branch fusion multi-scale attention neural network
CN114943876A (en) Cloud and cloud shadow detection method and device for multi-level semantic fusion and storage medium
CN112489164B (en) Image coloring method based on improved depth separable convolutional neural network
CN112164077B (en) Cell instance segmentation method based on bottom-up path enhancement
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN113420643A (en) Lightweight underwater target detection method based on depth separable cavity convolution
CN111553296B (en) Two-value neural network stereo vision matching method based on FPGA
CN114926722A (en) Method and storage medium for detecting scale self-adaptive target based on YOLOv5
CN112270259A (en) SAR image ship target rapid detection method based on lightweight convolutional neural network
CN112365511A (en) Point cloud segmentation method based on overlapped region retrieval and alignment
CN110738132A (en) target detection quality blind evaluation method with discriminant perception capability
CN114998688B (en) YOLOv4 improved algorithm-based large-view-field target detection method
CN116258877A (en) Land utilization scene similarity change detection method, device, medium and equipment
CN116091823A (en) Single-feature anchor-frame-free target detection method based on fast grouping residual error module
CN115222754A (en) Mirror image segmentation method based on knowledge distillation and antagonistic learning
CN114120359A (en) Method for measuring body size of group-fed pigs based on stacked hourglass network
CN118097268A (en) Long-tail target detection method based on monitoring scene
CN113989296A (en) Unmanned aerial vehicle wheat field remote sensing image segmentation method based on improved U-net network
CN112270370B (en) Vehicle apparent damage assessment method
CN113361496A (en) City built-up area statistical method based on U-Net
CN116823782A (en) Reference-free image quality evaluation method based on graph convolution and multi-scale features
CN109583584B (en) Method and system for enabling CNN with full connection layer to accept indefinite shape input

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wu Jian, Ye Zhiqian, Hu Heping, Wang Yanjie, Shu Jingdong, Wang Wenzhe, Lu Yifei, Wu Bian, Wu Fuli

Inventor before: Wu Jian, Hu Heping, Wang Yanjie, Shu Jingdong, Wang Wenzhe, Lu Yifei, Wu Bian, Wu Fuli