CN109191472A - Thymocyte image segmentation method based on an improved U-Net network - Google Patents

Thymocyte image segmentation method based on an improved U-Net network

Info

Publication number
CN109191472A
CN109191472A (application CN201810983829.6A)
Authority
CN
China
Prior art keywords
image
thymus gland
network
segmentation
net network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810983829.6A
Other languages
Chinese (zh)
Inventor
于海滨
贝琛圆
潘勉
吕帅帅
和文杰
于彦贞
刘爱林
李子璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201810983829.6A
Publication of CN109191472A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing

Abstract

The invention discloses a thymocyte image segmentation method based on an improved U-Net network, comprising the following steps: performing image preprocessing on the UCSB breast image dataset; adding a dilated (atrous) residual module and an attention module to the U-Net network; training the U-Net network according to a set training strategy; establishing evaluation metrics comprising the F1 score, the object-level Dice coefficient, and the Hausdorff distance, and optimizing the network with these metrics to obtain an optimal model; and feeding the cell image to be segmented into the optimal model, which produces a segmentation mask through feature extraction and feature upsampling. By improving a basic segmentation network, the present invention creates a new cell image segmentation method that solves the problem of low precision in automatic thymus image segmentation and improves the accuracy and efficiency of segmentation.

Description

Thymocyte image segmentation method based on an improved U-Net network
Technical field
The invention belongs to the field of image processing and relates to a thymocyte image segmentation method based on an improved U-Net network.
Background technique
In recent years, the incidence of colon cancer has risen steadily; colorectal cancer is the third most common cancer in men and the second most common in women, and about 95% of colorectal cancers are adenocarcinomas. Under normal conditions, a typical gland consists of a lumen region forming an inner tubular structure, surrounded by epithelial nuclei and cytoplasm. Malignant tumors arising from the glandular epithelium, known as adenocarcinomas, are the most common form of cancer. In histopathological examination, gland morphology is widely used to assess several adenocarcinomas, including those of the breast, prostate, and colon. Accurate gland segmentation is a key prerequisite for obtaining reliable morphological statistics, which indicate the invasiveness of a tumor. Previously, gland segmentation was carried out by pathology experts assessing gland structure in biopsy samples, but manual annotation suffers from limited reproducibility, heavy workload, and long turnaround times. With the rise of computational pathology, digitized histology slides are in widespread use, and large-scale histopathological data must be analyzed. Clinical practice therefore places high demands on automatic segmentation methods, so as to improve segmentation efficiency and reliability and to reduce the workload of pathologists.
In the prior art, gland structures in histopathological images are usually analyzed with various hand-crafted features or prior knowledge, such as graph-based methods, polar-coordinate-space random field models, and random polygon models. In recent years, deep learning, owing to its powerful feature-representation ability, has achieved great success in image-recognition tasks in computer vision and has also driven advances in medical image analysis. For example, U-Net achieved excellent performance on the gland segmentation task; although U-Net is an effective and simple model, its limited depth restricts its feature-representation ability. To further improve gland instance segmentation, a deep contour-aware network with a dedicated contour loss function was proposed and obtained the best performance in the MICCAI Gland Segmentation (GlaS) challenge. The prior art also includes frameworks that combine complex multi-channel region and boundary schemes with side supervision to achieve gland instance segmentation.
However, automatic gland segmentation remains a challenging task for several key reasons. First, accurately delineating gland boundaries is essential for extracting morphological measurements, but downsampling in neural networks loses detailed information at object edges, so the upsampled feature maps have low resolution and segmentation precision suffers. Second, the glands to be segmented vary in size and shape; in particular, as the cancer grade increases, gland structures become heterogeneous, which further increases the difficulty of segmentation.
Summary of the invention
To solve the above problems, the present invention creates a new cell image segmentation method by improving a basic segmentation network, solving the problem of low precision in automatic thymus image segmentation and improving the accuracy and efficiency of segmentation.
To achieve the above object, the technical scheme of the present invention is a thymocyte image segmentation method based on an improved U-Net network, comprising the following steps:
performing image preprocessing on the UCSB breast image dataset;
adding a dilated residual module and an attention module to the U-Net network;
training the U-Net network according to a set training strategy;
establishing evaluation metrics comprising the F1 score, the object-level Dice coefficient, and the Hausdorff distance, and optimizing the network with these metrics to obtain an optimal model;
feeding the cell image to be segmented into the optimal model, which produces a segmentation mask through feature extraction and feature upsampling.
Preferably, the image preprocessing includes rotation, cropping, and flipping transforms that yield input images of fixed size.
Preferably, the image preprocessing is a similarity-transformation augmentation method, where the similarity transformation is obtained as follows:
The objective is denoted M1, with the constraint condition:
Minimizing M1 yields the matrix M:
where μs is equal to
Substituting the matrix M into M1 gives the warping function of the similarity transformation:
where Ai depends only on the set p of control points and is obtained by the following formula:
Preferably, the attention module is defined as follows:
where x̂_i^l and x_i^l respectively denote the output and the input, g_i denotes the gating signal provided by high-level contextual information, σ denotes the sigmoid activation function, and Θ_att comprises the linear transformation parameters and biases.
Preferably, the dilated convolution kernels in the dilated residual module are obtained by inserting zeros at different scales into conventional convolution kernels.
Preferably, the set training strategy is end-to-end training: the network randomly crops a 464 × 464 region from the original image as input and outputs the predicted contour mask of the thymus gland. Training runs for 75 epochs with 20 images per batch; the initial learning rate is 0.001, the learning rate of the final classification layer is 0.01, the learning rate is multiplied by 0.1 every 1000 iterations, and a momentum of 0.9 and a weight decay of 0.0005 are used.
Preferably, the F1 score is defined by the following formula:
F1 = 2 × Precision × Recall / (Precision + Recall), with Precision = TP/(TP + FP) and Recall = TP/(TP + FN),
where TP denotes a true thymus gland detected as thymus gland, FP denotes a non-thymus region incorrectly detected as thymus gland, FN denotes a true thymus gland detected as non-thymus, Precision denotes the precision rate, and Recall denotes the recall rate.
Preferably, the object-level Dice coefficient is a set-similarity measure, and the similarity of samples X and Y is calculated by the following formula: Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|).
Preferably, the Hausdorff distance measures the distance between subsets X and Y of a metric space and is obtained by the following formula: H(X, Y) = max{ sup_{x∈X} inf_{y∈Y} d(x, y), sup_{y∈Y} inf_{x∈X} d(x, y) }.
The beneficial effects of the present invention are as follows:
We trained our model on the training set of the UCSB breast dataset and evaluated it on the corresponding test set. Replacing the ordinary convolutions of the traditional U-Net model with dilated convolutions improves performance by 4.8% over the traditional model. After adding the attention module, the model's performance reaches a peak of 89.9%, a considerable improvement over the baseline.
Meanwhile we also compare and analyze with several outstanding parted patterns, including SegNet, FCN-8 and Deeplab-v3 etc., and obtained the result of assessment: only from the point of view of F1score, the property of FCN-8 model and tradition U-Net model Can be relatively poor, reason is that the number of plies of the two models is less, not good enough to the extraction of feature;SegNet model and The number of plies of Deeplab-v3 model is deeper, and performance is also relatively preferable, but on UCSB breast data set, is still not so good as us Model.
The invention proposes an improved U-Net model for segmenting thymocytes. The model performs well on thymocyte segmentation: thanks to the introduced dilated residual module and attention module, it largely resolves the problems of varying gland sizes and of resolution loss caused by downsampling in the segmentation task. Compared with other medical image segmentation models, the present invention not only generates higher-precision segmentation masks in cell segmentation tasks but also segments faster and is more robust. Moreover, the model is general and can easily be applied to other medical image segmentation tasks through training and fine-tuning.
Detailed description of the invention
Fig. 1 is a flow chart of the steps of the thymocyte image segmentation method based on an improved U-Net network according to an embodiment of the present method;
Fig. 2 is a diagram of the concrete operation of the attention module in S20 of the thymocyte image segmentation method based on an improved U-Net network according to an embodiment of the present method.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the present invention and are not intended to limit it.
On the contrary, the present invention covers any substitution, modification, equivalent method, and scheme made within its spirit and scope as defined by the claims. Further, in order to give the public a better understanding of the present invention, some specific details are described in detail below; a person skilled in the art can fully understand the present invention even without these details.
Referring to Fig. 1, which is a flow chart of the embodiment steps of the thymocyte image segmentation method based on an improved U-Net network of the technical solution of the present invention, the method comprises the following steps:
S10: performing image preprocessing on the UCSB breast image dataset;
S20: adding a dilated residual module and an attention module to the U-Net network;
S30: training the U-Net network according to a set training strategy;
S40: establishing evaluation metrics comprising the F1 score, the object-level Dice coefficient, and the Hausdorff distance, and optimizing the network with these metrics to obtain an optimal model;
S50: feeding the cell image to be segmented into the optimal model, which produces a segmentation mask through feature extraction and feature upsampling.
In a specific embodiment, the image preprocessing in S10 includes rotation, cropping, and flipping transforms that yield input images of fixed size; specifically, it may be a similarity-transformation augmentation method, where the similarity transformation is obtained as follows:
The objective is denoted M1, with the constraint condition:
Minimizing M1 yields the matrix M:
where μs is equal to
Substituting the matrix M into M1 gives the warping function of the similarity transformation:
where Ai depends only on the set p of control points and is obtained by the following formula:
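As an illustration, the rotation, cropping, and flipping transforms mentioned above can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation; the function name and parameters are assumptions.

```python
import numpy as np

def augment(img, quarter_turns=1, crop_size=464, flip=True, seed=0):
    # Rotate in 90-degree steps, randomly crop a fixed-size region,
    # and optionally flip horizontally, yielding a fixed-size input image.
    rng = np.random.default_rng(seed)
    out = np.rot90(img, k=quarter_turns)
    h, w = out.shape[:2]
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    out = out[top:top + crop_size, left:left + crop_size]
    if flip:
        out = out[:, ::-1]
    return out

patch = augment(np.zeros((572, 572)), crop_size=464)  # fixed-size 464x464 patch
```

Any combination of these transforms leaves the output size fixed, which is what the downstream network requires.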
Referring to Fig. 2, which is a schematic diagram of the operation of the attention module in S20, the attention module is defined as follows:
where x̂_i^l and x_i^l respectively denote the output and the input, g_i denotes the gating signal provided by high-level contextual information, σ denotes the sigmoid activation function, and Θ_att comprises the linear transformation parameters and biases.
Thymus gland segmentation is a complex task that requires a very deep network to extract meaningful features. We therefore use residual units in the network framework to enable effective gradient propagation. A traditional residual unit can be defined as:
y = f(x, W_i) + x
where x and y are respectively the input and output, W_i are the weights, and f denotes the function W2(σ(W1 x)) with σ the ReLU function. The input x and f are combined by element-wise addition.
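The residual unit above can be sketched numerically. This NumPy sketch uses an assumed fully connected form rather than the patent's convolutional layers; setting the weights to zero makes the branch f vanish, showing how the identity shortcut passes the input through unchanged.

```python
import numpy as np

def residual_unit(x, W1, W2):
    # y = f(x, W) + x, with f(x) = W2 @ relu(W1 @ x); the "+ x" shortcut
    # lets gradients flow directly through very deep networks.
    relu = lambda z: np.maximum(z, 0.0)
    return W2 @ relu(W1 @ x) + x

x = np.array([1.0, -2.0, 3.0, 0.5])
zero = np.zeros((4, 4))
y = residual_unit(x, zero, zero)  # f collapses to 0, so y equals x
```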
Traditional convolutional neural networks (CNNs) enlarge the receptive field by combining max-pooling layers with convolutional layers, but max pooling loses low-level information, which seriously harms accurate segmentation. To mitigate this loss, in addition to traditional residual units we add another kind of residual unit during feature extraction: the dilated residual unit. The difference between the dilated and the conventional residual unit is that the dilated version uses dilated convolution, whose kernels are obtained by inserting zeros at different scales into conventional convolution kernels. Compared with ordinary convolution, dilated convolution attains a larger receptive field without adding parameters and produces feature maps of the same size as the input. In the present invention, we merge dilated convolution into the residual unit simply by replacing each ordinary 3 × 3 convolution with a 3 × 3 dilated convolution. During the initial downsampling of the network we still combine max-pooling layers with ordinary residual units, because excessive use of dilated residual units would sharply increase the model's parameters.
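The zero-insertion construction of a dilated kernel can be sketched as follows; `dilate_kernel` is a hypothetical helper for illustration, not part of the patent.

```python
import numpy as np

def dilate_kernel(kernel, rate):
    # Insert (rate - 1) zeros between kernel entries: the parameter count
    # is unchanged, but the kernel now covers a larger receptive field.
    kh, kw = kernel.shape
    out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1), dtype=kernel.dtype)
    out[::rate, ::rate] = kernel
    return out

k = np.ones((3, 3))
k2 = dilate_kernel(k, rate=2)  # a 3x3 kernel now spans a 5x5 window
```

With rate 2, the nine original weights are spread over a 5 × 5 window, which is exactly why the receptive field grows without any new parameters.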
To handle multi-scale targets, the invention introduces the dilated residual module; however, dilated convolution is a sparse computation and may produce gridding artifacts. The attention module is therefore used to extract accurate pixel-level attention features from the high-level CNN features; this module suppresses activations in irrelevant regions and thus weakens the localization of non-target areas.
In a specific embodiment, a 1 × 1 convolution is applied to the high-level feature g and the low-level feature x^l respectively, to reduce the number of channels of the CNN feature maps. The two feature maps are then fused and passed in turn through a ReLU function, a 1 × 1 convolution, batch normalization (BN), a sigmoid function, and upsampling to obtain the weight map a. Finally, the high-level feature is added to the weighted low-level feature and progressively upsampled. With the AG (attention gate) module added, the model further reduces its localization of non-gland regions and concentrates on learning the thymus structure.
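The gating computation above can be sketched per pixel. This is a simplified NumPy sketch of an additive attention gate under assumed shapes: the 1 × 1 convolutions become matrix multiplications, and batch normalization and upsampling are omitted for brevity.

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    # Project low-level feature x and high-level gating signal g,
    # fuse them with ReLU, then squash to (0, 1) with a sigmoid to get
    # weights that suppress activations in irrelevant regions.
    q = np.maximum(Wx @ x + Wg @ g, 0.0)
    alpha = 1.0 / (1.0 + np.exp(-(psi @ q)))
    return alpha * x

d = 4
x = np.ones(d)
g = np.ones(d)
weighted = attention_gate(x, g, np.eye(d), np.eye(d), np.eye(d))
```

The sigmoid keeps every weight strictly between 0 and 1, so the gate can only attenuate the low-level feature, never amplify it.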
The training strategy set in S30 is end-to-end training: the network randomly crops a 464 × 464 region from the original image as input and outputs the predicted contour mask of the thymus gland. Training runs for 75 epochs with 20 images per batch; the initial learning rate is 0.001, the learning rate of the final classification layer is 0.01, the learning rate is multiplied by 0.1 every 1000 iterations, and a momentum of 0.9 and a weight decay of 0.0005 are used.
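The step learning-rate schedule above can be expressed directly; the function name and keyword parameters are assumptions for illustration.

```python
def learning_rate(iteration, base_lr=0.001, drop=0.1, every=1000):
    # Start at 0.001 and multiply by 0.1 after every 1000 iterations
    # (momentum 0.9 and weight decay 0.0005 would be set on the optimizer).
    return base_lr * drop ** (iteration // every)

lrs = [learning_rate(i) for i in (0, 999, 1000, 2500)]
```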
In S40, the F1 score is used to assess the accuracy of individual thymus gland detections and is defined by the following formula:
F1 = 2 × Precision × Recall / (Precision + Recall), with Precision = TP/(TP + FP) and Recall = TP/(TP + FN),
where TP denotes a true thymus gland detected as thymus gland, FP denotes a non-thymus region incorrectly detected as thymus gland, FN denotes a true thymus gland detected as non-thymus, Precision denotes the precision rate, and Recall denotes the recall rate.
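With the counts defined above, the F1 computation can be sketched as follows (illustrative code with made-up counts):

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall computed from detection counts.
    precision = tp / (tp + fp)  # detections that really are thymus gland
    recall = tp / (tp + fn)     # true glands that were actually detected
    return 2 * precision * recall / (precision + recall)

score = f1_score(tp=80, fp=10, fn=20)  # precision 8/9, recall 4/5
```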
The object-level Dice coefficient is a set-similarity measure used to assess the volume-based accuracy between the thymus gland and the segmentation mask; the similarity of samples X and Y is calculated by the following formula: Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|).
The Hausdorff distance measures the distance between subsets X and Y of a metric space and is used to assess the boundary-based similarity between the thymus gland and the segmentation mask; it is obtained by the following formula: H(X, Y) = max{ sup_{x∈X} inf_{y∈Y} d(x, y), sup_{y∈Y} inf_{x∈X} d(x, y) }.
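The symmetric Hausdorff distance between two boundary point sets can be sketched as follows (illustrative NumPy code):

```python
import numpy as np

def hausdorff(X, Y):
    # Pairwise Euclidean distances, then the larger of the two directed
    # distances max_x min_y d(x, y) and max_y min_x d(x, y).
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0], [4.0, 0.0]])
h = hausdorff(X, Y)  # dominated by the point (4, 0), which is 3 away from X
```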
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (9)

1. A thymocyte image segmentation method based on an improved U-Net network, characterized by comprising the following steps:
performing image preprocessing on the UCSB breast image dataset;
adding a dilated residual module and an attention module to the U-Net network;
training the U-Net network according to a set training strategy;
establishing evaluation metrics comprising the F1 score, the object-level Dice coefficient, and the Hausdorff distance, and optimizing the network with these metrics to obtain an optimal model;
feeding the cell image to be segmented into the optimal model, which produces a segmentation mask through feature extraction and feature upsampling.
2. The method according to claim 1, characterized in that the image preprocessing includes rotation, cropping, and flipping transforms that yield input images of fixed size.
3. The method according to claim 1, characterized in that the image preprocessing is a similarity-transformation augmentation method, where the similarity transformation is obtained as follows:
The objective is denoted M1, with the constraint condition:
Minimizing M1 yields the matrix M:
where μs is equal to
Substituting the matrix M into M1 gives the warping function of the similarity transformation:
where Ai depends only on the set p of control points and is obtained by the following formula:
4. The method according to claim 1, characterized in that the attention module is defined as follows:
where x̂_i^l and x_i^l respectively denote the output and the input, g_i denotes the gating signal provided by high-level contextual information, σ denotes the sigmoid activation function, and Θ_att comprises the linear transformation parameters and biases.
5. The method according to claim 1, characterized in that the dilated convolution kernels in the dilated residual module are obtained by inserting zeros at different scales into conventional convolution kernels.
6. The method according to claim 1, characterized in that the set training strategy is end-to-end training: the network randomly crops a 464 × 464 region from the original image as input and outputs the predicted contour mask of the thymus gland; training runs for 75 epochs with 20 images per batch, the initial learning rate is 0.001, the learning rate of the final classification layer is 0.01, the learning rate is multiplied by 0.1 every 1000 iterations, and a momentum of 0.9 and a weight decay of 0.0005 are used.
7. The method according to claim 1, characterized in that the F1 score is defined by the following formula:
F1 = 2 × Precision × Recall / (Precision + Recall), with Precision = TP/(TP + FP) and Recall = TP/(TP + FN), where TP denotes a true thymus gland detected as thymus gland, FP denotes a non-thymus region incorrectly detected as thymus gland, FN denotes a true thymus gland detected as non-thymus, Precision denotes the precision rate, and Recall denotes the recall rate.
8. The method according to claim 1, characterized in that the object-level Dice coefficient is a set-similarity measure, and the similarity of samples X and Y is calculated by the following formula: Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|).
9. The method according to claim 1, characterized in that the Hausdorff distance measures the distance between subsets X and Y of a metric space and is obtained by the following formula: H(X, Y) = max{ sup_{x∈X} inf_{y∈Y} d(x, y), sup_{y∈Y} inf_{x∈X} d(x, y) }.
CN201810983829.6A 2018-08-28 2018-08-28 Thymocyte image segmentation method based on an improved U-Net network Pending CN109191472A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810983829.6A CN109191472A (en) 2018-08-28 2018-08-28 Thymocyte image segmentation method based on an improved U-Net network


Publications (1)

Publication Number Publication Date
CN109191472A true CN109191472A (en) 2019-01-11

Family

ID=64916280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810983829.6A Pending CN109191472A (en) 2018-08-28 2018-08-28 Thymocyte image segmentation method based on an improved U-Net network

Country Status (1)

Country Link
CN (1) CN109191472A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886971A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image partition method and system based on convolutional neural networks
CN110110617A (en) * 2019-04-22 2019-08-09 腾讯科技(深圳)有限公司 Medical image dividing method, device, electronic equipment and storage medium
CN110288611A (en) * 2019-06-12 2019-09-27 上海工程技术大学 Coronary vessel segmentation method based on attention mechanism and full convolutional neural networks
CN110288605A (en) * 2019-06-12 2019-09-27 三峡大学 Cell image segmentation method and device
CN110415231A (en) * 2019-07-25 2019-11-05 山东浪潮人工智能研究院有限公司 A kind of CNV dividing method based on attention pro-active network
CN110503014A (en) * 2019-08-08 2019-11-26 东南大学 Demographic method based on multiple dimensioned mask perception feedback convolutional neural networks
CN110570350A (en) * 2019-09-11 2019-12-13 深圳开立生物医疗科技股份有限公司 two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium
CN110610480A (en) * 2019-08-02 2019-12-24 成都上工医信科技有限公司 MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN110930397A (en) * 2019-12-06 2020-03-27 陕西师范大学 Magnetic resonance image segmentation method and device, terminal equipment and storage medium
CN110992352A (en) * 2019-12-13 2020-04-10 北京小白世纪网络科技有限公司 Automatic infant head circumference CT image measuring method based on convolutional neural network
CN111028236A (en) * 2019-11-18 2020-04-17 浙江工业大学 Cancer cell image segmentation method based on multi-scale convolution U-Net
CN111062347A (en) * 2019-12-21 2020-04-24 武汉中海庭数据技术有限公司 Traffic element segmentation method in automatic driving, electronic device and storage medium
CN111311548A (en) * 2020-01-20 2020-06-19 清华四川能源互联网研究院 Forming method of aggregate detection model and method for detecting aggregate of stilling pool bottom plate
CN111612790A (en) * 2020-04-29 2020-09-01 杭州电子科技大学 Medical image segmentation method based on T-shaped attention structure
CN111723635A (en) * 2019-03-20 2020-09-29 北京四维图新科技股份有限公司 Real-time scene understanding system
CN112102323A (en) * 2020-09-17 2020-12-18 陕西师范大学 Adherent nucleus segmentation method based on generation of countermeasure network and Caps-Unet network
CN112164069A (en) * 2020-07-29 2021-01-01 南通大学 CT abdominal blood vessel segmentation method based on deep learning
CN112651978A (en) * 2020-12-16 2021-04-13 广州医软智能科技有限公司 Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium
CN113298826A (en) * 2021-06-09 2021-08-24 东北大学 Image segmentation method based on LA-Net network
CN113299374A (en) * 2021-06-03 2021-08-24 广东财经大学 Thyroid nodule ultrasonic image automatic segmentation system based on deep learning
CN113362350A (en) * 2021-07-26 2021-09-07 海南大学 Segmentation method and device for cancer medical record image, terminal device and storage medium
CN113706544A (en) * 2021-08-19 2021-11-26 天津师范大学 Medical image segmentation method based on complete attention convolution neural network
CN113793345A (en) * 2021-09-07 2021-12-14 复旦大学附属华山医院 Medical image segmentation method and device based on improved attention module
CN114092477A (en) * 2022-01-21 2022-02-25 浪潮云信息技术股份公司 Image tampering detection method, device and equipment
CN114332122A (en) * 2021-12-30 2022-04-12 福州大学 Cell counting method based on attention mechanism segmentation and regression
CN114937045A (en) * 2022-06-20 2022-08-23 四川大学华西医院 Hepatocellular carcinoma pathological image segmentation system
CN115239716A (en) * 2022-09-22 2022-10-25 杭州影想未来科技有限公司 Medical image segmentation method based on shape prior U-Net

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243636A (en) * 2015-11-27 2016-01-13 武汉工程大学 Method and system for image deformation based on MRLS-TPS
AU2018100325A4 (en) * 2018-03-15 2018-04-26 Nian, Xilai MR A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks
CN107977963A (en) * 2017-11-30 2018-05-01 北京青燕祥云科技有限公司 Decision method, device and the realization device of Lung neoplasm
CN108053417A (en) * 2018-01-30 2018-05-18 浙江大学 A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature
CN108376558A (en) * 2018-01-24 2018-08-07 复旦大学 A kind of multi-modal nuclear magnetic resonance image Case report no automatic generation method
CN108376392A (en) * 2018-01-30 2018-08-07 复旦大学 A kind of image motion ambiguity removal method based on convolutional neural networks


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIANG-CHIEH CHEN ET AL.: "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
OZAN OKTAY等: "Attention U-Net:Learning Where to Look for the Pancreas", 《ARXIV:1804.03999V3》 *
ZHOU, JUNHAO (周俊昊): "Image-based generation technique for random-stitch embroidery drafts", China Masters' Theses Full-text Database *
ZHOU, LUKE (周鲁科) et al.: "Research on a lung tumor image segmentation algorithm based on the U-Net network", Information & Computer (Theory Edition) *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886971A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image partition method and system based on convolutional neural networks
CN111723635A (en) * 2019-03-20 2020-09-29 北京四维图新科技股份有限公司 Real-time scene understanding system
CN111723635B (en) * 2019-03-20 2023-08-18 北京四维图新科技股份有限公司 Real-time scene understanding system
CN110110617A (en) * 2019-04-22 2019-08-09 腾讯科技(深圳)有限公司 Medical image dividing method, device, electronic equipment and storage medium
CN110110617B (en) * 2019-04-22 2021-04-20 腾讯科技(深圳)有限公司 Medical image segmentation method and device, electronic equipment and storage medium
CN110288611A (en) * 2019-06-12 2019-09-27 上海工程技术大学 Coronary vessel segmentation method based on attention mechanism and full convolutional neural networks
CN110288605A (en) * 2019-06-12 2019-09-27 三峡大学 Cell image segmentation method and device
CN110415231A (en) * 2019-07-25 2019-11-05 山东浪潮人工智能研究院有限公司 A kind of CNV dividing method based on attention pro-active network
CN110610480A (en) * 2019-08-02 2019-12-24 成都上工医信科技有限公司 MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN110503014B (en) * 2019-08-08 2023-04-07 东南大学 People counting method based on multi-scale mask sensing feedback convolutional neural network
CN110503014A (en) * 2019-08-08 2019-11-26 东南大学 People counting method based on multi-scale mask sensing feedback convolutional neural network
CN110570350A (en) * 2019-09-11 2019-12-13 深圳开立生物医疗科技股份有限公司 Two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium
CN111028236A (en) * 2019-11-18 2020-04-17 浙江工业大学 Cancer cell image segmentation method based on multi-scale convolution U-Net
CN110930397B (en) * 2019-12-06 2022-10-18 陕西师范大学 Magnetic resonance image segmentation method and device, terminal equipment and storage medium
CN110930397A (en) * 2019-12-06 2020-03-27 陕西师范大学 Magnetic resonance image segmentation method and device, terminal equipment and storage medium
CN110992352A (en) * 2019-12-13 2020-04-10 北京小白世纪网络科技有限公司 Automatic infant head circumference CT image measuring method based on convolutional neural network
CN111062347A (en) * 2019-12-21 2020-04-24 武汉中海庭数据技术有限公司 Traffic element segmentation method for autonomous driving, electronic device and storage medium
CN111311548A (en) * 2020-01-20 2020-06-19 清华四川能源互联网研究院 Method for building an aggregate detection model and method for detecting aggregate on a stilling basin floor slab
CN111612790A (en) * 2020-04-29 2020-09-01 杭州电子科技大学 Medical image segmentation method based on T-shaped attention structure
CN111612790B (en) * 2020-04-29 2023-10-17 杭州电子科技大学 Medical image segmentation method based on T-shaped attention structure
CN112164069A (en) * 2020-07-29 2021-01-01 南通大学 CT abdominal blood vessel segmentation method based on deep learning
CN112102323B (en) * 2020-09-17 2023-07-07 陕西师范大学 Adherent cell nucleus segmentation method based on generative adversarial network and Caps-Unet network
CN112102323A (en) * 2020-09-17 2020-12-18 陕西师范大学 Adherent cell nucleus segmentation method based on generative adversarial network and Caps-Unet network
CN112651978A (en) * 2020-12-16 2021-04-13 广州医软智能科技有限公司 Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium
CN113299374A (en) * 2021-06-03 2021-08-24 广东财经大学 Thyroid nodule ultrasonic image automatic segmentation system based on deep learning
CN113299374B (en) * 2021-06-03 2023-08-29 广东财经大学 Thyroid nodule ultrasonic image automatic segmentation system based on deep learning
CN113298826A (en) * 2021-06-09 2021-08-24 东北大学 Image segmentation method based on LA-Net network
CN113298826B (en) * 2021-06-09 2023-11-14 东北大学 Image segmentation method based on LA-Net network
CN113362350A (en) * 2021-07-26 2021-09-07 海南大学 Segmentation method and device for cancer medical record image, terminal device and storage medium
CN113362350B (en) * 2021-07-26 2024-04-02 海南大学 Method, device, terminal equipment and storage medium for segmenting cancer medical record image
CN113706544B (en) * 2021-08-19 2023-08-29 天津师范大学 Medical image segmentation method based on complete attention convolutional neural network
CN113706544A (en) * 2021-08-19 2021-11-26 天津师范大学 Medical image segmentation method based on complete attention convolution neural network
CN113793345B (en) * 2021-09-07 2023-10-31 复旦大学附属华山医院 Medical image segmentation method and device based on improved attention module
CN113793345A (en) * 2021-09-07 2021-12-14 复旦大学附属华山医院 Medical image segmentation method and device based on improved attention module
CN114332122A (en) * 2021-12-30 2022-04-12 福州大学 Cell counting method based on attention mechanism segmentation and regression
CN114092477A (en) * 2022-01-21 2022-02-25 浪潮云信息技术股份公司 Image tampering detection method, device and equipment
CN114937045A (en) * 2022-06-20 2022-08-23 四川大学华西医院 Hepatocellular carcinoma pathological image segmentation system
CN115239716A (en) * 2022-09-22 2022-10-25 杭州影想未来科技有限公司 Medical image segmentation method based on shape prior U-Net

Similar Documents

Publication Publication Date Title
CN109191472A (en) Thymocyte image segmentation method based on an improved U-Net network
CN109191471A (en) Pancreatic cell image segmentation method based on an improved U-Net network
CN106056595B (en) Computer-aided diagnosis system for automatically identifying benign and malignant thyroid nodules based on deep convolutional neural networks
US10019656B2 (en) Diagnostic system and method for biological tissue analysis
WO2021203795A1 (en) Automatic pancreas CT segmentation method based on a saliency densely connected dilated convolutional network
CN106846317B (en) Medical image retrieval method based on feature extraction and similarity matching
CN107451615A (en) Papillary thyroid carcinoma ultrasound image recognition method and system based on Faster RCNN
CN110188792A (en) Image feature acquisition method for 3-D prostate MRI images
CN110097974A (en) Nasopharyngeal carcinoma distant metastasis prediction system based on a deep learning algorithm
CN113674253A (en) Automatic rectal cancer CT image segmentation method based on U-Transformer
CN111179237A (en) Image segmentation method and device for liver and liver tumor
Xu et al. Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients
CN112263217B (en) Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method
CN110751636A (en) Fundus image retinal arteriosclerosis detection method based on an improved encoder-decoder network
CN112990214A (en) Medical image feature recognition prediction model
CN113223004A (en) Liver image segmentation method based on deep learning
CN115546605A (en) Training method and device based on image labeling and segmentation model
Razavi et al. MiNuGAN: Dual segmentation of mitoses and nuclei using conditional GANs on multi-center breast H&E images
CN103903015A (en) Cell mitosis detection method
CN115471701A (en) Lung adenocarcinoma histology subtype classification method based on deep learning and transfer learning
CN114581474A (en) Automatic clinical target volume delineation method based on cervical cancer CT images
CN110706217A (en) Deep learning-based lung tumor automatic delineation method
CN117036288A (en) Tumor subtype diagnosis method for whole-slide pathological images
CN116363647A (en) Lung cancer pathological tissue typing system based on deep semantic segmentation network
Lin et al. Curvelet-based classification of prostate cancer histological images of critical Gleason scores

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-01-11