CN110223291A - Method for training a fundus lesion point segmentation network based on a loss function - Google Patents

Method for training a fundus lesion point segmentation network based on a loss function

Info

Publication number
CN110223291A
Authority
CN
China
Prior art keywords
function
segmentation
training
network
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910534317.6A
Other languages
Chinese (zh)
Other versions
CN110223291B (en)
Inventor
郭松
李涛
王恺
康宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University filed Critical Nankai University
Priority to CN201910534317.6A priority Critical patent/CN110223291B/en
Publication of CN110223291A publication Critical patent/CN110223291A/en
Application granted granted Critical
Publication of CN110223291B publication Critical patent/CN110223291B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention discloses a method for training a fundus lesion point segmentation network based on a loss function, which trains a deep segmentation network so that fundus lesion points can be segmented efficiently. According to the result of an indicator function, each negative sample is either retained or discarded: it is retained when the indicator function equals 1, and discarded otherwise. This improves the discriminative power and learning speed of the network, since easy negative samples are discarded with high probability and hard negative samples with low probability; by retaining the hard samples, a large amount of sample-selection time is saved and the network concentrates on learning them. The present invention solves the problems of frequent misclassification and low learning efficiency that the class-balanced cross-entropy loss function causes in segmentation networks, and segments fundus lesion points efficiently.

Description

Method for training a fundus lesion point segmentation network based on a loss function
Technical field
The present invention belongs to the field of neural network technology, and in particular relates to a method for training a fundus lesion point segmentation network based on a loss function.
Background technique
Deep convolutional neural networks, as a class of deep learning models, have achieved state-of-the-art performance on many computer vision tasks such as image classification, object detection and object segmentation. In recent years, semantic segmentation models based on deep learning have been studied extensively and have achieved remarkable results. However, to the best of our knowledge, most existing models concentrate on objects of normal size, such as animals and vehicles; the semantic segmentation of small objects, such as fundus lesion point detection in the medical field, has not been adequately studied. Segmenting small lesion points differs from segmenting objects of normal size: small-object segmentation always faces a class-imbalance problem, which is very common in medical images. For example, the proportion of lesion pixels in a fundus image can be as low as 0.1%. Such extreme imbalance makes loss functions designed for large-object segmentation inapplicable to small-object segmentation, because the network can easily classify all pixels as background and obtain a meaningless accuracy of 99.9%. An intuitive way to address the class-imbalance problem is to assign different weights to different classes; we refer to this method as the class-balanced cross-entropy loss function. Pixels of the minority class are given a high weight, and pixels of the majority class are given a low weight. However, this method does not consider per-sample weights: all negative samples are treated equally and share the same weight. As a result, under the class-balanced cross-entropy loss, negative samples are often misclassified as positive, because the loss of misclassifying a background pixel is much smaller than the loss of misclassifying a positive sample.
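As an illustration of the prior-art weighting described above, the following minimal NumPy sketch (not part of the original disclosure; function name and interface are chosen for illustration) computes a class-balanced cross entropy over a vector of pixel probabilities for a single foreground class:

```python
import numpy as np

def class_balanced_ce(p, y, eps=1e-7):
    """Class-balanced cross entropy: the positive (lesion) class is weighted
    by the fraction of negative pixels and the negative class by the fraction
    of positive pixels, so rare lesion pixels are not drowned out.
    p: predicted foreground probabilities, y: binary labels (1 = lesion)."""
    p = np.clip(p, eps, 1.0 - eps)
    beta = (y == 0).sum() / y.size                 # weight of the positive class
    pos_loss = -beta * np.log(p[y == 1]).sum()
    neg_loss = -(1.0 - beta) * np.log(1.0 - p[y == 0]).sum()
    return pos_loss + neg_loss
```

Note that every background pixel enters with the same weight 1 - β, which is exactly the limitation the present invention addresses.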
No effective solution to the above problem has yet been proposed.
Summary of the invention
In view of the above technical problems in the prior art, the present invention provides a method for training a fundus lesion point segmentation network based on a loss function, which solves the problems of frequent misclassification and low learning efficiency caused by the class-balanced cross-entropy loss function and segments fundus lesion points efficiently.
To solve the above technical problems, the technical solution adopted by the present invention is a method for training a fundus lesion point segmentation network based on a loss function, comprising the following steps:
Step 1: preprocess the IDRiD fundus data set: select a certain number of fundus images as the training set and the test set respectively; downsample every image to a certain resolution; apply data augmentation to the training set; set the hyperparameters of the segmentation network;
Step 2: initialize the weights of every layer of the segmentation network;
Step 3: randomly select a fundus image from the augmented training set, and randomly crop a region of a certain size from it;
Step 4: pass the fundus image through the processing module of the segmentation network to obtain the input of the loss function layer, i.e. the segmentation probability map p and the corresponding annotation y;
Step 5: select a drop function according to the configuration;
Step 6: for each negative sample (background pixel) in the image, determine whether it is retained or discarded according to the result of the indicator function;
Step 7: compute the weight factor β:
β = |Y-| / (|Y+| + |Y-|)
where |Y+| is the number of positive samples, i.e. the number of lesion pixels, and |Y-| is the number of retained negative samples, i.e. the number of background pixels;
Step 8: compute the forward-propagation loss with the following loss function:
L = -β · Σ_{j ∈ Y+} log(p_j) - (1 - β) · Σ_{j ∈ Y-} log(1 - p_j)
where β is the number of retained negative samples divided by the sum of that number and the number of positive samples, computed with the formula in Step 7;
Step 9: compute the gradient information for each sample in the image;
Step 10: backpropagate the gradients of the loss layer and update the weight parameters in the feature processing module of the segmentation network;
Step 11: if the network has not yet converged, or the maximum number of iterations has not been reached, return to Step 3;
Step 12: after training ends, perform microaneurysm segmentation on the fundus images of the test set and compute the PR curve from the segmentation results.
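Steps 5 to 8 above can be sketched as follows. This is an illustrative NumPy implementation (not part of the disclosure) using the linear drop function; it assumes the network forward pass has already produced the flattened probability map p:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_drop(p):
    """Linear drop function: an easy negative (p close to 0) gets a high
    drop probability."""
    return 1.0 - p

def lesion_loss(p, y, eps=1e-7):
    """One forward pass of the proposed loss (Steps 5-8).
    p: flattened segmentation probability map, y: binary annotation."""
    p = np.clip(p, eps, 1.0 - eps)
    neg = (y == 0)
    keep = np.ones(y.shape, dtype=bool)          # positives are always kept
    r = rng.random(y.shape)                      # r ~ U(0, 1), Step 6
    keep[neg] = r[neg] >= p_drop(p[neg])         # indicator function
    n_pos = (y == 1).sum()
    n_neg = (neg & keep).sum()
    beta = n_neg / (n_pos + n_neg)               # weight factor, Step 7
    loss = (-beta * np.log(p[y == 1]).sum()      # Step 8
            - (1.0 - beta) * np.log(1.0 - p[neg & keep]).sum())
    return loss, keep
```

Dropped negatives contribute no loss at all, so the surviving set of negatives shrinks toward the hard (high-probability) background pixels.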
Preferably, in Step 1, data augmentation is applied to the training set by means of rotation and mirroring.
Preferably, in Step 5, the drop function maps the activation probability to a drop probability; according to the drop strength and the computation cost, the loss function provides three drop functions, defined as follows:
Linear drop function: p_drop(p_j) = 1.0 - p_j
Square drop function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic drop function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
A discarded negative sample contributes a loss of 0.
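By way of illustration (not part of the original disclosure), the three drop functions can be written directly in Python:

```python
import math

# The three candidate drop functions; p is the network's foreground
# probability for a negative pixel. An easy negative (p close to 0)
# receives a drop probability close to 1.
def linear_drop(p):
    return 1.0 - p

def square_drop(p):
    return (1.0 - p) ** 2

def log_drop(p):
    # For p close to 1 this value becomes negative, which simply means
    # such hard negatives are never dropped when compared against r in [0, 1).
    return 1.0 + math.log(1.0 - p)
```

The three functions differ only in how aggressively easy negatives are discarded.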
Preferably, in Step 6, the indicator function is as follows:
I(j) = 1 if r ≥ p_drop(p_j), otherwise 0
where r is a random number between 0 and 1 and p_drop(p_j) is the drop function; a value of 1 means the negative sample is retained, otherwise it is discarded.
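A minimal sketch of the indicator function (illustrative only; it assumes the retain/drop decision is drawn independently per pixel) shows that over many draws a negative sample is retained at roughly the rate 1 - p_drop(p_j):

```python
import random

random.seed(0)

def indicator(p_j, p_drop):
    """I(j) = 1 (retain) when r >= p_drop(p_j), else 0 (drop),
    with r drawn uniformly from [0, 1)."""
    return 1 if random.random() >= p_drop(p_j) else 0

linear = lambda p: 1.0 - p  # linear drop function

# An easy negative (p ~ 0.05) should be retained far less often than a
# hard negative (p ~ 0.9):
easy_rate = sum(indicator(0.05, linear) for _ in range(10_000)) / 10_000
hard_rate = sum(indicator(0.90, linear) for _ in range(10_000)) / 10_000
```

The retention rates concentrate near 0.05 and 0.90 respectively, so training focuses on the hard negatives.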
Preferably, in Step 9, the gradient information is computed as follows:
Initialize the gradients of all samples:
for a sample i, the gradient g_i is initialized to p_i - y_i, where p_i is the activation probability and y_i the annotation;
For a positive sample, the gradient is updated as g_i = β × g_i;
For a retained negative sample, the gradient is updated as g_i = (1.0 - β) × g_i;
For a discarded negative sample, the gradient is set to 0.
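These update rules can be sketched as a vectorized NumPy routine (illustrative only; `keep` is assumed to be the boolean retention mask produced in Step 6):

```python
import numpy as np

def sample_gradients(p, y, keep, beta):
    """Per-sample gradients of the loss (Step 9): initialize g_i = p_i - y_i,
    then scale by beta for positives, by (1 - beta) for retained negatives,
    and zero the gradients of dropped negatives."""
    g = p - y                          # initialization (Step 9.1)
    pos = (y == 1)
    g[pos] *= beta                     # positive samples (Step 9.2)
    g[~pos & keep] *= (1.0 - beta)     # retained negatives (Step 9.3)
    g[~pos & ~keep] = 0.0              # dropped negatives (Step 9.4)
    return g
```

Because dropped negatives carry zero gradient, they have no effect on the weight update that follows in Step 10.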
Compared with the prior art, the present invention has the following beneficial effects: it solves the class-imbalance problem in fundus lesion point segmentation, alleviates to a certain extent the misclassification problem of lesion point segmentation networks, accelerates the learning of the network, and can be applied to a variety of deep learning models.
Detailed description of the invention
Fig. 1 is the flow chart of segmentation network training and testing;
Fig. 2 is a schematic diagram of an optional fundus lesion point segmentation network according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the three drop functions;
Fig. 4 compares the PR curves of the fundus microaneurysm segmentation results of different methods;
Fig. 5 compares the PR curves of the fundus hemorrhage point segmentation results of different methods;
Fig. 6a is the expert annotation of fundus microaneurysms;
Fig. 6b is the fundus microaneurysm segmentation map obtained with the class-balanced cross-entropy loss;
Fig. 6c is the fundus microaneurysm segmentation map obtained with the new loss function;
Fig. 7a is the expert annotation of fundus hemorrhage points;
Fig. 7b is the fundus hemorrhage point segmentation map obtained with the class-balanced cross-entropy loss;
Fig. 7c is the fundus hemorrhage point segmentation map obtained with the new loss function.
Specific embodiment
To enable those skilled in the art to better understand the technical solution of the present invention, it is described in detail below with reference to the drawings and specific embodiments.
Embodiment 1
In the related art, such as the class-balanced cross-entropy loss function, minority-class pixels are given a high weight and majority-class pixels a low weight. However, this method does not consider weights between samples: all negative samples are treated equally and share the same weight. As a result, under the class-balanced cross-entropy loss, negative samples are often misclassified as positive. In addition, the training efficiency of this loss function is low. To solve these technical problems, this embodiment therefore proposes a new loss function for training the lesion point segmentation network, so that the network concentrates on learning hard samples, improving the learning efficiency and robustness of the network.
Before introducing the technical solution of the embodiments of the present invention, the lesion point segmentation network used in these embodiments must first be defined. In the embodiments of the present invention, an optional lesion point segmentation network, shown in Fig. 1, consists of three parts: an input module, a processing module and an output module. The input module preprocesses the training data, e.g. scaling and augmentation. The processing module comprises the convolution and pooling operations of the neural network. The output module saves and visualizes the lesion point segmentation results.
In the fundus image microaneurysm segmentation task, the segmentation network of Fig. 1 is trained with the proposed loss function; the specific method comprises the following steps:
Step 1: preprocess the IDRiD fundus data set: 54 fundus images form the training set and 27 images the test set. Because the original images are too large for the computer hardware to process, every image is downsampled to a resolution of 1440 × 960. Data augmentation is applied to the training set by rotation, mirroring and similar operations. Set the hyperparameters of the segmentation network;
Step 2: initialize the weights of every layer of the segmentation network;
Step 3: randomly select a fundus image from the augmented training set, and randomly crop an 800 × 800 region from it;
Step 4: pass the fundus image through the processing module of the segmentation network to obtain the input of the loss function layer, i.e. the segmentation probability map p and the corresponding annotation y;
Step 5: according to the configuration, select one of the linear, square and logarithmic drop functions; the three functions are illustrated in Fig. 3;
Step 6: for each negative sample (background pixel) in the image, determine whether it is retained or discarded according to the result of the indicator function, which is as follows:
I(j) = 1 if r ≥ p_drop(p_j), otherwise 0
where r is a random number between 0 and 1 and p_drop(p_j) is the drop function. The drop function maps the activation probability to a drop probability; according to the drop strength and the computation cost, the loss function provides three drop functions, defined as follows:
Linear drop function: p_drop(p_j) = 1.0 - p_j
Square drop function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic drop function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
A discarded negative sample contributes a loss of 0;
Step 7: compute the weight factor β:
β = |Y-| / (|Y+| + |Y-|)
where |Y+| is the number of positive samples (microaneurysm pixels) and |Y-| is the number of retained negative samples (background pixels).
Step 8: compute the forward-propagation loss with the following loss function:
L = -β · Σ_{j ∈ Y+} log(p_j) - (1 - β) · Σ_{j ∈ Y-} log(1 - p_j)
where β is the number of retained negative samples divided by the sum of that number and the number of positive samples, computed with the formula in Step 7.
Step 9: compute the gradient information for each sample in the image;
Step 9.1: initialize the gradients of all samples:
for a sample i, the gradient g_i is initialized to p_i - y_i, where p_i is the activation probability and y_i the annotation;
Step 9.2: for a positive sample, the gradient is updated as g_i = β × g_i;
Step 9.3: for a retained negative sample, the gradient is updated as g_i = (1.0 - β) × g_i;
Step 9.4: for a discarded negative sample, the gradient is set to 0.
Step 10: backpropagate the gradients of the loss layer and update the weight parameters in the feature processing module of the segmentation network;
Step 11: if the network has not yet converged, or the maximum number of iterations has not been reached, return to Step 3;
Step 12: after training ends, perform microaneurysm segmentation on the fundus images of the test set and compute the PR curve from the segmentation results; the results are shown in Fig. 4. The microaneurysm segmentation probability map on the test set is shown in Fig. 6c.
Table 1: comparison of microaneurysm segmentation results
The results in Table 1 show that, compared with the prior art, the proposed loss function has a clear advantage in microaneurysm segmentation: all three drop functions outperform the class-balanced cross-entropy loss function.
Embodiment 2
In the fundus image hemorrhage point segmentation task, the segmentation network of Fig. 1 is trained with the proposed loss function; the specific method comprises the following steps:
Step 1: preprocess the IDRiD fundus data set: 54 fundus images form the training set and 27 images the test set. Because the original images are too large for the computer hardware to process, every image is downsampled to a resolution of 1440 × 960. Data augmentation is applied to the training set by rotation, mirroring and similar operations. Set the hyperparameters of the segmentation network;
Step 2: initialize the weights of every layer of the segmentation network;
Step 3: randomly select a fundus image from the augmented training set, and randomly crop an 800 × 800 region from it;
Step 4: pass the fundus image through the processing module of the segmentation network to obtain the input of the loss function layer, i.e. the segmentation probability map p and the corresponding annotation y;
Step 5: according to the configuration, select one of the linear, square and logarithmic drop functions; the three functions are illustrated in Fig. 3;
Step 6: for each negative sample (background pixel) in the image, determine whether it is retained or discarded according to the result of the indicator function, which is as follows:
I(j) = 1 if r ≥ p_drop(p_j), otherwise 0
where r is a random number between 0 and 1 and p_drop(p_j) is the drop function. The drop function maps the activation probability to a drop probability; according to the drop strength and the computation cost, the loss function provides three drop functions, defined as follows:
Linear drop function: p_drop(p_j) = 1.0 - p_j
Square drop function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic drop function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
A discarded negative sample contributes a loss of 0;
Step 7: compute the weight factor β:
β = |Y-| / (|Y+| + |Y-|)
where |Y+| is the number of positive samples (hemorrhage point pixels) and |Y-| is the number of retained negative samples (background pixels).
Step 8: compute the forward-propagation loss with the following loss function:
L = -β · Σ_{j ∈ Y+} log(p_j) - (1 - β) · Σ_{j ∈ Y-} log(1 - p_j)
where β is the number of retained negative samples divided by the sum of that number and the number of positive samples, computed with the formula in Step 7.
Step 9: compute the gradient information for each sample in the image;
Step 9.1: initialize the gradients of all samples:
for a sample i, the gradient g_i is initialized to p_i - y_i, where p_i is the activation probability and y_i the annotation;
Step 9.2: for a positive sample, the gradient is updated as g_i = β × g_i;
Step 9.3: for a retained negative sample, the gradient is updated as g_i = (1.0 - β) × g_i;
Step 9.4: for a discarded negative sample, the gradient is set to 0.
Step 10: backpropagate the gradients of the loss layer and update the weight parameters in the feature processing module of the segmentation network;
Step 11: if the network has not yet converged, or the maximum number of iterations has not been reached, return to Step 3;
Step 12: after training ends, perform hemorrhage point segmentation on the fundus images of the test set and compute the PR curve from the segmentation results; the results are shown in Fig. 5. The hemorrhage point segmentation probability map on the test set is shown in Fig. 7c.
Table 2: comparison of hemorrhage point segmentation results
The results in Table 2 show that, compared with the prior art, the proposed loss function has a clear advantage in hemorrhage point segmentation: all three drop functions outperform the class-balanced cross-entropy loss function.
The present invention has been described in detail above through embodiments, but these are only exemplary embodiments of the present invention and should not be considered as limiting its scope, which is defined by the claims. Any similar technical solution designed by those skilled in the art using the technical solutions of the present invention, or under their inspiration, within the essence and protection scope of the present invention, and any changes and improvements made to the scope of application, shall still fall within the protection scope of this patent. It should be noted that, for clarity of presentation, parts that are not directly related to the present invention, as well as components and processes well known to those skilled in the art, have been omitted from the description.

Claims (5)

1. A method for training a fundus lesion point segmentation network based on a loss function, characterized by comprising the following steps:
Step 1: preprocess the IDRiD fundus data set: select a certain number of fundus images as the training set and the test set respectively; downsample every image to a certain resolution; apply data augmentation to the training set; set the hyperparameters of the segmentation network;
Step 2: initialize the weights of every layer of the segmentation network;
Step 3: randomly select a fundus image from the augmented training set, and randomly crop a region of a certain size from it;
Step 4: pass the fundus image through the processing module of the segmentation network to obtain the input of the loss function layer, i.e. the segmentation probability map p and the corresponding annotation y;
Step 5: select a drop function according to the configuration;
Step 6: for each negative sample in the image, determine whether it is retained or discarded according to the result of the indicator function;
Step 7: compute the weight factor β:
β = |Y-| / (|Y+| + |Y-|)
where |Y+| is the number of positive samples, i.e. the number of lesion pixels, and |Y-| is the number of retained negative samples, i.e. the number of background pixels;
Step 8: compute the forward-propagation loss with the following loss function:
L = -β · Σ_{j ∈ Y+} log(p_j) - (1 - β) · Σ_{j ∈ Y-} log(1 - p_j)
where β is the number of retained negative samples divided by the sum of that number and the number of positive samples, computed with the formula in Step 7;
Step 9: compute the gradient information for each sample in the image;
Step 10: backpropagate the gradients of the loss layer and update the weight parameters in the feature processing module of the segmentation network;
Step 11: if the network has not yet converged, or the maximum number of iterations has not been reached, return to Step 3;
Step 12: after training ends, perform microaneurysm segmentation on the fundus images of the test set and compute the PR curve from the segmentation results.
2. The method for training a fundus lesion point segmentation network based on a loss function according to claim 1, characterized in that, in Step 1, data augmentation is applied to the training set by means of rotation and mirroring.
3. The method for training a fundus lesion point segmentation network based on a loss function according to claim 1, characterized in that, in Step 5, the drop function maps the activation probability to a drop probability, and the loss function provides three drop functions according to the drop strength and the computation cost, defined as follows:
Linear drop function: p_drop(p_j) = 1.0 - p_j
Square drop function: p_drop(p_j) = (1.0 - p_j)^2
Logarithmic drop function: p_drop(p_j) = 1.0 + log(1.0 - p_j)
A discarded negative sample contributes a loss of 0.
4. The method for training a fundus lesion point segmentation network based on a loss function according to claim 1, characterized in that, in Step 6, the indicator function is as follows:
I(j) = 1 if r ≥ p_drop(p_j), otherwise 0
where r is a random number between 0 and 1 and p_drop(p_j) is the drop function; a value of 1 means the negative sample is retained, otherwise it is discarded.
5. The method for training a fundus lesion point segmentation network based on a loss function according to claim 1, characterized in that, in Step 9, the gradient information is computed as follows:
Initialize the gradients of all samples:
for a sample i, the gradient g_i is initialized to p_i - y_i, where p_i is the activation probability and y_i the annotation;
for a positive sample, the gradient is updated as g_i = β × g_i;
for a retained negative sample, the gradient is updated as g_i = (1.0 - β) × g_i;
for a discarded negative sample, the gradient is set to 0.
CN201910534317.6A 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function Active CN110223291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910534317.6A CN110223291B (en) 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910534317.6A CN110223291B (en) 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function

Publications (2)

Publication Number Publication Date
CN110223291A true CN110223291A (en) 2019-09-10
CN110223291B CN110223291B (en) 2021-03-19

Family

ID=67814254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910534317.6A Active CN110223291B (en) 2019-06-20 2019-06-20 Network method for training fundus lesion point segmentation based on loss function

Country Status (1)

Country Link
CN (1) CN110223291B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260665A (en) * 2020-01-17 2020-06-09 北京达佳互联信息技术有限公司 Image segmentation model training method and device
CN111666997A (en) * 2020-06-01 2020-09-15 安徽紫薇帝星数字科技有限公司 Sample balancing method and target organ segmentation model construction method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268568A (en) * 2014-09-17 2015-01-07 电子科技大学 Behavior recognition method based on intelligent sub-space networks
CN107679557A (en) * 2017-09-19 2018-02-09 平安科技(深圳)有限公司 Driving model training method, driver's recognition methods, device, equipment and medium
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN108345846A (en) * 2018-01-29 2018-07-31 华东师范大学 A kind of Human bodys' response method and identifying system based on convolutional neural networks
US20190156154A1 (en) * 2017-11-21 2019-05-23 Nvidia Corporation Training a neural network to predict superpixels using segmentation-aware affinity loss
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN109886965A (en) * 2019-04-09 2019-06-14 山东师范大学 The layer of retina dividing method and system that a kind of level set and deep learning combine
KR101953752B1 (en) * 2018-05-31 2019-06-17 주식회사 뷰노 Method for classifying and localizing images using deep neural network and apparatus using the same

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268568A (en) * 2014-09-17 2015-01-07 电子科技大学 Behavior recognition method based on intelligent sub-space networks
CN107679557A (en) * 2017-09-19 2018-02-09 平安科技(深圳)有限公司 Driving model training method, driver's recognition methods, device, equipment and medium
US20190156154A1 (en) * 2017-11-21 2019-05-23 Nvidia Corporation Training a neural network to predict superpixels using segmentation-aware affinity loss
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN108345846A (en) * 2018-01-29 2018-07-31 华东师范大学 A kind of Human bodys' response method and identifying system based on convolutional neural networks
KR101953752B1 (en) * 2018-05-31 2019-06-17 주식회사 뷰노 Method for classifying and localizing images using deep neural network and apparatus using the same
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method
CN109886965A (en) * 2019-04-09 2019-06-14 山东师范大学 The layer of retina dividing method and system that a kind of level set and deep learning combine

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUO, SONG et al.: "Deeply supervised neural network with short connections for retinal vessel segmentation", arXiv *
CHU, JINGHUI et al.: "Fine 3D brain tumor segmentation based on cascaded convolutional networks", Laser & Optoelectronics Progress *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260665A (en) * 2020-01-17 2020-06-09 北京达佳互联信息技术有限公司 Image segmentation model training method and device
CN111260665B (en) * 2020-01-17 2022-01-21 北京达佳互联信息技术有限公司 Image segmentation model training method and device
CN111666997A (en) * 2020-06-01 2020-09-15 安徽紫薇帝星数字科技有限公司 Sample balancing method and target organ segmentation model construction method
CN111666997B (en) * 2020-06-01 2023-10-27 安徽紫薇帝星数字科技有限公司 Sample balancing method and target organ segmentation model construction method

Also Published As

Publication number Publication date
CN110223291B (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN109902715B (en) Infrared dim target detection method based on context aggregation network
CN109272107A (en) A method of improving the number of parameters of deep layer convolutional neural networks
CN104408707B (en) Rapid digital imaging fuzzy identification and restored image quality assessment method
CN108961208A (en) A kind of aggregation leucocyte segmentation number system and method
CN110110709A (en) A kind of red white corpuscle differential counting method, system and equipment based on image procossing
CN114359738B (en) Cross-scene robust indoor people number wireless detection method and system
CN110096994A (en) A kind of small sample PolSAR image classification method based on fuzzy label semanteme priori
CN110569782A (en) Target detection method based on deep learning
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN109102498B (en) Method for segmenting cluster type cell nucleus in cervical smear image
CN104751147A (en) Image recognition method
CN109344851A (en) Image classification display methods and device, analysis instrument and storage medium
CN108664838A (en) Based on the monitoring scene pedestrian detection method end to end for improving RPN depth networks
Ru et al. Speedy performance estimation for neural architecture search
CN110349167A (en) A kind of image instance dividing method and device
CN108122221A (en) The dividing method and device of diffusion-weighted imaging image midbrain ischemic area
CN105718942A (en) Hyperspectral image imbalance classification method based on mean value drifting and oversampling
CN109448854A (en) A kind of construction method of pulmonary tuberculosis detection model and application
CN110223291A (en) A kind of training retinopathy height segmentation network method based on loss function
CN108734200A (en) Human body target visible detection method and device based on BING features
Wollmann et al. DetNet: Deep neural network for particle detection in fluorescence microscopy images
Sun et al. Localizing optic disc and cup for glaucoma screening via deep object detection networks
Shoohi et al. DCGAN for Handling Imbalanced Malaria Dataset based on Over-Sampling Technique and using CNN.
Sasmal et al. A survey on the utilization of Superpixel image for clustering based image segmentation
CN110610488A (en) Classification training and detecting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant