CN108230322A - A fundus feature detection device based on weak sample labeling - Google Patents

A fundus feature detection device based on weak sample labeling

Info

Publication number
CN108230322A
CN108230322A (application CN201810080532.9A)
Authority
CN
China
Prior art keywords
fundus
feature
feature map
center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810080532.9A
Other languages
Chinese (zh)
Other versions
CN108230322B (en)
Inventor
吴健
林志文
郭若乾
吴边
陈为
吴福理
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201810080532.9A priority Critical patent/CN108230322B/en
Publication of CN108230322A publication Critical patent/CN108230322A/en
Application granted granted Critical
Publication of CN108230322B publication Critical patent/CN108230322B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a fundus feature detection device based on weak sample labeling, comprising: a feature extraction module, which extracts the fundus features from an input fundus image and outputs a fundus feature map; a discriminative feature learning module, which performs dimensionality reduction on the input fundus feature map, computes the center of each class of fundus feature, computes the distance from each fundus feature to the center of its class, and, taking convergence of these distances as the objective, iterates until the center of each fundus feature class is determined; a sampling module, which computes the L2 distance from each background-region feature vector in the dimension-reduced fundus feature map to the corresponding fundus feature class center and, if that L2 distance is below a threshold, deletes the background-region feature vector, outputting a sampled feature map; and a feature detection module, which performs feature detection and classification on the sampled feature map and outputs the class prediction probability and position of each fundus feature.

Description

A fundus feature detection device based on weak sample labeling
Technical field
The invention belongs to the field of image processing, and in particular relates to a fundus feature detection device based on weak sample labeling.
Background art
At present, several teams have begun to apply deep learning algorithms to the detection of diabetic retinopathy. Deep learning methods are reported to be more effective than traditional automated diabetic-retinopathy screening methods; they can exploit larger amounts of data and generalize better. Most deep learning detection frameworks are built on the VGG or GoogLeNet network architectures. A deep neural network extracts features by itself, without requiring hand-specified features, and the extracted features are then classified by the network's fully connected layers, so that feature extraction and classification are trained jointly; the trained results are superior to conventional methods. In addition, deep learning methods predict fundus features in less time than conventional methods: a trained network can evaluate an input quickly.
Existing fundus feature detection models require a large number of completely annotated sample images in order to learn the labeled features and then predict their positions and probabilities. If the training samples are mislabeled or incompletely labeled, the network will learn wrong features or learn them incompletely, and may even fail to learn the features at all, so that training yields poor results; existing models are therefore very sensitive to the correctness of the sample labels. At present, diabetic retinopathy is graded by counting, in the fundus image, round spots 10~30 pixels in diameter (grade-1 fundus features) and irregular dark-red regions 50~100 pixels in size (grade-2 fundus features). However, grade-1 and grade-2 fundus features are usually numerous and very small in area, making them difficult to label completely. When training a model, this often prevents the model from learning the correct features and greatly reduces its performance.
Therefore, a fundus lesion detection device that can learn from weakly labeled samples has become an urgent need of both academia and industry.
Summary of the invention
The object of the present invention is to provide a fundus feature detection device based on weak sample labeling. The device adds a discriminative feature learning module and uses the results of that module as the basis for sampling the training samples, preventing unlabeled noisy data from participating in training and corrupting the training result, thereby solving the problem of poor model learning caused by incomplete sample labeling.
To achieve the above object, the present invention provides the following technical solution:
A fundus feature detection device based on weak sample labeling, comprising:
a feature extraction module, which extracts the fundus features from an input fundus image and outputs a fundus feature map;
a discriminative feature learning module, which performs dimensionality reduction on the input fundus feature map, computes the center position of each class of fundus feature, computes the distance from each fundus feature to the center of its class, and, taking convergence of these distances as the objective, iterates until the center of each fundus feature class is determined;
a sampling module, which computes the Euclidean distance from each background-region feature vector in the dimension-reduced fundus feature map to the corresponding fundus feature class center and, if that Euclidean distance is below a threshold, deletes the background-region feature vector, outputting a sampled feature map; and
a feature detection module, which performs feature detection and classification on the sampled feature map and outputs the class prediction probability and position of each fundus feature.
The feature extraction module uses the VGG16 network model. Specifically, it comprises, in order: two convolutional layers with 3×3 kernels and 64 channels, two convolutional layers with 3×3 kernels and 128 channels, three convolutional layers with 3×3 kernels and 256 channels, three convolutional layers with 3×3 kernels and 512 channels, and three further convolutional layers with 3×3 kernels and 512 channels. The VGG network model, one of the most common backbones in detection networks, enables the extracted fundus feature map to describe the feature information of the original fundus image accurately.
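The convolutional stack described above can be sketched as follows. This is an illustrative sketch of the standard VGG16 convolution configuration matching the paragraph (the layer list, names and parameter-count helper are this sketch's own, not the patent's code):

```python
# The 13 VGG16 conv layers described above, all with 3x3 kernels, grouped as
# 2x64, 2x128, 3x256, 3x512, 3x512 output channels.
VGG16_CONV_CHANNELS = [64, 64, 128, 128, 256, 256, 256,
                       512, 512, 512, 512, 512, 512]

def conv_param_count(in_channels, out_channels, kernel_size=3):
    """Number of weights plus biases in one convolutional layer."""
    return out_channels * (in_channels * kernel_size * kernel_size + 1)

def backbone_param_count(in_channels=3):
    """Total parameters of the convolutional stack for an RGB fundus image."""
    total = 0
    for out_channels in VGG16_CONV_CHANNELS:
        total += conv_param_count(in_channels, out_channels)
        in_channels = out_channels
    return total

print(len(VGG16_CONV_CHANNELS))  # 13 convolutional layers
print(backbone_param_count())    # 14714688 weights (~14.7M) in the conv stack
```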
The discriminative feature learning module comprises:
a fully connected layer, which performs dimensionality reduction on the input fundus feature map and outputs a dimension-reduced fundus feature map; the dimensionality reduction shrinks the network parameters, reduces the computation of the subsequent center loss function, lowers computational cost and improves computational efficiency;
a fundus feature center determination module, which first determines, from the network's mapping relationship, the position in the original input fundus image represented by each feature vector in the dimension-reduced fundus feature map; then, from the known positions of the original sample features, judges whether each feature vector in the dimension-reduced feature map corresponds to a feature region or a background region, taking the vector as a discriminative feature if it corresponds to a feature region; and finally averages all discriminative features of each fundus feature class, taking this mean, denoted the fundus feature mean, as the center of that class;
a center loss calculation module, which, from the fundus feature means, computes for each discriminative feature in the dimension-reduced fundus feature map the Euclidean distance to the fundus feature mean of its class.
The loss function of the center loss calculation module is

$$L_C = \frac{1}{2}\sum_{i=1}^{N}\left\| x_i - c_{y_i} \right\|_2^2 \qquad (1)$$

where $x_i$ denotes the i-th discriminative feature of the samples and $c_{y_i}$ denotes the center feature corresponding to the class $y_i$ of the i-th sample. At each iteration, the class centers are updated by

$$\Delta c_j = \frac{\sum_{i=1}^{N}\delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{N}\delta(y_i = j)} \qquad (2)$$
Through continued iteration, the center of each fundus feature class is determined.
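The center determination and center loss described above can be sketched in NumPy as follows (an illustrative sketch under the standard center-loss formulation; the function names, the learning rate `alpha` and the toy data are this sketch's assumptions, not the patent's implementation):

```python
import numpy as np

def center_loss(features, labels, centers):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 over all discriminative features."""
    diffs = features - centers[labels]
    return 0.5 * np.sum(diffs ** 2)

def update_centers(features, labels, centers, alpha=0.5):
    """Move each class center toward the mean of the features assigned to it,
    using the damped update delta_j = sum(c_j - x_i) / (1 + n_j)."""
    new_centers = centers.copy()
    for j in range(len(centers)):
        mask = labels == j
        if mask.any():
            delta = np.sum(new_centers[j] - features[mask], axis=0) / (1 + mask.sum())
            new_centers[j] = new_centers[j] - alpha * delta
    return new_centers

# Toy data: 20 discriminative features of dimension 4 in 2 classes
# (e.g. grade-1 and grade-2 fundus features).
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 4))
labels = rng.integers(0, 2, size=20)
centers = np.zeros((2, 4))
for _ in range(50):  # continued iteration until the class centers settle
    centers = update_centers(features, labels, centers)
loss = center_loss(features, labels, centers)
```

After enough iterations each center approaches the mean of its class, which is the point minimizing the within-class distances.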
The fundus features fall into two classes, grade-1 fundus features and grade-2 fundus features: in the fundus image, round spots 10~30 pixels in diameter are grade-1 fundus features, and irregular dark-red regions 50~100 pixels in size are grade-2 fundus features. The fundus feature center determination module accordingly determines a center for each class, i.e. a grade-1 fundus feature center and a grade-2 fundus feature center.
Specifically, the inputs of the sampling module are the fundus feature map, the dimension-reduced fundus feature map and the center of each fundus feature class, and its output is the sampled feature map. In the sampling module the threshold is 10%, i.e. the 10% of background features closest to a fundus feature are discarded. This threshold setting removes mislabeled lesion regions as far as possible, so that the subsequent fundus feature detection device does not learn wrong features, while not discarding too many correct background features; it is therefore fairly robust.
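The sampling rule above can be sketched as follows (illustrative names and toy data; the 10% drop fraction follows the text, everything else is an assumption of this sketch):

```python
import numpy as np

def sample_background(background_feats, class_centers, drop_fraction=0.10):
    """Discard the background vectors whose L2 distance to the nearest fundus
    feature class center ranks in the closest `drop_fraction`; they are likely
    unlabeled lesions and are excluded from detector training."""
    dists = np.linalg.norm(
        background_feats[:, None, :] - class_centers[None, :, :], axis=2
    ).min(axis=1)  # distance to the nearest class center
    n_drop = int(len(background_feats) * drop_fraction)
    keep_idx = np.argsort(dists)[n_drop:]  # keep all but the closest 10%
    return background_feats[keep_idx]

rng = np.random.default_rng(1)
background = rng.normal(size=(100, 8))  # 100 background feature vectors
centers = rng.normal(size=(2, 8))       # grade-1 and grade-2 class centers
kept = sample_background(background, centers)
print(kept.shape)  # (90, 8): the closest 10% of 100 vectors were dropped
```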
The sampling module thus provides a learning guarantee for the feature detection module: the detection module learns only from feature regions, avoiding confusion between background regions and feature regions during learning and improving the correctness of its detections.
The feature detection module comprises, sequentially connected: a convolutional layer with 3×3 kernels and 1024 channels, a convolutional layer with 1×1 kernels and 1024 channels, a convolutional layer with 1×1 kernels and 256 channels, a convolutional layer with 3×3 kernels and 512 channels, a convolutional layer with 1×1 kernels and 128 channels, a convolutional layer with 3×3 kernels and 256 channels, a convolutional layer with 1×1 kernels and 128 channels, a convolutional layer with 3×3 kernels and 256 channels, a convolutional layer with 1×1 kernels and 128 channels, a convolutional layer with 3×3 kernels and 256 channels, and a convolutional layer with 3×3 kernels and 9 × (4 + 3) channels.
The loss function of the feature detection module is given by formula (3):

$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right) \qquad (3)$$

where α denotes the ratio between the classification loss $L_{conf}$ and the localization loss $L_{loc}$, set to 10 in the present invention, and N denotes the number of fundus images in the training sample.

$L_{loc}(x, l, g)$ denotes the localization loss, where $x_{ij}^{k}$ indicates whether the i-th predicted box matches the j-th ground-truth box for class k, its value of 1 or 0 denoting a match or a mismatch respectively; $l_i^{m}$ denotes the difference between the horizontal (cx) and vertical (cy) center coordinates, width (w) and height (h) of the i-th predicted box and those of its corresponding default box — for example, $l_i^{cx}$ denotes the difference between the center abscissa of the i-th predicted box and that of its default box; $\hat{g}_j^{m}$ denotes the difference between the center coordinates (cx, cy), width (w) and height (h) of the j-th ground-truth box and those of the default box — for example, $\hat{g}_j^{cx}$ denotes the difference between the center abscissa of the j-th ground-truth box and that of the default box; $g_j^{cx}$, $g_j^{cy}$, $g_j^{w}$, $g_j^{h}$ denote the horizontal and vertical center coordinates, width and height of the j-th ground-truth box; and $d_i^{cx}$, $d_i^{cy}$, $d_i^{w}$, $d_i^{h}$ denote the horizontal and vertical center coordinates, width and height of the i-th default box. The localization loss is given by formula (4):

$$L_{loc}(x, l, g) = \sum_{i \in Pos}\ \sum_{m \in \{cx,\,cy,\,w,\,h\}} x_{ij}^{k}\,\mathrm{smooth}_{L1}\!\left(l_i^{m} - \hat{g}_j^{m}\right) \qquad (4)$$

$L_{conf}(x, c)$ denotes the classification loss, where $x_{ij}^{p}$ indicates whether the i-th predicted box matches the j-th ground-truth box for class p, its value of 1 or 0 denoting a match or a mismatch respectively; $c_i^{p}$ denotes the predicted probability that the i-th region belongs to class p; $\hat{c}_i^{p}$ denotes the softmax-normalized form of $c_i^{p}$; and N denotes the number of feature regions. The classification loss is given by formula (5):

$$L_{conf}(x, c) = -\sum_{i \in Pos} x_{ij}^{p} \log\left(\hat{c}_i^{p}\right) - \sum_{i \in Neg} \log\left(\hat{c}_i^{0}\right), \qquad \hat{c}_i^{p} = \frac{\exp\left(c_i^{p}\right)}{\sum_{p}\exp\left(c_i^{p}\right)} \qquad (5)$$
Specifically, before the fundus image is fed into the detection device it is preprocessed as follows: the mean of all input fundus images is subtracted from the input fundus image, which is then divided by the variance of all fundus images. This processing brings the distribution of the input fundus images close to the standard normal distribution, which benefits learning of the whole model.
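The preprocessing step above can be sketched as follows (a minimal sketch; the text says "variance", while the usual standardization divides by the standard deviation — this sketch assumes the latter, as flagged in the comment):

```python
import numpy as np

def preprocess(images):
    """Standardize fundus images by the dataset-wide mean and spread.
    images: array of shape (n_images, H, W, C) with float pixel values.
    Note: dividing by the standard deviation here (assumed), so that the
    result approaches a standard normal distribution."""
    mean = images.mean()
    std = images.std()
    return (images - mean) / std

rng = np.random.default_rng(2)
batch = rng.uniform(0, 255, size=(4, 32, 32, 3))  # toy stand-in for fundus images
normed = preprocess(batch)
print(round(float(normed.mean()), 6))  # 0.0
print(round(float(normed.std()), 6))   # 1.0
```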
Compared with the prior art, the device has the following advantageous effect:
The present invention is designed to solve the problem of incomplete sample labeling, so for weakly labeled samples to be detected it can accurately identify fundus feature regions, with fewer feature regions missed.
Description of the drawings
Fig. 1 is a structural diagram of the fundus feature detection device based on weak sample labeling provided by the embodiment.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and do not limit its scope of protection.
Fig. 1 is a structural diagram of the fundus feature detection device based on weak sample labeling provided by the embodiment. As shown in Fig. 1, the fundus feature detection device provided by this embodiment comprises:
a feature extraction module 101, which extracts the fundus features from the input fundus image and outputs a fundus feature map;
a discriminative feature learning module 102, which specifically comprises:
a fully connected layer 1021, which performs dimensionality reduction on the input fundus feature map and outputs a dimension-reduced fundus feature map; the dimensionality reduction shrinks the network parameters, reduces the computation of the subsequent center loss function, lowers computational cost and improves computational efficiency;
a fundus feature center determination module 1022, which first determines, from the network's mapping relationship, the position in the original input fundus image represented by each feature vector in the dimension-reduced fundus feature map; then, from the known positions of the original sample features, judges whether each feature vector in the dimension-reduced feature map corresponds to a feature region or a background region, taking the vector as a discriminative feature if it corresponds to a feature region; and finally averages all discriminative features of each fundus feature class, taking this mean, denoted the fundus feature mean, as the center of that class;
a center loss calculation module 1023, which, from the fundus feature means, computes for each discriminative feature in the dimension-reduced fundus feature map the Euclidean distance to the fundus feature mean of its class;
wherein the loss function of the center loss calculation module is

$$L_C = \frac{1}{2}\sum_{i=1}^{N}\left\| x_i - c_{y_i} \right\|_2^2$$

where $x_i$ denotes the i-th discriminative feature of the samples and $c_{y_i}$ denotes the center feature corresponding to the class $y_i$ of the i-th sample, and at each iteration the class centers are updated by

$$\Delta c_j = \frac{\sum_{i=1}^{N}\delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{N}\delta(y_i = j)}$$
Through continued iteration, the center of each fundus feature class is determined.
a sampling module 103, which computes the Euclidean distance from each background-region feature vector in the dimension-reduced fundus feature map to the corresponding fundus feature class center and, if that Euclidean distance is below a threshold, deletes the background-region feature vector, outputting a sampled feature map;
a feature detection module 104, which performs feature detection and classification on the sampled feature map and outputs the class prediction probability and position of each fundus feature.
The raw data is first preprocessed: before entering the modules, the mean of the training data is subtracted from each input and the result is divided by the variance of the training data, bringing the distribution of the training data close to the standard normal distribution.
The training data first enters the feature extraction module 101, which consists of all convolutional layers of the VGG16 network model together with their activation functions. VGG16 is chosen because it is one of the most common feature extraction models in detection networks and extracts features comparatively well. This module yields the fundus feature map.
The fundus feature map then enters the discriminative feature learning module 102. The module first reduces the dimensionality of the feature map: the fundus feature map passes through a fully connected layer of feature dimension 128, yielding the dimension-reduced fundus feature map, whose feature dimension drops from the original 512 to 128. Next, from the network's mapping relationship, the position in the original sample input image represented by each feature vector in the dimension-reduced fundus feature map is determined, and from the positions of the original sample fundus features each such feature vector is judged to belong to a fundus region or a background region. Then the feature vectors of each feature class are averaged, giving the feature-vector mean corresponding to each class of fundus feature; this feature-vector mean is the center of that feature. Finally, the Euclidean distance between each fundus-region feature vector in the dimension-reduced feature map and the feature-vector mean of its class is computed; this distance is the loss value of the discriminative feature learning module.
Then, in the sampling module 103, the Euclidean distance from each background-region feature vector in the dimension-reduced fundus feature map to the feature-vector mean corresponding to each feature class is first computed. Next, all the obtained Euclidean distances are sorted in ascending order; the present invention considers that the background feature vectors ranked in the closest 10% are very likely feature regions with missing labels, because they most resemble the features, and the subsequent feature detection module therefore does not use those background regions as training data. The output of the sampling module is the feature map after sampling, denoted the sampled feature map.
Finally, the sampled feature map passes through the feature detection module 104. The module uses a single-stage multi-anchor output, where an anchor is a rectangular region of the input image with a fixed size, position and aspect ratio; the present invention uses 3 aspect ratios (1:1, 1:2, 2:1) and 3 fixed sizes (60, 120, 180), forming 3×3 anchors of different sizes and shapes. The sampled feature map passes through the sequentially connected convolutional layers of the module, so that each position of the sampled feature map has 9×(4+3) outputs, each output containing the coordinate position of an anchor and the probabilities of belonging to each feature class. The present invention finally takes the predicted positions whose class prediction probability exceeds 70% as the positions of the final predicted features.
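The anchor scheme above can be sketched as follows (illustrative code, not the patent's implementation; the assumption that the 3 class scores cover the grade-1 feature, grade-2 feature and background classes is this sketch's, not stated verbatim in the text):

```python
# 3 aspect ratios x 3 fixed sizes give 9 anchor shapes per position, and each
# position predicts 4 box offsets + 3 class scores per anchor: 9 * (4 + 3) = 63.
ASPECT_RATIOS = [(1, 1), (1, 2), (2, 1)]
SIZES = [60, 120, 180]

def anchor_shapes():
    """(width, height) of every anchor, keeping the area close to size^2."""
    shapes = []
    for size in SIZES:
        for rw, rh in ASPECT_RATIOS:
            scale = (rw * rh) ** 0.5
            shapes.append((size * rw / scale, size * rh / scale))
    return shapes

shapes = anchor_shapes()
n_classes = 3  # assumed: grade-1 feature, grade-2 feature, background
outputs_per_position = len(shapes) * (4 + n_classes)
print(len(shapes))           # 9 anchor shapes
print(outputs_per_position)  # 63
```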
The specific embodiments described above explain the technical solution and advantageous effects of the present invention in detail. It should be understood that the above is only the preferred embodiment of the present invention and is not intended to limit it; any modification, supplement or equivalent replacement made within the scope of the principles of the present invention shall be included in its scope of protection.

Claims (5)

1. A fundus feature detection device based on weak sample labeling, characterized by comprising:
a feature extraction module, which extracts the fundus features from an input fundus image and outputs a fundus feature map;
a discriminative feature learning module, which performs dimensionality reduction on the input fundus feature map, computes the center position of each class of fundus feature, computes the distance from each fundus feature to the center of its class, and, taking convergence of these distances as the objective, iterates until the center of each fundus feature class is determined;
a sampling module, which computes the Euclidean distance from each background-region feature vector in the dimension-reduced fundus feature map to the corresponding fundus feature class center and, if that Euclidean distance is below a threshold, deletes the background-region feature vector, outputting a sampled feature map; and
a feature detection module, which performs feature detection and classification on the sampled feature map and outputs the class prediction probability and position of each fundus feature.
2. The fundus feature detection device based on weak sample labeling according to claim 1, characterized in that the feature extraction module uses the VGG16 network model.
3. The fundus feature detection device based on weak sample labeling according to claim 1, characterized in that the discriminative feature learning module comprises:
a fully connected layer, which performs dimensionality reduction on the input fundus feature map and outputs a dimension-reduced fundus feature map, the dimensionality reduction shrinking the network parameters, reducing the computation of the subsequent center loss function, lowering computational cost and improving computational efficiency;
a fundus feature center determination module, which first determines, from the network's mapping relationship, the position in the original input fundus image represented by each feature vector in the dimension-reduced fundus feature map, then, from the known positions of the original sample features, judges whether each feature vector corresponds to a feature region or a background region, taking the vector as a discriminative feature if it corresponds to a feature region, and finally averages all discriminative features of each fundus feature class, taking this mean, denoted the fundus feature mean, as the center of that class; and
a center loss calculation module, which, from the fundus feature means, computes for each discriminative feature in the dimension-reduced fundus feature map the Euclidean distance to the fundus feature mean of its class;
wherein the loss function of the center loss calculation module is

$$L_C = \frac{1}{2}\sum_{i=1}^{N}\left\| x_i - c_{y_i} \right\|_2^2$$

where $x_i$ denotes the i-th discriminative feature of the samples and $c_{y_i}$ denotes the center feature corresponding to the class $y_i$ of the i-th sample, and at each iteration the class centers are updated by

$$\Delta c_j = \frac{\sum_{i=1}^{N}\delta(y_i = j)\,(c_j - x_i)}{1 + \sum_{i=1}^{N}\delta(y_i = j)}$$
Through continued iteration, the center of each fundus feature class is determined.
4. The fundus feature detection device based on weak sample labeling according to claim 1, characterized in that in the sampling module the threshold is 10%, i.e. the 10% of background features closest to a fundus feature are discarded.
5. The fundus feature detection device based on weak sample labeling according to claim 1, characterized in that the feature detection module comprises, sequentially connected: a convolutional layer with 3×3 kernels and 1024 channels; a convolutional layer with 1×1 kernels and 1024 channels; a convolutional layer with 1×1 kernels and 256 channels; a convolutional layer with 3×3 kernels and 512 channels; a convolutional layer with 1×1 kernels and 128 channels; a convolutional layer with 3×3 kernels and 256 channels; a convolutional layer with 1×1 kernels and 128 channels; a convolutional layer with 3×3 kernels and 256 channels; a convolutional layer with 1×1 kernels and 128 channels; a convolutional layer with 3×3 kernels and 256 channels; and a convolutional layer with 3×3 kernels and 9 × (4+3) channels.
CN201810080532.9A 2018-01-28 2018-01-28 Eye ground characteristic detection device based on weak sample mark Active CN108230322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810080532.9A CN108230322B (en) 2018-01-28 2018-01-28 Eye ground characteristic detection device based on weak sample mark


Publications (2)

Publication Number Publication Date
CN108230322A true CN108230322A (en) 2018-06-29
CN108230322B CN108230322B (en) 2021-11-09

Family

ID=62667843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810080532.9A Active CN108230322B (en) 2018-01-28 2018-01-28 Eye ground characteristic detection device based on weak sample mark

Country Status (1)

Country Link
CN (1) CN108230322B (en)



Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170086698A1 (en) * 2013-11-15 2017-03-30 Yibing Wu Life maintenance mode, a brain inhibition therapy and a personal health information platform
CN103870838A (en) * 2014-03-05 2014-06-18 南京航空航天大学 Eye fundus image characteristics extraction method for diabetic retinopathy
CN104463215A (en) * 2014-12-10 2015-03-25 东北大学 Tiny aneurysm occurrence risk prediction system based on retina image processing
CN104573716A (en) * 2014-12-31 2015-04-29 浙江大学 Eye fundus image arteriovenous retinal blood vessel classification method based on breadth first-search algorithm
CN104573712A (en) * 2014-12-31 2015-04-29 浙江大学 Arteriovenous retinal blood vessel classification method based on eye fundus image
US20160217586A1 (en) * 2015-01-28 2016-07-28 University Of Florida Research Foundation, Inc. Method for the autonomous image segmentation of flow systems
US20170112372A1 (en) * 2015-10-23 2017-04-27 International Business Machines Corporation Automatically detecting eye type in retinal fundus images
CN106408564A (en) * 2016-10-10 2017-02-15 北京新皓然软件技术有限责任公司 Depth-learning-based eye-fundus image processing method, device and system
CN106529598A (en) * 2016-11-11 2017-03-22 北京工业大学 Classification method and system based on imbalanced medical image data set
CN106599804A (en) * 2016-11-30 2017-04-26 哈尔滨工业大学 Retina fovea centralis detection method based on multi-feature model
CN106815853A (en) * 2016-12-14 2017-06-09 海纳医信(北京)软件科技有限责任公司 To the dividing method and device of retinal vessel in eye fundus image
CN106920227A (en) * 2016-12-27 2017-07-04 北京工业大学 Based on the Segmentation Method of Retinal Blood Vessels that deep learning is combined with conventional method
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107203758A (en) * 2017-06-06 2017-09-26 哈尔滨理工大学 Diabetes patient's retinal vascular images dividing method
CN107633513A (en) * 2017-09-18 2018-01-26 天津大学 The measure of 3D rendering quality based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KUMAR, S.J.J. et al., "An Improved Medical Decision Support System to Identify the Diabetic Retinopathy Using Fundus Images", Journal of Medical Systems 36 *
YANG, Yi, "Research on Retinal Vessel Segmentation and Arteriovenous Classification Methods", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325942A (en) * 2018-09-07 2019-02-12 电子科技大学 Eye fundus image Structural Techniques based on full convolutional neural networks
CN109325942B (en) * 2018-09-07 2022-03-25 电子科技大学 Fundus image structure segmentation method based on full convolution neural network
CN110473192A (en) * 2019-04-10 2019-11-19 腾讯医疗健康(深圳)有限公司 Digestive endoscope image recognition model training and recognition methods, apparatus and system
CN110309810A (en) * 2019-07-10 2019-10-08 华中科技大学 A kind of pedestrian's recognition methods again based on batch center similarity

Also Published As

Publication number Publication date
CN108230322B (en) 2021-11-09

Similar Documents

Publication Publication Date Title
CN111223553B (en) Two-stage deep transfer learning traditional Chinese medicine tongue diagnosis model
Ying et al. Multi-attention object detection model in remote sensing images based on multi-scale
CN105869178B (en) A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature
CN108399386A (en) Information extracting method in pie chart and device
CN110334578B (en) Weak supervision method for automatically extracting high-resolution remote sensing image buildings through image level annotation
CN108053419A (en) Inhibited and the jamproof multiscale target tracking of prospect based on background
CN109376576A (en) The object detection method for training network from zero based on the intensive connection of alternately update
CN106022232A (en) License plate detection method based on deep learning
CN108256462A (en) A kind of demographic method in market monitor video
CN110689000B (en) Vehicle license plate recognition method based on license plate sample generated in complex environment
CN107038416A (en) A kind of pedestrian detection method based on bianry image modified HOG features
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN108230322A (en) A kind of eyeground feature detection device based on weak sample labeling
CN109508661A (en) A kind of person's of raising one's hand detection method based on object detection and Attitude estimation
CN104463240B (en) A kind of instrument localization method and device
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN106874913A (en) A kind of vegetable detection method
CN114998220A (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN112528845A (en) Physical circuit diagram identification method based on deep learning and application thereof
CN110197113A (en) A kind of method for detecting human face of high-precision anchor point matching strategy
Tu et al. Instance segmentation based on mask scoring R-CNN for group-housed pigs
CN109697727A (en) Method for tracking target, system and storage medium based on correlation filtering and metric learning
CN112084860A (en) Target object detection method and device and thermal power plant detection method and device
CN113989287A (en) Urban road remote sensing image segmentation method and device, electronic equipment and storage medium
Hu et al. A bag of tricks for fine-grained roof extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant