CN108876776A - Classification model generation method, fundus image classification method, and apparatus - Google Patents

Classification model generation method, fundus image classification method, and apparatus

Info

Publication number
CN108876776A
CN108876776A (application CN201810607909.1A)
Authority
CN
China
Prior art keywords
image
fundus
original image
retina
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810607909.1A
Other languages
Chinese (zh)
Other versions
CN108876776B (en)
Inventor
王晓婷
栾欣泽
何光宇
孟健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201810607909.1A priority Critical patent/CN108876776B/en
Publication of CN108876776A publication Critical patent/CN108876776A/en
Application granted granted Critical
Publication of CN108876776B publication Critical patent/CN108876776B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Abstract

Embodiments of the present application disclose a classification model generation method, a fundus image classification method, and corresponding apparatus. The method takes one or more of a fundus original image, the feature vector image corresponding to the fundus original image, and the preprocessed image corresponding to the fundus original image as fundus training images, then trains an initial deep learning model on the fundus training images and their associated retina classification labels to generate a retina classification model. The generated retina classification model can classify the retina type of a fundus image, so that fundus images are classified automatically and quickly; the classification results are free of subjective influence and therefore more accurate. At the same time, using several kinds of images as training images effectively enlarges the number of training images, making the generated retina classification model more accurate.

Description

Classification model generation method, fundus image classification method, and apparatus
Technical field
This application relates to the technical field of image processing, and in particular to a classification model generation method and apparatus, and a fundus image classification method and apparatus.
Background technique
With the development of information acquisition technology and the spread of big data, useful information can be obtained by processing acquired images. For example, schemes already exist that use intelligent terminals to capture images of body parts such as the tongue or the fundus of the eye, which greatly simplifies the collection of information about the human body.
In the prior art, collected fundus images can be handed to a specialist who screens them for retinal lesions, but such manual judgment is highly subjective, hard to quantify, and rather inefficient. The prior art therefore lacks a way to classify the retina type in a fundus image quickly and accurately.
Summary of the invention
In view of this, embodiments of the present application provide a classification model generation method and apparatus, and a fundus image classification method and apparatus, to solve the technical problem that the prior art cannot classify the retina type of a fundus image quickly and accurately.
To solve the above problem, the technical solutions provided by the embodiments of the present application are as follows:
A classification model generation method, the method comprising:
obtaining a fundus original image;
taking one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the preprocessed image corresponding to the fundus original image as fundus training images;
training an initial deep learning model according to the fundus training images and the retina classification labels corresponding to the fundus training images, to generate a retina classification model.
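The three steps above can be sketched as follows. The function and data names are illustrative assumptions, not taken from the patent, and a real implementation would go on to train a deep model rather than just assemble the list:

```python
def build_training_set(originals, feature_vector_images=None, preprocessed_images=None):
    """Assemble fundus training images from any subset of the three sources
    named in the method: originals, feature vector images, and preprocessed
    images.  Each source is a list of (image, retina_label) pairs; derived
    images share the label of the original they came from."""
    training = list(originals)
    for extra in (feature_vector_images, preprocessed_images):
        if extra:
            training.extend(extra)
    return training

# Two originals, each with a derived feature-vector image and a
# preprocessed image carrying the original's label.
originals = [("orig_0", 1), ("orig_1", 3)]
derived = [("fv_0", 1), ("fv_1", 3)]
augmented = [("pre_0", 1), ("pre_1", 3)]
print(len(build_training_set(originals, derived, augmented)))  # 6
```

Any one source alone is also a valid training set, matching the "one or more" wording of the claim.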
In a possible implementation, the generation of the feature vector image corresponding to the fundus original image comprises:
extracting the image feature vector of the fundus original image;
plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image.
In a possible implementation, plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image comprises:
plotting the image feature vector of the fundus original image as an original feature vector image;
performing scaling processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image.
In a possible implementation, the image feature vector includes a scale-invariant feature transform (SIFT) feature vector and a corner detection feature vector.
In a possible implementation, the generation of the preprocessed image corresponding to the fundus original image comprises:
performing scaling, cropping and/or flipping on the fundus original image to generate the preprocessed image corresponding to the fundus original image.
In a possible implementation, training an initial deep learning model according to the fundus training images and the retina classification labels corresponding to the fundus training images to generate a retina classification model comprises:
training the initial deep learning model on a general training image set to generate a general classification model;
training the general classification model according to the fundus training images and the retina classification labels corresponding to the fundus training images, to generate the retina classification model.
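The two-stage scheme, pretraining a general model and then fine-tuning it on fundus data, can be illustrated with a toy classifier whose weights survive from stage one into stage two. Everything here (a perceptron standing in for the deep model, the sample data) is an assumption for illustration only:

```python
class TwoStageClassifier:
    """Toy stand-in for the two-stage scheme: train on a general image
    set first, then continue training the same weights on fundus data."""
    def __init__(self, n_features):
        self.w = [0.0] * n_features

    def train(self, samples, lr=0.1, epochs=20):
        # Perceptron-style update; samples are (features, +1/-1 label) pairs.
        for _ in range(epochs):
            for x, y in samples:
                score = sum(wi * xi for wi, xi in zip(self.w, x))
                if score * y <= 0:  # misclassified: nudge weights toward y
                    self.w = [wi + lr * y * xi for wi, xi in zip(self.w, x)]

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else -1

model = TwoStageClassifier(2)
general = [([1.0, 0.0], 1), ([-1.0, 0.0], -1)]   # "general" image set
fundus = [([1.0, 1.0], 1), ([-1.0, -1.0], -1)]   # fundus training set
model.train(general)   # stage 1: generate the general classification model
model.train(fundus)    # stage 2: fine-tune into the retina classification model
print(model.predict([0.5, 0.5]))  # 1
```

The point of the sketch is only that stage two starts from stage one's weights rather than from scratch, which is what makes a small fundus dataset usable.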
A fundus image classification method, the method comprising:
obtaining a fundus original image to be classified;
inputting one or more of the fundus original image to be classified, the feature vector image corresponding to the fundus original image to be classified, and the preprocessed image corresponding to the fundus original image to be classified into a retina classification model to obtain at least one retina classification result, and determining the retina classification result of the fundus original image to be classified from the at least one retina classification result according to a voting mechanism, the retina classification model being generated according to the classification model generation method described above.
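The voting mechanism over the per-input classification results can be sketched as a simple majority vote. The patent does not fix a tie-breaking rule, so this sketch inherits Counter's insertion-order behaviour (ties fall to the result seen first):

```python
from collections import Counter

def vote(results):
    """Pick the final retina class by majority over the classification
    results returned for the different input images."""
    return Counter(results).most_common(1)[0][0]

# e.g. the original image and its feature vector image both yield class 2,
# while the preprocessed image yields class 5: the voted result is 2.
print(vote([2, 2, 5]))  # 2
```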
In a possible implementation, the generation of the feature vector image corresponding to the fundus original image to be classified comprises:
extracting the image feature vector of the fundus original image to be classified;
plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified.
In a possible implementation, plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified comprises:
plotting the image feature vector of the fundus original image to be classified as an original feature vector image;
performing scaling processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image to be classified.
In a possible implementation, the image feature vector includes a scale-invariant feature transform (SIFT) feature vector and a corner detection feature vector.
In a possible implementation, the generation of the preprocessed image corresponding to the fundus original image to be classified comprises:
performing scaling, cropping and/or flipping on the fundus original image to be classified to generate the preprocessed image corresponding to the fundus original image to be classified.
A classification model generation apparatus, the apparatus comprising:
a first obtaining unit, configured to obtain a fundus original image;
a second obtaining unit, configured to take one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the preprocessed image corresponding to the fundus original image as fundus training images;
a generation unit, configured to train an initial deep learning model according to the fundus training images and the retina classification labels corresponding to the fundus training images, to generate a retina classification model.
In a possible implementation, the generation of the feature vector image corresponding to the fundus original image comprises:
extracting the image feature vector of the fundus original image;
plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image.
In a possible implementation, plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image comprises:
plotting the image feature vector of the fundus original image as an original feature vector image;
performing scaling processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image.
In a possible implementation, the image feature vector includes a scale-invariant feature transform (SIFT) feature vector and a corner detection feature vector.
In a possible implementation, the generation of the preprocessed image corresponding to the fundus original image comprises:
performing scaling, cropping and/or flipping on the fundus original image to generate the preprocessed image corresponding to the fundus original image.
In a possible implementation, the generation unit comprises:
a first generation subunit, configured to train the initial deep learning model on a general training image set to generate a general classification model;
a second generation subunit, configured to train the general classification model according to the fundus training images and the retina classification labels corresponding to the fundus training images, to generate the retina classification model.
A fundus image classification apparatus, the apparatus comprising:
an acquiring unit, configured to obtain a fundus original image to be classified;
an obtaining unit, configured to input one or more of the fundus original image to be classified, the feature vector image corresponding to the fundus original image to be classified, and the preprocessed image corresponding to the fundus original image to be classified into a retina classification model to obtain at least one retina classification result, and to determine the retina classification result of the fundus original image to be classified from the at least one retina classification result according to a voting mechanism, the retina classification model being generated by the classification model generation apparatus described above.
In a possible implementation, the generation of the feature vector image corresponding to the fundus original image to be classified comprises:
extracting the image feature vector of the fundus original image to be classified;
plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified.
In a possible implementation, plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified comprises:
plotting the image feature vector of the fundus original image to be classified as an original feature vector image;
performing scaling processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image to be classified.
In a possible implementation, the image feature vector includes a scale-invariant feature transform (SIFT) feature vector and a corner detection feature vector.
In a possible implementation, the generation of the preprocessed image corresponding to the fundus original image to be classified comprises:
performing scaling, cropping and/or flipping on the fundus original image to be classified to generate the preprocessed image corresponding to the fundus original image to be classified.
A computer-readable storage medium storing instructions that, when run on a terminal device, cause the terminal device to execute the classification model generation method or the fundus image classification method described above.
A computer program product that, when run on a terminal device, causes the terminal device to execute the classification model generation method or the fundus image classification method described above.
It can be seen that the embodiments of the present application have the following advantages:
The embodiments of the present application take one or more of a fundus original image, the feature vector image corresponding to the fundus original image, and the preprocessed image corresponding to the fundus original image as fundus training images, and train an initial deep learning model using the fundus training images and their corresponding retina classification labels to generate a retina classification model. The generated retina classification model can classify the retina type of a fundus image, so that fundus images are classified automatically and quickly; the classification results are free of subjective influence and therefore more accurate. At the same time, using several kinds of images as training images effectively enlarges the number of training images, making the generated retina classification model more accurate.
Detailed description of the invention
Fig. 1 is a flowchart of a classification model generation method provided by an embodiment of the present application;
Fig. 2 is a flowchart of classification model training provided by an embodiment of the present application;
Fig. 3 is a flowchart of the generation of the feature vector image corresponding to a fundus original image provided by an embodiment of the present application;
Fig. 4 is a flowchart of a classification model verification method provided by an embodiment of the present application;
Fig. 5 is a flowchart of a fundus image classification method provided by an embodiment of the present application;
Fig. 6 is a structural diagram of a classification model generation apparatus provided by an embodiment of the present application;
Fig. 7 is a structural diagram of a fundus image classification apparatus provided by an embodiment of the present application.
Specific embodiment
To make the above objects, features, and advantages of the present application clearer and easier to understand, the embodiments of the present application are described in further detail below with reference to the accompanying drawings and specific implementations.
To facilitate understanding of the technical solutions provided by the present application, the research background of these solutions is first briefly explained.
In recent years, with the continuous development of computer technology, increasingly advanced techniques can be applied to acquired images to extract useful information. For example, intelligent terminals such as mobile phones with built-in cameras can be used to capture images of body parts such as the tongue or the eyes, which greatly simplifies the collection of information about the human body.
However, at present collected fundus images can still only be identified and classified by retina type by a specialist. Such manual judgment is highly subjective, hard to quantify, and rather inefficient, and the resulting classification accuracy is not high.
On this basis, the present application proposes a classification model generation method, a fundus image classification method, and corresponding apparatus. A retina classification model is generated by training, and this model can classify the retina type of a fundus image, so that fundus images are classified automatically and quickly; the classification results are free of subjective influence and therefore more accurate.
The classification model generation method provided by the embodiments of the present application is described below with reference to the accompanying drawings.
Referring to Fig. 1, which shows a flowchart of a classification model generation method provided by an embodiment of the present application, the method includes the following steps.
Step 101: obtain a fundus original image.
In practical applications, to classify the retina type of fundus images, a retina classification model must first be generated by training, and the first step of model generation is to obtain fundus original images. Here, a fundus original image is one of a group of raw images used for classification model training; fundus original images can be obtained by photographing the fundus with a dedicated ophthalmoscope device.
The fundus original images can then be used to generate the fundus training images for retina classification model training, so after the fundus original images are obtained, step 102 can be executed.
Step 102: take one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the preprocessed image corresponding to the fundus original image as fundus training images.
In practical applications, a fundus original image obtained in step 101 can not only serve directly as a fundus training image; when the amount of fundus original image data is limited, data augmentation can also be used to derive further kinds of fundus training images from it, such as the corresponding feature vector image and the corresponding preprocessed image, in order to improve the classification accuracy of the generated retina classification model. Using several kinds of images as training images effectively enlarges the amount of training data and thus improves the accuracy of the generated classification model.
For example, suppose 100 fundus original images are obtained. Not only can these 100 images be used as fundus training images; 100 corresponding feature vector images and 100 corresponding preprocessed images can also be generated from them and used as fundus training images. Depending on the actual situation, any one or more of the fundus original images, their corresponding feature vector images, and their corresponding preprocessed images can then be selected as fundus training images for classification model training.
It should be noted that the feature vector image corresponding to a fundus original image can be generated by extracting features from the fundus original image to obtain a vector and then plotting that vector as an image; the vector produced by feature extraction may include a scale-invariant feature transform (SIFT) feature vector and a corner detection feature vector. The preprocessed image corresponding to a fundus original image can be obtained through preprocessing such as scaling, cropping, or flipping. After one or more of the fundus original image, its corresponding feature vector image, and its corresponding preprocessed image are taken as fundus training images, step 103 can be executed. The specific generation of the feature vector image and the preprocessed image corresponding to a fundus original image is described in detail in later embodiments.
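The preprocessing operations named above (scaling, cropping, flipping) can be sketched on an image stored as nested lists of pixel values. Nearest-neighbour resampling here is one possible reading of the patent's "scaling processing"; the function names are illustrative:

```python
def hflip(img):
    """Horizontal flip of an image stored as a list of pixel rows."""
    return [row[::-1] for row in img]

def crop(img, top, left, h, w):
    """Cut out a sub-window, one of the patent's preprocessing options."""
    return [row[left:left + w] for row in img[top:top + h]]

def scale_nn(img, new_h, new_w):
    """Nearest-neighbour rescaling, standing in for scaling processing."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

img = [[1, 2], [3, 4]]
print(hflip(img))              # [[2, 1], [4, 3]]
print(crop(img, 0, 0, 1, 1))   # [[1]]
print(scale_nn(img, 4, 4)[0])  # [1, 1, 2, 2]
```

Each derived image keeps the retina classification label of the original it was produced from, which is how augmentation enlarges the labelled training set for free.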
Step 103: train an initial deep learning model according to the fundus training images and the retina classification labels corresponding to the fundus training images, to generate a retina classification model.
In a specific implementation, after step 102 has taken one or more of the fundus original image, its corresponding feature vector image, and its corresponding preprocessed image as fundus training images, an initial deep learning model can further be trained according to the fundus training images and their corresponding retina classification labels, thereby generating a retina classification model.
Here, every fundus training image has a known retina classification label; the retina classification label corresponding to a fundus training image is the label, assigned in advance, for the retina type shown in the fundus image. For example, the retina classes of fundus images can generally be divided into six categories: retinas showing small bleeding points, retinas showing hemorrhagic patches, retinas showing cotton-wool spots, retinas showing neovascularization, retinas showing fibrous proliferation, and retinas showing retinal detachment. Correspondingly, the retina classification labels of fundus training images can use distinct characters: for example, label 1 identifies a retina showing small bleeding points, label 2 a retina showing hemorrhagic patches, label 3 a retina showing cotton-wool spots, label 4 a retina showing neovascularization, label 5 a retina showing fibrous proliferation, and label 6 a retina showing retinal detachment. It should be noted that the specific retina classes and the label format for each class can be configured according to the actual situation; the embodiments of the present application do not limit this.
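The six-class label scheme in the example above might be encoded as a simple mapping. The numeric labels follow the example in the text, while the mapping itself is a hypothetical encoding, since the paragraph notes the scheme is configurable:

```python
# Hypothetical encoding of the six retina classes listed above; the
# patent leaves the exact label format to the implementer.
RETINA_LABELS = {
    1: "small bleeding points",
    2: "hemorrhagic patches",
    3: "cotton-wool spots",
    4: "neovascularization",
    5: "fibrous proliferation",
    6: "retinal detachment",
}

def label_name(label):
    """Human-readable retina class for a numeric classification label."""
    return RETINA_LABELS[label]

print(label_name(4))  # neovascularization
```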
In an optional implementation of the embodiments of the present application, the initial deep learning model can be a GoogLeNet model, a 22-layer deep network. GoogLeNet replaces fully connected layers with sparsely connected layers, alleviating the problems brought by depth and width and thereby improving the accuracy of the retina classification model's results.
As can be seen from the above embodiment, the present application takes one or more of a fundus original image, its corresponding feature vector image, and its corresponding preprocessed image as fundus training images, and trains an initial deep learning model using the fundus training images and their corresponding retina classification labels to generate a retina classification model. The generated retina classification model can classify the retina type of a fundus image, so that fundus images are classified automatically and quickly; the classification results are free of subjective influence and therefore more accurate. At the same time, using several kinds of images as training images effectively enlarges the number of training images, making the generated retina classification model more accurate.
Referring to Fig. 2, which shows a flowchart of classification model training provided by an embodiment of the present application: during classification model training, the present application first obtains fundus original images. Corner detection (Harris) feature extraction and scale-invariant feature transform (SIFT) feature extraction can then be performed on the fundus original images to generate the corresponding feature vector images, and preprocessing such as scaling, cropping, and flipping can be performed on the fundus original images to obtain the corresponding preprocessed images. One or more of the fundus original images, their corresponding feature vector images, and their corresponding preprocessed images are then combined as fundus training images, and an initial deep learning model such as a GoogLeNet model is trained to generate the retina classification model.
Next, the specific generation of the feature vector image corresponding to a fundus original image in step 102 above is explained.
Referring to Fig. 3, in an optional implementation, the generation of the feature vector image corresponding to a fundus original image includes the following steps.
Step 301: extract the image feature vector of the fundus original image.
In practical applications, to generate the feature vector image corresponding to a fundus original image, as shown in Fig. 2, feature vector extraction must first be performed on the fundus original image. In some possible implementations of the present application, the extracted image feature vector includes a SIFT feature vector and a corner detection feature vector. The specific embodiments of SIFT feature extraction and Harris feature extraction on a fundus original image shown in Fig. 2 are described in turn below.
(1) SIFT feature extraction
SIFT is a descriptor used in the field of image processing. It builds a scale space by convolving the original image with Gaussian kernels and extracts scale-invariant feature points on a difference-of-Gaussians pyramid. The algorithm has a degree of affine invariance, viewpoint invariance, rotation invariance, and illumination invariance, and has therefore found very wide application in image feature extraction.
The SIFT feature extraction algorithm proceeds roughly in three stages:
(1) construction of the difference-of-Gaussians pyramid;
(2) search for feature points;
(3) feature description.
In practical applications, in conjunction with the realization substantially process of above-mentioned SIFT feature extraction algorithm, to eyeground original image into The detailed process for specific each step that row SIFT feature is extracted is as follows:
(1) in the pyramidal building process of the application volume difference of Gaussian, a tool is constructed using group and the structure of layer The pyramid structure of wired sexual intercourse, so as to search characteristic point on continuous Gaussian kernel scale.
(2) in the feature point search process of the application, main committed step is the interpolation of extreme point, because discrete Space in, Local Extremum may not be extreme point truly, and real extreme point may fall in discrete point Gap in.So to carry out interpolation to these gap positions, the coordinate position of extreme point is then sought again.
(3) during the description of the feature of the application, the direction of characteristic point asks method to need in feature vertex neighborhood The gradient direction of point carries out statistics with histogram, chooses the maximum direction of specific gravity in histogram and is characterized principal direction a little, can be with Select an auxiliary direction.When calculating characteristic vector, need that topography rotate along principal direction, then again into neighborhood Histogram of gradients count (4x4x8).
The feature vector of the image can then be obtained by the SIFT feature extraction algorithm, and may be denoted [a1, ..., an].
The algorithm has a degree of affine invariance, viewpoint invariance, rotation invariance and illumination invariance; performing feature extraction on the image in this way helps improve the accuracy of the subsequent classification and recognition.
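For illustration, step (1) above, building one octave of the difference-of-Gaussian pyramid, can be sketched in plain NumPy. The number of scales and the base sigma of 1.6 are assumptions of this sketch (1.6 is a common SIFT default), not values stated in the patent; a full implementation such as OpenCV's SIFT would normally be used.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum to 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: one 1-D convolution per axis
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_pyramid(img, n_scales=4, sigma0=1.6):
    # one octave: successively blurred images, then adjacent differences
    sigmas = [sigma0 * (2 ** (i / n_scales)) for i in range(n_scales + 1)]
    gaussians = [blur(img, s) for s in sigmas]
    return [g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])]

img = np.random.rand(32, 32)
dogs = dog_pyramid(img)
print(len(dogs), dogs[0].shape)  # 4 (32, 32)
```

Extrema of these difference images across position and scale are the candidate feature points that the interpolation of step (2) then refines.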
(2) Harris feature extraction
Harris corner detection is a detection method based on the first-derivative matrix of the image grayscale. The main idea of the detector is local self-similarity (autocorrelation), i.e., the similarity between the image block in a local window and the image blocks seen in the window after small shifts in all directions.
In the neighborhood of a pixel, the derivative matrix describes how the data signal changes. Suppose the block region in the pixel's neighborhood is moved in an arbitrary direction; if the intensity changes sharply, the pixel at the change is a corner point. The 2 x 2 Harris matrix is defined as:

A = Σ(x,y) ω(x, y) · [Cx^2, CxCy; CxCy, Cy^2] = [a, b; b, c]
Wherein, Cx and Cy respectively denote the first derivatives of the intensity at point x = (x, y) in the x and y directions, and ω(x, y) denotes the weight at the corresponding position. Whether a point is a corner is determined by computing the corner response D of the Harris matrix, with the formula:
D = det A - m(trace A)^2 = (ac - b^2) - m(a + c)^2
Wherein, det and trace denote the determinant and trace operators, and m is a constant with a value of 0.04 to 0.06. When the corner response is greater than a set threshold and is a local maximum in the point's neighborhood, the point is taken as a corner.
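The corner response above can be sketched in Python as follows; a simple box filter stands in for the weighting window ω(x, y), and m = 0.05 is taken from the stated 0.04 to 0.06 range (both choices are assumptions of this sketch).

```python
import numpy as np

def harris_response(img, m=0.05, radius=1):
    # image gradients (C_x and C_y in the text) via central differences
    cy, cx = np.gradient(img.astype(float))

    def window_sum(a):
        # box-filter stand-in for the weighting window w(x, y)
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    # entries of the 2x2 Harris matrix A = [[a, b], [b, c]]
    a = window_sum(cx * cx)
    b = window_sum(cx * cy)
    c = window_sum(cy * cy)
    # corner response D = det(A) - m * trace(A)^2 = (ac - b^2) - m(a + c)^2
    return (a * c - b * b) - m * (a + c) ** 2

# synthetic test image: a bright square on a dark background has four corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
D = harris_response(img)
print(D[0, 0], D.max() > 0)  # flat regions score 0; corners respond positively
```

Thresholding D and keeping local maxima, as described in the text, then yields the corner points.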
Therefore, the application can perform feature extraction on the labeled fundus original image through the above Harris algorithm to obtain the corresponding feature vector, which may be denoted [b1, ..., bn].
By the above means, after SIFT feature extraction and Harris feature extraction are performed on the fundus original image, two groups of image feature vectors of the fundus original image, [a1, ..., an] and [b1, ..., bn], can be extracted; step 302 can then be executed.
Step 302: Plot the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image.
During specific implementation, after the image feature vectors of the fundus original image are extracted in step 301, they can further be plotted as the feature vector images corresponding to the fundus original image, for example using the plot function in MATLAB.
In some possible implementations of the application, the realization process of step 302 specifically includes:
Step A: Plot the image feature vector of the fundus original image as an original feature vector image;
Step B: Perform size-variation processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image.
In practical applications, after the image feature vectors of the fundus original image are extracted in step 301, they can be plotted as original feature vector images using the plot function in MATLAB, i.e., each of the two groups of one-dimensional vectors is plotted as an image. It can be understood that the feature vectors of fundus original images of the same class are similar. After the plotting is completed, step B needs to be executed to unify the sizes of the feature images: the plotted images are rescaled, i.e., size-variation processing is performed on the original feature vector images to generate the feature vector images corresponding to the fundus original image, for example adjusting all feature vector images to a uniform 256*256 size.
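The patent draws the vectors with MATLAB's plot function. As an assumed stand-in, the following Python sketch rasterizes a one-dimensional feature vector directly into a fixed 256 x 256 image with NumPy; saving a matplotlib figure at a fixed size would serve the same purpose.

```python
import numpy as np

def vector_to_image(vec, size=256):
    # rasterize a 1-D feature vector as a curve in a size x size image,
    # mimicking plotting the vector and saving the figure at a fixed size
    v = np.asarray(vec, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)    # normalize to [0, 1]
    xs = np.linspace(0, size - 1, len(v)).astype(int)  # horizontal positions
    ys = ((1.0 - v) * (size - 1)).astype(int)          # flip: larger = higher
    img = np.zeros((size, size), dtype=np.uint8)
    img[ys, xs] = 255
    return img

feat = np.sin(np.linspace(0, 6.28, 1000))  # stand-in for [a1, ..., an]
img = vector_to_image(feat)
print(img.shape)  # (256, 256)
```

Because the output size is fixed at 256 x 256, the separate rescaling of step B is already folded into this sketch.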
After SIFT feature extraction and Harris feature extraction are performed on the fundus original image, the two groups of image feature vectors [a1, ..., an] and [b1, ..., bn] can be extracted; plotting them as images then produces the feature vector images corresponding to the fundus original image. These can serve as one kind of fundus training image, thereby increasing the number of fundus training images and improving the classification accuracy of the trained retina classification model.
Next, the specific way of generating the pre-processed image corresponding to the fundus original image in step 102 above is explained.
In an optional embodiment, the generating process of the pre-processed image corresponding to the fundus original image includes:
performing size-variation processing, shear processing and/or flipping processing on the fundus original image to generate the pre-processed image corresponding to the fundus original image.
In practical applications, in order to increase the number of fundus training images and improve the classification accuracy of the generated retina classification model, data augmentation can be used: size-variation processing, shear processing and/or flipping processing are performed on the fundus original images, and the generated pre-processed images are used as fundus training images. For example, suppose 100 fundus original images are obtained. Not only can these 100 images themselves be used as fundus training images, but size-variation processing, shear processing and flipping processing can also each be applied to them, generating 100 corresponding pre-processed images per operation, i.e., 300 pre-processed images in total, which can likewise be used as fundus training images for classification model training, thereby increasing the number of fundus training images.
It can be understood that size-variation processing is performed on the fundus original image to unify the sizes of the feature images, and the size of the image output after shear processing needs to be unified with that of the feature vector images above, for example 256*256; flipping along the vertical axis is then performed and the pre-processed image is output, which can serve as one kind of fundus training image, effectively expanding the amount of training data.
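The three augmentation operations can be sketched as follows. Nearest-neighbor resizing and a simple row-shift shear are simplifications for brevity and are assumptions of this sketch (the patent's own size change uses bilinear interpolation).

```python
import numpy as np

def augment(img, size=(256, 256), shear=0.1):
    # nearest-neighbor resize to the unified size (a simplification;
    # the patent uses bilinear interpolation for the size change)
    h, w = img.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    resized = img[np.ix_(ys, xs)]
    # simple horizontal shear: shift each row proportionally to its index
    sheared = np.stack([np.roll(row, int(shear * i))
                        for i, row in enumerate(resized)])
    flipped = resized[:, ::-1]  # flip along the vertical axis
    return [resized, sheared, flipped]

variants = augment(np.random.rand(100, 100))
print(len(variants), variants[0].shape)  # 3 (256, 256)
```

Applied to each fundus original image, this yields the three pre-processed variants per image described in the example above.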
In an optional implementation, when the application performs size-variation processing on the fundus original image, the method used is the same as the size-variation processing performed on the original feature vector image in step B above to generate the feature vector image corresponding to the fundus original image: the bilinear interpolation algorithm can be used. Mathematically, bilinear interpolation is an extension of linear interpolation to interpolating functions of two variables; its core idea is to perform linear interpolation once in each of the two directions.
Suppose the value of the unknown function f at point P = (x, y) is desired, and the values of f at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1) and Q22 = (x2, y2) are known.
Linear interpolation is first carried out in the x direction, giving:

f(R1) ≈ ((x2 - x)/(x2 - x1))·f(Q11) + ((x - x1)/(x2 - x1))·f(Q21), where R1 = (x, y1);

f(R2) ≈ ((x2 - x)/(x2 - x1))·f(Q12) + ((x - x1)/(x2 - x1))·f(Q22), where R2 = (x, y2).

Linear interpolation is then carried out in the y direction, giving the desired result f(x, y):

f(x, y) ≈ ((y2 - y)/(y2 - y1))·f(R1) + ((y - y1)/(y2 - y1))·f(R2)
If a coordinate system is chosen such that the coordinates of the four known points of f are (0, 0), (0, 1), (1, 0) and (1, 1), then the corresponding interpolation formula simplifies to:
f(x,y)≈f(0,0)(1-x)(1-y)+f(1,0)x(1-y)+f(0,1)(1-x)y+f(1,1)xy
Or, expressed as a matrix operation:

f(x, y) ≈ [1 - x, x] · [f(0,0), f(0,1); f(1,0), f(1,1)] · [1 - y; y]
The result of this interpolation method is usually not linear in position, but the result of the linear interpolation is unrelated to the order of interpolation: performing the interpolation in the y direction first and then in the x direction gives the same result.
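Under the definitions above, the size change by bilinear interpolation can be sketched as a minimal NumPy routine; production code would typically call a library resize function instead.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)   # target sample positions
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]             # fractional offsets
    wx = (xs - x0)[None, :]
    tl = img[np.ix_(y0, x0)]; tr = img[np.ix_(y0, x1)]
    bl = img[np.ix_(y1, x0)]; br = img[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx       # interpolate along x at y0
    bot = bl * (1 - wx) + br * wx       # interpolate along x at y1
    return top * (1 - wy) + bot * wy    # then interpolate along y

small = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
big = bilinear_resize(small, 3, 3)
print(big[1, 1])  # 1.5, the average of the four known corner values
```

The center value 1.5 matches the simplified formula f(0.5, 0.5) with the four corner values 0, 1, 2 and 3.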
After size-variation processing, shear processing and/or flipping processing are performed on the fundus original image in the above way, the pre-processed image corresponding to the fundus original image can be generated. It can serve as one kind of fundus training image, thereby increasing the number of fundus training images and improving the classification accuracy of the trained retina classification model.
In turn, in step 103, one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image can be used as training images, together with their corresponding retina classification labels (e.g., label 1 corresponds to retinas with small bleeding points, label 2 to retinas with blood spots, label 3 to retinas with cotton-wool spots, label 4 to retinas with new vessels, label 5 to retinas with fibrous proliferation, and label 6 to retinas with retinal detachment), to train the GoogLeNet model and thereby generate the retina classification model.
In some possible implementations of the application, the realization process of step 103 above, "training the initial deep learning model according to the fundus training images and the retina classification labels corresponding to the fundus training images to generate the retina classification model", specifically includes:
Step C: Train the initial deep learning model using a general training image set to generate a general classification model;
In practical applications, the initial deep learning training model used by the application is the GoogLeNet model, whose general framework is as follows:
(1) All convolutions, including those inside the Inception modules, use rectified linear units (ReLU);
(2) The input is 224 x 224 RGB images with the mean subtracted;
(3) #3x3reduce and #5x5reduce denote the number of 1x1 filters in the reduction layer preceding the 3x3 and 5x5 convolutions respectively; pool proj denotes the number of 1x1 filters in the projection layer after the built-in max-pooling; both the reduction layers and the projection layers use ReLU;
(4) The network contains 22 layers with parameters (27 layers if pooling layers are counted); the total number of independent building blocks is about 100;
(5) The features produced by the intermediate levels of the network are discriminative, so auxiliary classifiers are added to these layers. These classifiers take the form of small convolutional networks placed on the outputs of Inception (4a) and Inception (4d). During training, their losses are added to the total loss with a discounted weight (discount weight 0.3).
During specific model training, the application first trains the above GoogLeNet model using a general training image set, the ImageNet dataset, and the trained model serves as the general classification model; step D can then be executed.
Step D: Train the general classification model according to the fundus training images and the retina classification labels corresponding to the fundus training images to generate the retina classification model.
In practical applications, after the general classification model is generated in step C, one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image can further be used as training images, together with their corresponding retina classification labels (e.g., label 1 corresponds to retinas with small bleeding points, label 2 to retinas with blood spots, label 3 to retinas with cotton-wool spots, label 4 to retinas with new vessels, label 5 to retinas with fibrous proliferation, and label 6 to retinas with retinal detachment), to train the general classification model and thereby generate the retina classification model.
Wherein, the application determined through many experiments that when training on the fundus training images, asynchronous stochastic gradient descent is used with momentum 0.9, and the learning rate is decreased by 4% every 8 epochs. The patch size for image sampling ranges from 8% to 100% of the image, and the aspect ratio is chosen between 3/4 and 4/3, so that photometric distortion helps reduce over-fitting; the bilinear interpolation method is also used to adjust the image size in conjunction with changes to the other hyper-parameters.
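The stated schedule, a 4% decrease in the learning rate every 8 epochs, can be written as a small helper; the base learning rate of 0.01 is an assumption, since the patent does not state the initial value.

```python
def learning_rate(epoch, base_lr=0.01):
    # decrease the rate by 4% every 8 epochs, i.e. multiply by 0.96
    # once per completed block of 8 epochs (base_lr is an assumption)
    return base_lr * (0.96 ** (epoch // 8))

print(learning_rate(0))
print(learning_rate(8))
print(learning_rate(16))
```

Epochs 0 through 7 share the base rate, epochs 8 through 15 use 96% of it, and so on.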
From the above it can be seen that the GoogLeNet model used by the application is a 22-layer deep network in which fully connected layers can be replaced with sparsely connected layers, solving the problem of limited depth and width, which in turn can improve the accuracy of the retina classification model's results.
Through the foregoing embodiment, the retina classification model can be generated by training with the fundus training images; further, fundus verification images can be used to verify the generated retina classification model.
The classification model verification method provided by the embodiments of the application is described below with reference to the drawings.
Referring to Fig. 4, which shows a flow chart of a classification model verification method provided by the embodiments of the application, as shown in Fig. 4, the method includes:
Step 401: Obtain a fundus original image.
Step 402: Use one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image as fundus verification images.
It should be noted that steps 401 to 402 are similar to steps 101 to 102 in the above embodiment; for related details refer to the explanation of the above embodiment, which is not repeated here.
Step 403: Input the fundus verification image into the retina classification model to obtain the retina classification result of the fundus verification image.
During specific implementation, after one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image are taken as fundus verification images in step 402, the fundus verification images can further be input into the retina classification model to obtain the retina classification result of the fundus verification image, after which step 404 can be executed.
Specifically, the fundus verification images can be input into the retina classification model to obtain at least one retina classification result, and the retina classification result of the fundus verification image is determined from the at least one retina classification result according to a voting mechanism. A detailed description of the voting mechanism is given in a subsequent embodiment.
Step 404: When the retina classification result of the fundus verification image is inconsistent with the retina classification label corresponding to the fundus verification image, reuse the fundus verification image as a fundus training image and update the retina classification model.
In practical applications, the retina classification result of the fundus verification image is obtained in step 403. When this result is inconsistent with the retina classification label corresponding to the fundus verification image, the fundus verification image can be reused as a fundus training image to update the retina classification model. For example, suppose that among the retina classification labels, label 1 corresponds to retinas with small bleeding points, and that after a fundus verification image of a retina with small bleeding points is input into the retina classification model, the retina classification label obtained is label 2. This indicates that the retina classification result of the fundus verification image is inconsistent with its corresponding retina classification label, so this fundus verification image can be reused as a fundus training image to update the retina classification model and improve the classification accuracy of the retina classification model.
Through the foregoing embodiment, fundus verification images can be used to effectively verify the retina classification model; when the retina classification result of a fundus verification image is inconsistent with its corresponding retina classification label, the retina classification model can be promptly adjusted and updated, which helps improve the classification precision and accuracy of the classification model.
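The update rule of step 404, reusing any verification image whose predicted label disagrees with its retina classification label, can be sketched as a small helper; the predictor interface and the toy data here are assumptions of this sketch.

```python
def collect_for_update(predict, verification_set):
    # return the (image, label) pairs whose prediction disagrees with the
    # retina classification label, to be reused as training images
    return [(img, label) for img, label in verification_set
            if predict(img) != label]

# toy predictor that always answers label 2 (a stand-in for the model)
always_2 = lambda img: 2
samples = [("img_a", 1), ("img_b", 2), ("img_c", 3)]
print(collect_for_update(always_2, samples))  # [('img_a', 1), ('img_c', 3)]
```

The returned pairs would then be appended to the fundus training set before the model is retrained or fine-tuned.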
The above is a specific implementation of the classification model generation method provided by the embodiments of the application. Based on the retina classification model in the above embodiment, the embodiments of the application also provide a fundus image classification method.
Referring to Fig. 5, which shows a flow chart of a fundus image classification method provided by the embodiments of the application, as shown in Fig. 5, the method includes:
Step 501: Obtain a fundus original image to be classified.
In practical applications, the fundus original image obtained can be classified based on the retina classification model generated in the above embodiment. In the classification process, the fundus original image to be classified must first be obtained; for related details refer to the content described in step 101 of the above embodiment, which is not repeated here. After the fundus original image to be classified is obtained, since data augmentation was used in the training process of the retina classification model, data augmentation can likewise be used in the process of obtaining the retina classification result: the fundus original image can be used to generate several kinds of fundus images to be classified, such as the feature vector image corresponding to the fundus original image and the pre-processed image corresponding to the fundus original image, after which step 502 can be executed.
Step 502: Input one or more of the fundus original image to be classified, the feature vector image corresponding to the fundus original image to be classified, and the pre-processed image corresponding to the fundus original image to be classified into the retina classification model to obtain at least one retina classification result, and determine the retina classification result of the fundus original image to be classified from the at least one retina classification result according to a voting mechanism.
In practical applications, after the fundus original image to be classified, its corresponding feature vector image and its corresponding pre-processed image are obtained via step 501, one or more of these fundus images to be classified can further be input into the retina classification model to obtain at least one retina classification result. It can be understood that the types of fundus images input need to be consistent with the data types of the fundus training images used to train the retina classification model. For example, if the fundus training images used to generate the retina classification model include fundus original images, then the input fundus images to be classified should include the fundus original image; similarly, if they include the feature vector images corresponding to fundus original images, then the input should include the feature vector image corresponding to the fundus original image to be classified; likewise, if they include the pre-processed images corresponding to fundus original images, then the input should include the pre-processed image corresponding to the fundus original image to be classified, and so on.
Inputting the fundus original image to be classified into the retina classification model then yields one retina classification result, inputting its corresponding feature vector image yields another, and inputting its corresponding pre-processed image yields yet another. These retina classification results may or may not be identical, so the final retina classification result of the fundus original image to be classified needs to be determined through the voting mechanism.
That is, after the at least one retina classification result is obtained, the retina classification result of the fundus original image to be classified can further be determined from these results according to the voting mechanism. The voting mechanism refers to selecting the result that occurs most often among the retina classification results as the final retina classification result of the fundus original image to be classified, or, when the most frequent retina classification result is not unique, selecting the one with the highest classification recognition accuracy as the final result. The classification recognition accuracy can be output by the retina classification model during the classification process.
For example, when the fundus original image to be classified, its corresponding feature vector image and its corresponding pre-processed image are each subjected to retina classification recognition and the obtained retina classification results are class 1, class 1 and class 2 respectively, the final retina classification result of the fundus image to be classified is class 1. As another example, when these three fundus images to be classified yield retina classification results of class 1, class 2 and class 3 with recognition accuracies of 80%, 85% and 90% respectively, then according to the voting mechanism, the retina classification result with the highest classification recognition accuracy (class 3) is determined as the retina classification result of the fundus original image to be classified.
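The voting mechanism described above can be sketched as follows: each input image contributes a (label, accuracy) pair, the majority label wins, and ties fall back to the prediction with the highest recognition accuracy.

```python
from collections import Counter

def vote(results):
    # results: list of (predicted_label, accuracy) pairs, one per input image
    counts = Counter(label for label, _ in results)
    ranked = counts.most_common()
    top_count = ranked[0][1]
    tied = [label for label, c in ranked if c == top_count]
    if len(tied) == 1:
        return tied[0]  # a unique majority wins
    # tie: fall back to the prediction with the highest recognition accuracy
    return max((r for r in results if r[0] in tied), key=lambda r: r[1])[0]

print(vote([(1, 0.80), (1, 0.85), (2, 0.90)]))  # 1 (majority)
print(vote([(1, 0.80), (2, 0.85), (3, 0.90)]))  # 3 (tie broken by accuracy)
```

The two calls mirror the two worked examples in the text: a clear majority for class 1, and a three-way tie resolved in favor of the 90%-accuracy result.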
Wherein, the retina classification model is generated according to the classification model generation method in the above embodiment.
In some possible implementations of the application, the generating process of the feature vector image corresponding to the fundus original image to be classified includes:
extracting the image feature vector of the fundus original image to be classified;
plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified.
It should be noted that the specific implementation of this process can refer to the related description of steps 301 to 302 above, which is not repeated here.
In some possible implementations of the application, the process of plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified specifically includes:
plotting the image feature vector of the fundus original image to be classified as an original feature vector image;
performing size-variation processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image to be classified.
It should be noted that the specific implementation of this process can refer to the related description of steps A to B above, which is not repeated here.
In some possible implementations of the application, the image feature vector includes a scale-invariant feature transform feature vector and a corner detection feature vector.
In some possible implementations of the application, the generating process of the pre-processed image corresponding to the fundus original image to be classified includes:
performing size-variation processing, shear processing and/or flipping processing on the fundus original image to be classified to generate the pre-processed image corresponding to the fundus original image to be classified.
It should be noted that the specific implementation of this process can refer to the related description of the above embodiment, which is not repeated here.
In some possible implementations of the application, the size-variation processing in the application uses the bilinear interpolation algorithm; its specific implementation can refer to the related description of the above embodiment, which is not repeated here.
As can be seen from the above embodiment, the application uses one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image as fundus training images, and trains the initial deep learning model with the fundus training images and their corresponding retina classification labels to generate the retina classification model. The generated retina classification model can classify the retina type of a fundus image, thereby realizing automatic and rapid classification of the retina type of fundus images; the classification result is free of subjective influence and is also more accurate. Meanwhile, using multiple kinds of images as training images effectively expands the number of training images, making the generated retina classification model more accurate.
Referring to Fig. 6, the application also provides a classification model generating device embodiment, which may include:
a first acquisition unit 601, configured to obtain a fundus original image;
a second acquisition unit 602, configured to use one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image as fundus training images;
a generation unit 603, configured to train an initial deep learning model according to the fundus training images and the retina classification labels corresponding to the fundus training images to generate a retina classification model.
In some possible implementations of the application, the generating process of the feature vector image corresponding to the fundus original image includes:
extracting the image feature vector of the fundus original image;
plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image.
In some possible implementations of the application, the plotting of the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image includes:
plotting the image feature vector of the fundus original image as an original feature vector image;
performing size-variation processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image.
In some possible implementations of the application, the image feature vector includes a scale-invariant feature transform feature vector and a corner detection feature vector.
In some possible implementations of the application, the generating process of the pre-processed image corresponding to the fundus original image includes:
performing size-variation processing, shear processing and/or flipping processing on the fundus original image to generate the pre-processed image corresponding to the fundus original image.
In some possible implementations of the application, the size-variation processing uses the bilinear interpolation algorithm.
In some possible implementations of the application, the generation unit 603 includes:
a first generating subunit, configured to train the initial deep learning model using a general training image set to generate a general classification model;
a second generating subunit, configured to train the general classification model according to the fundus training images and the retina classification labels corresponding to the fundus training images to generate the retina classification model.
As can be seen from the above embodiment, the application uses one or more of the fundus original image, the feature vector image corresponding to the fundus original image, and the pre-processed image corresponding to the fundus original image as fundus training images, and trains the initial deep learning model with the fundus training images and their corresponding retina classification labels to generate the retina classification model. The generated retina classification model can classify the retina type of a fundus image, thereby realizing automatic and rapid classification of the retina type of fundus images; the classification result is free of subjective influence and is also more accurate. Meanwhile, using multiple kinds of images as training images effectively expands the number of training images, making the generated retina classification model more accurate.
Referring to Fig. 7, the application also provides a fundus image classification device embodiment, which may include:
an acquiring unit 701, configured to obtain a fundus original image to be classified;
an obtaining unit 702, configured to input one or more of the fundus original image to be classified, the feature vector image corresponding to the fundus original image to be classified, and the pre-processed image corresponding to the fundus original image to be classified into a retina classification model to obtain at least one retina classification result, and to determine the retina classification result of the fundus original image to be classified from the at least one retina classification result according to a voting mechanism, the retina classification model being generated by the above classification model generating device.
In some possible implementations of the present application, the generating process of the feature vector image corresponding to the fundus original image to be classified includes:
extracting an image feature vector of the fundus original image to be classified;
plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified.
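The disclosure leaves open exactly how an extracted feature vector is "plotted" as an image. One plausible reading is to rasterize detected keypoints onto a blank canvas the same size as the fundus image; the sketch below assumes keypoints are already available as (row, column, strength) triples from some detector, and all names are illustrative.

```python
import numpy as np

def keypoints_to_feature_image(keypoints, height, width):
    """Rasterize (row, col, strength) keypoints into a single-channel
    feature vector image that can be fed to the model alongside the
    original fundus photograph."""
    canvas = np.zeros((height, width), dtype=np.float32)
    for row, col, strength in keypoints:
        if 0 <= row < height and 0 <= col < width:
            # keep the strongest response when keypoints coincide
            canvas[row, col] = max(canvas[row, col], strength)
    return canvas
```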
In some possible implementations of the present application, the plotting the image feature vector of the fundus original image to be classified as the feature vector image corresponding to the fundus original image to be classified includes:
plotting the image feature vector of the fundus original image to be classified as an original feature vector image;
performing scaling processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image to be classified.
In some possible implementations of the present application, the image feature vector includes a scale-invariant feature transform (SIFT) feature vector and a corner detection feature vector.
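SIFT and corner detection are named only at the feature level; no specific detector is prescribed. As one concrete (and deliberately simple) example of a corner-detection feature, the Harris corner response can be computed from image gradients and a 3×3 box-summed structure tensor. The NumPy sketch below is an illustration, not the patent's method.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    2x2 structure tensor summed over a 3x3 window around each pixel.
    Corners give R > 0, pure edges give R < 0, flat regions give R ~ 0."""
    iy, ix = np.gradient(img.astype(float))  # central-difference gradients

    def box3(a):
        # 3x3 box sum via zero-padding and shifted slices (no SciPy needed)
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2
```

On a synthetic bright square, the response comes out positive at the square's corners and negative along the middle of its edges, which is the behaviour a corner detector should have.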
In some possible implementations of the present application, the generating process of the preprocessed image corresponding to the fundus original image to be classified includes:
performing scaling processing, shearing processing, and/or flipping processing on the fundus original image to be classified to generate the preprocessed image corresponding to the fundus original image to be classified.
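The flipping and shearing transforms above can be sketched with plain NumPy. The integer shear (each row rolled proportionally to its index, with wrap-around) is a deliberately crude stand-in for a properly interpolated shear, and the function name `augment` is illustrative only.

```python
import numpy as np

def augment(img, shear_px=2):
    """Produce preprocessed variants of a fundus image: horizontal flip,
    vertical flip, and a crude integer horizontal shear in which row i is
    rolled by i * shear_px // height pixels (wrap-around, no interpolation)."""
    height = img.shape[0]
    flipped_h = img[:, ::-1]   # mirror left-right
    flipped_v = img[::-1, :]   # mirror top-bottom
    sheared = np.stack([np.roll(row, (i * shear_px) // height)
                        for i, row in enumerate(img)])
    return [flipped_h, flipped_v, sheared]
```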
In some possible implementations of the present application, the scaling processing uses a bilinear interpolation algorithm.
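Bilinear interpolation itself is standard: each output pixel is mapped back to fractional source coordinates and blended from the four surrounding source pixels, weighted by proximity. A self-contained sketch (align-corners convention, chosen here for simplicity):

```python
def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D list/array of pixel values with bilinear interpolation,
    mapping output corners onto input corners (align-corners convention)."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # fractional source coordinates of this output pixel
            y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            # blend the four neighbours by area weights
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out
```

For instance, resizing [[0, 2], [4, 6]] to 3×3 places the average of the four corners, 3.0, at the centre pixel.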
In addition, an embodiment of the present application further provides a computer-readable storage medium having instructions stored therein, wherein when the instructions are run on a terminal device, the terminal device is caused to execute the above classification model generating method or the above fundus image classification method.
An embodiment of the present application further provides a computer program product, wherein when the computer program product is run on a terminal device, the terminal device is caused to execute the above classification model generating method or the above fundus image classification method.
As can be seen from the above embodiments, the present application uses one or more of a fundus original image, a feature vector image corresponding to the fundus original image, and a preprocessed image corresponding to the fundus original image as fundus training images, and trains an initial deep learning model with the fundus training images and their corresponding retina classification labels to generate a retina classification model. The generated retina classification model can classify the retina type of a fundus image, so that fundus images are classified automatically and rapidly; the classification result is free of subjective influence and is therefore more accurate. Meanwhile, using multiple kinds of images as training images effectively expands the number of training images, making the generated retina classification model more accurate.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. As for the system or apparatus disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant parts may refer to the description of the method.
It should be understood that in this application, "at least one (item)" refers to one or more, and "multiple" refers to two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: only A exists, only B exists, and both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following (items)" or a similar expression refers to any combination of these items, including any combination of a single item or plural items. For example, "at least one of a, b, or c" may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or multiple.
It should also be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium well known in the technical field.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A classification model generating method, wherein the method comprises:
acquiring a fundus original image;
using one or more of the fundus original image, a feature vector image corresponding to the fundus original image, and a preprocessed image corresponding to the fundus original image as a fundus training image;
training an initial deep learning model according to the fundus training image and a retina classification label corresponding to the fundus training image, to generate a retina classification model.
2. The method according to claim 1, wherein the generating process of the feature vector image corresponding to the fundus original image comprises:
extracting an image feature vector of the fundus original image;
plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image.
3. The method according to claim 2, wherein the plotting the image feature vector of the fundus original image as the feature vector image corresponding to the fundus original image comprises:
plotting the image feature vector of the fundus original image as an original feature vector image;
performing scaling processing on the original feature vector image to generate the feature vector image corresponding to the fundus original image.
4. The method according to claim 2, wherein the image feature vector comprises a scale-invariant feature transform feature vector and a corner detection feature vector.
5. The method according to claim 1, wherein the generating process of the preprocessed image corresponding to the fundus original image comprises:
performing scaling processing, shearing processing, and/or flipping processing on the fundus original image to generate the preprocessed image corresponding to the fundus original image.
6. A fundus image classification method, wherein the method comprises:
acquiring a fundus original image to be classified;
inputting one or more of the fundus original image to be classified, a feature vector image corresponding to the fundus original image to be classified, and a preprocessed image corresponding to the fundus original image to be classified into a retina classification model to obtain at least one retina classification result, and determining a retina classification result of the fundus original image to be classified from the at least one retina classification result according to a voting mechanism, wherein the retina classification model is generated by the classification model generating method according to any one of claims 1 to 5.
7. A classification model generating apparatus, wherein the apparatus comprises:
a first acquiring unit, configured to acquire a fundus original image;
a second acquiring unit, configured to use one or more of the fundus original image, a feature vector image corresponding to the fundus original image, and a preprocessed image corresponding to the fundus original image as a fundus training image;
a generating unit, configured to train an initial deep learning model according to the fundus training image and a retina classification label corresponding to the fundus training image, to generate a retina classification model.
8. A fundus image classification apparatus, wherein the apparatus comprises:
an acquiring unit, configured to acquire a fundus original image to be classified;
an obtaining unit, configured to input one or more of the fundus original image to be classified, a feature vector image corresponding to the fundus original image to be classified, and a preprocessed image corresponding to the fundus original image to be classified into a retina classification model to obtain at least one retina classification result, and determine a retina classification result of the fundus original image to be classified from the at least one retina classification result according to a voting mechanism, wherein the retina classification model is generated by the classification model generating apparatus according to claim 7.
9. A computer-readable storage medium having instructions stored therein, wherein when the instructions are run on a terminal device, the terminal device is caused to execute the classification model generating method according to any one of claims 1 to 5 or the fundus image classification method according to claim 6.
10. A computer program product, wherein when the computer program product is run on a terminal device, the terminal device is caused to execute the classification model generating method according to any one of claims 1 to 5 or the fundus image classification method according to claim 6.
CN201810607909.1A 2018-06-13 2018-06-13 Classification model generation method, fundus image classification method and device Active CN108876776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810607909.1A CN108876776B (en) 2018-06-13 2018-06-13 Classification model generation method, fundus image classification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810607909.1A CN108876776B (en) 2018-06-13 2018-06-13 Classification model generation method, fundus image classification method and device

Publications (2)

Publication Number Publication Date
CN108876776A true CN108876776A (en) 2018-11-23
CN108876776B CN108876776B (en) 2021-08-24

Family

ID=64338281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810607909.1A Active CN108876776B (en) 2018-06-13 2018-06-13 Classification model generation method, fundus image classification method and device

Country Status (1)

Country Link
CN (1) CN108876776B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960260A (en) * 2018-07-12 2018-12-07 东软集团股份有限公司 A kind of method of generating classification model, medical image image classification method and device
CN109602391A (en) * 2019-01-04 2019-04-12 平安科技(深圳)有限公司 Automatic testing method, device and the computer readable storage medium of fundus hemorrhage point
CN110147715A (en) * 2019-04-01 2019-08-20 江西比格威医疗科技有限公司 A kind of retina OCT image Bruch film angle of release automatic testing method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850845A (en) * 2015-05-30 2015-08-19 大连理工大学 Traffic sign recognition method based on asymmetric convolution neural network
CN104881639A (en) * 2015-05-14 2015-09-02 江苏大学 Method of detection, division, and expression recognition of human face based on layered TDP model
US20160292856A1 (en) * 2015-04-06 2016-10-06 IDx, LLC Systems and methods for feature detection in retinal images
CN106408037A (en) * 2015-07-30 2017-02-15 阿里巴巴集团控股有限公司 Image recognition method and apparatus
CN106530295A (en) * 2016-11-07 2017-03-22 首都医科大学 Fundus image classification method and device of retinopathy
US20170112372A1 (en) * 2015-10-23 2017-04-27 International Business Machines Corporation Automatically detecting eye type in retinal fundus images
CN106934798A (en) * 2017-02-20 2017-07-07 苏州体素信息科技有限公司 Diabetic retinopathy classification stage division based on deep learning
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN108095683A (en) * 2016-11-11 2018-06-01 北京羽医甘蓝信息技术有限公司 The method and apparatus of processing eye fundus image based on deep learning

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292856A1 (en) * 2015-04-06 2016-10-06 IDx, LLC Systems and methods for feature detection in retinal images
CN104881639A (en) * 2015-05-14 2015-09-02 江苏大学 Method of detection, division, and expression recognition of human face based on layered TDP model
CN104850845A (en) * 2015-05-30 2015-08-19 大连理工大学 Traffic sign recognition method based on asymmetric convolution neural network
CN106408037A (en) * 2015-07-30 2017-02-15 阿里巴巴集团控股有限公司 Image recognition method and apparatus
US20170112372A1 (en) * 2015-10-23 2017-04-27 International Business Machines Corporation Automatically detecting eye type in retinal fundus images
CN106530295A (en) * 2016-11-07 2017-03-22 首都医科大学 Fundus image classification method and device of retinopathy
CN108095683A (en) * 2016-11-11 2018-06-01 北京羽医甘蓝信息技术有限公司 The method and apparatus of processing eye fundus image based on deep learning
CN106934798A (en) * 2017-02-20 2017-07-07 苏州体素信息科技有限公司 Diabetic retinopathy classification stage division based on deep learning
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GUANGYU HE ET AL.: "A novel feature map for human activity recognition", 2017 Second International Conference on Mechanical, Control and Computer Engineering *
SHU_QDHAO: "Summary of data-set augmentation methods for deep-learning image classification", CSDN blog, https://blog.csdn.net/weixin_37203756/article/details/80071299 *
Tsinghua University Institute of Data Science: "What is transfer learning? What are the history and prospects of this field?", Zhihu, https://www.zhihu.com/question/41979241/answer/247421889 *
CHEN HUIYAN ET AL.: "Theory and Design of Unmanned Vehicles", 31 March 2018, Beijing: Beijing Institute of Technology Press *
HUANG XIAOPING: "Research on Contemporary Machine Deep Learning Methods and Applications", 30 November 2017, Chengdu: University of Electronic Science and Technology of China Press *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960260A (en) * 2018-07-12 2018-12-07 东软集团股份有限公司 A kind of method of generating classification model, medical image image classification method and device
CN108960260B (en) * 2018-07-12 2020-12-29 东软集团股份有限公司 Classification model generation method, medical image classification method and medical image classification device
CN109602391A (en) * 2019-01-04 2019-04-12 平安科技(深圳)有限公司 Automatic testing method, device and the computer readable storage medium of fundus hemorrhage point
CN110147715A (en) * 2019-04-01 2019-08-20 江西比格威医疗科技有限公司 A kind of retina OCT image Bruch film angle of release automatic testing method

Also Published As

Publication number Publication date
CN108876776B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN107292256B (en) Auxiliary task-based deep convolution wavelet neural network expression recognition method
CN105224951B (en) A kind of vehicle type classification method and sorter
CN108647588A (en) Goods categories recognition methods, device, computer equipment and storage medium
CN112614119B (en) Medical image region of interest visualization method, device, storage medium and equipment
CN107945153A (en) A kind of road surface crack detection method based on deep learning
CN109978918A (en) A kind of trajectory track method, apparatus and storage medium
CN108062543A (en) A kind of face recognition method and device
CN109446889A (en) Object tracking method and device based on twin matching network
CN108960260A (en) A kind of method of generating classification model, medical image image classification method and device
CN109635811A (en) The image analysis method of spatial plant
CN108876776A (en) A kind of method of generating classification model, eye fundus image classification method and device
Chawathe Rice disease detection by image analysis
CN109344851A (en) Image classification display methods and device, analysis instrument and storage medium
CN103984954B (en) Image combining method based on multi-feature fusion
CN109993187A (en) A kind of modeling method, robot and the storage device of object category for identification
CN108734200A (en) Human body target visible detection method and device based on BING features
CN108595558A (en) A kind of image labeling method of data balancing strategy and multiple features fusion
CN111739017A (en) Cell identification method and system of microscopic image under sample unbalance condition
CN113705655A (en) Full-automatic classification method for three-dimensional point cloud and deep neural network model
CN109508640A (en) A kind of crowd's sentiment analysis method, apparatus and storage medium
CN111951283A (en) Medical image identification method and system based on deep learning
WO2020119624A1 (en) Class-sensitive edge detection method based on deep learning
CN112991281B (en) Visual detection method, system, electronic equipment and medium
CN114782979A (en) Training method and device for pedestrian re-recognition model, storage medium and terminal
Haarika et al. Insect classification framework based on a novel fusion of high-level and shallow features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant