CN110472600A - Fundus image recognition and training method, apparatus, device and storage medium therefor - Google Patents
Fundus image recognition and training method, apparatus, device and storage medium therefor
- Publication number
- CN110472600A CN110472600A CN201910769997.XA CN201910769997A CN110472600A CN 110472600 A CN110472600 A CN 110472600A CN 201910769997 A CN201910769997 A CN 201910769997A CN 110472600 A CN110472600 A CN 110472600A
- Authority
- CN
- China
- Prior art keywords
- feature
- images
- recognized
- color characteristic
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
This application discloses a fundus image recognition method and a training method, apparatus, device and storage medium therefor, relating to the field of artificial intelligence. In a specific implementation, the fundus image recognition method is applied to a terminal device connected to an image acquisition unit, and comprises: obtaining an image to be recognized acquired by the image acquisition unit; separately extracting the local texture feature, edge feature and color feature of the image to be recognized; concatenating the local texture feature, the edge feature and the color feature to obtain a concatenated feature; and inputting the concatenated feature into a preset model, so that the preset model identifies whether the image to be recognized is a fundus image.
Description
Technical field
This application relates to the field of image processing, in particular to the field of artificial intelligence, and more particularly to a fundus image recognition method and a training method, apparatus, device and storage medium therefor.
Background technique
Fundus images play an indispensable role in the diagnosis of eye diseases. A fundus image can be obtained through an image acquisition unit, for example a fundus camera. However, when a fundus image is captured by an image acquisition unit, what the unit actually captures may be a non-fundus image such as a natural-scene image or an anterior segment image. It is therefore necessary to be able to accurately identify whether the image captured by the image acquisition unit is a fundus image.
Summary of the invention
In a first aspect, an embodiment of the present application provides a fundus image recognition method applied to a terminal device, the terminal device being connected to an image acquisition unit, the method comprising: obtaining an image to be recognized acquired by the image acquisition unit; separately extracting the local texture feature, edge feature and color feature of the image to be recognized; concatenating the local texture feature, the edge feature and the color feature to obtain a concatenated feature; and inputting the concatenated feature into a preset model, so as to identify, by means of the preset model, whether the image to be recognized is a fundus image.
In the embodiments of the present application, the image to be recognized acquired by the image acquisition unit is obtained; the local texture feature, edge feature and color feature of the image to be recognized are extracted; the local texture feature, the edge feature and the color feature are concatenated to obtain a concatenated feature; and the concatenated feature is input into a preset model, which identifies whether the image to be recognized is a fundus image. Both the LBP feature and the HOG feature can describe the texture information of the image to be recognized; the LBP feature is sensitive to orientation, while the HOG feature maintains good invariance to geometric deformation of the image. In addition, the HOG feature is sensitive to noise and easily affected by external factors such as illumination changes and occlusion, whereas the LBP feature can eliminate the influence of the external scene on the image. Combining the LBP and HOG features therefore mitigates the effect of lighting changes in complex scenes on the feature description, expressing the texture of the image more accurately. Furthermore, since the LBP and HOG features describe grayscale images and ignore the importance of color for image discrimination, and most fundus images are color images, color moments can additionally be used to express the color distribution of the image, thereby improving the recognition accuracy for fundus images.
Optionally, extracting the local texture feature of the image to be recognized comprises: dividing the image to be recognized into a first preset number of image blocks, the first preset number being equal to the dimension of the local texture feature of the image to be recognized; calculating the local binary pattern value of each pixel in each image block; summing the local binary pattern values of all pixels in each image block to obtain a local binary pattern total; and connecting the first preset number of local binary pattern totals in series as the local texture feature of the image to be recognized.
Optionally, the dimension of the local texture feature of the image to be recognized takes a value in the range 9-11.
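As a rough illustration of the block statistic just described (not part of the patent itself), the sketch below computes a basic 8-neighbour LBP code per pixel, sums the codes within each block, and concatenates the totals into a 10-dimensional feature. All names, and the choice of horizontal strips as the blocking scheme, are illustrative assumptions.

```python
import numpy as np

def lbp_value(img, y, x):
    """Basic 8-neighbour LBP code of the pixel at (y, x).

    Each neighbour whose value is >= the centre contributes one bit,
    clockwise from the top-left, giving a code in [0, 255].
    """
    c = img[y, x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy, x + dx] >= c:
            code |= 1 << bit
    return code

def block_lbp_feature(gray, n_blocks=10):
    """Divide the image into n_blocks horizontal strips (an assumed
    blocking scheme), sum the LBP codes of all interior pixels per
    strip, and concatenate the totals.  The feature dimension equals
    n_blocks -- 10 here, within the 9-11 range stated above."""
    h, w = gray.shape
    edges = np.linspace(0, h, n_blocks + 1).astype(int)
    totals = []
    for b in range(n_blocks):
        total = 0
        # only interior pixels have a full 8-neighbourhood
        for y in range(max(edges[b], 1), min(edges[b + 1], h - 1)):
            for x in range(1, w - 1):
                total += lbp_value(gray, y, x)
        totals.append(total)
    return np.array(totals, dtype=np.int64)
```

On a constant image every pixel's code is 255 (all neighbours tie the centre), so the totals simply count interior pixels per strip, which is a convenient sanity check.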
Optionally, extracting the edge feature of the image to be recognized comprises: calculating the gradient of each pixel in the image to be recognized; dividing the image to be recognized into a plurality of image units; determining the gradient histogram of each image unit according to the gradient of each pixel in the image to be recognized; and determining the edge feature of the image to be recognized according to the gradient histogram of each image unit.
Optionally, the dimension of the edge feature takes a value in the range 8000-8200.
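A minimal sketch of this gradient-histogram step, under the assumptions of unsigned 0-180 degree orientation bins and square image units, with HOG block normalisation omitted for brevity (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def hog_like_feature(gray, cell=8, bins=9):
    """Per-pixel gradients, then a bins-bin unsigned-orientation
    histogram per cell x cell image unit, weighted by gradient
    magnitude, concatenated into one vector."""
    gray = gray.astype(np.float64)
    gy, gx = np.gradient(gray)          # gradients along rows, columns
    mag = np.hypot(gx, gy)              # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = gray.shape
    feats = []
    for cy in range(0, h - h % cell, cell):
        for cx in range(0, w - w % cell, cell):
            m = mag[cy:cy + cell, cx:cx + cell].ravel()
            a = ang[cy:cy + cell, cx:cx + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0),
                                   weights=m)
            feats.append(hist)
    return np.concatenate(feats)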
Optionally, extracting the color feature of the image to be recognized comprises: counting, for each of a preset number of channels of the image to be recognized, the average value and standard deviation of the color components of that channel; and connecting the average values and standard deviations of the color components of the preset number of channels in series to obtain the color feature of the image to be recognized.
Optionally, extracting the color feature of the image to be recognized comprises: counting the average value and standard deviation of the color components of the image to be recognized in each of the R, G and B channels; and connecting the average values and standard deviations of the color components of the R, G and B channels in series to obtain a 6-dimensional color feature.
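For the R, G, B case this reduces to a few lines; the sketch below (assuming an H x W x 3 array in R, G, B channel order, with an illustrative function name) computes the first two color moments per channel:

```python
import numpy as np

def color_moment_feature(rgb):
    """First two color moments per channel: the mean and standard
    deviation of the R, G and B components, concatenated into a
    6-dimensional feature [mean_R, std_R, mean_G, std_G, mean_B, std_B]."""
    rgb = rgb.astype(np.float64)
    feat = []
    for ch in range(3):  # R, G, B channels in turn
        comp = rgb[:, :, ch]
        feat.extend([comp.mean(), comp.std()])
    return np.array(feat)
```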
Optionally, the dimension of the concatenated feature takes a value in the range 8110-8120.
Optionally, the preset model is obtained by training based on gradient boosted trees.
In the embodiments of the present application, the preset model is obtained by training with gradient boosted trees. Because a tree-ensemble model of regression trees uses a number of "weak" trees to jointly approximate the ideal function (a function that could fit all the data), approaching it iteratively by a small step each time, a good fit can be reached after many iterations and overfitting is unlikely to occur. In addition, since data in the real world is largely noisy and tree-based models are comparatively robust to noise, missing values are also easy to handle in a tree model.
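The iterative small-step fitting described above can be illustrated with a toy gradient booster for squared loss, where each round fits a decision stump to the current residuals. This is a self-contained sketch of the general technique, not the patent's actual GBDT implementation; all names are illustrative.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump on 1-D input under squared loss.
    Returns (threshold, left mean, right mean)."""
    best_err, best_split = np.inf, None
    for s in np.unique(x):
        left, right = residual[x <= s], residual[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        err = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if err < best_err:
            best_err, best_split = err, (s, left.mean(), right.mean())
    return best_split

def gradient_boost(x, y, n_trees=50, lr=0.1):
    """Each round fits a weak stump to the current residuals and takes
    a small step (lr) toward them -- the iterative approximation of the
    target function described above."""
    pred = np.full_like(y, y.mean(), dtype=np.float64)
    stumps = []
    for _ in range(n_trees):
        s, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= s, lv, rv)
        stumps.append((s, lv, rv))
    return pred, stumps
```

On a noise-free step function the residual shrinks geometrically with the learning rate, showing how many small steps add up to a close fit.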
Optionally, extracting the local texture feature, edge feature and color feature of the image to be recognized comprises: inputting the image to be recognized into a preset feature extraction model, so as to extract the local texture feature, edge feature and color feature of the image to be recognized by means of the preset feature extraction model.
Optionally, after inputting the concatenated feature into the preset model to identify, by means of the preset model, whether the image to be recognized is a fundus image, the method further comprises: displaying the recognition result on the terminal device.
In a second aspect, an embodiment of the present application provides a training method for fundus image recognition, comprising: obtaining a training image with first annotation information, the first annotation information including whether the training image is a fundus image; extracting the local texture feature, edge feature and color feature of the training image; concatenating the local texture feature, the edge feature and the color feature to obtain a concatenated feature; inputting the concatenated feature into a pre-constructed gradient boosted tree model, so as to identify, by means of the pre-constructed model, whether the training image is a fundus image; and adjusting the network parameters of the pre-constructed gradient boosted tree model based on the difference between the recognition result and the first annotation information.
Optionally, extracting the local texture feature, edge feature and color feature of the training image comprises: inputting the training image into a preset feature extraction model, so as to extract the local texture feature, edge feature and color feature of the training image by means of the preset feature extraction model.
Optionally, before inputting the training image into the preset feature extraction model to extract the local texture feature, edge feature and color feature of the training image by means of the preset feature extraction model, the method further comprises: obtaining a training image with second annotation information, the second annotation information including at least a local texture feature, an edge feature and a color feature; inputting the training image into a pre-constructed feature extraction network, so as to extract the local texture feature, edge feature and color feature of the training image by means of the pre-constructed feature extraction network; and adjusting the network parameters of the pre-constructed feature extraction network based on the differences between the extracted local texture feature, edge feature and color feature and the annotated local texture feature, edge feature and color feature, respectively.
In a third aspect, an embodiment of the present application provides a fundus image recognition apparatus, comprising: a first obtaining module, configured to obtain an image to be recognized; a first extraction module, configured to separately extract the local texture feature, edge feature and color feature of the image to be recognized; a first concatenation module, configured to concatenate the local texture feature, the edge feature and the color feature to obtain a concatenated feature; and a first recognition module, configured to input the concatenated feature into a preset model, so as to identify, by means of the preset model, whether the image to be recognized is a fundus image.
Optionally, when extracting the local texture feature of the image to be recognized, the first extraction module is specifically configured to: divide the image to be recognized into a first preset number of image blocks, the first preset number being equal to the dimension of the local texture feature of the image to be recognized; calculate the local binary pattern value of each pixel in each image block; sum the local binary pattern values of all pixels in each image block to obtain a local binary pattern total; and connect the first preset number of local binary pattern totals in series as the local texture feature of the image to be recognized.
Optionally, the dimension of the local texture feature of the image to be recognized takes a value in the range 9-11.
Optionally, when extracting the edge feature of the image to be recognized, the first extraction module is specifically configured to: calculate the gradient of each pixel in the image to be recognized; divide the image to be recognized into a plurality of image units; determine the gradient histogram of each image unit according to the gradient of each pixel in the image to be recognized; and determine the edge feature of the image to be recognized according to the gradient histogram of each image unit.
Optionally, the dimension of the edge feature takes a value in the range 8000-8200.
Optionally, when extracting the color feature of the image to be recognized, the first extraction module is specifically configured to: count, for each of a preset number of channels of the image to be recognized, the average value and standard deviation of the color components of that channel; and connect the average values and standard deviations of the color components of the preset number of channels in series to obtain the color feature of the image to be recognized.
Optionally, when extracting the color feature of the image to be recognized, the first extraction module is specifically configured to: count the average value and standard deviation of the color components of the image to be recognized in each of the R, G and B channels; and connect the average values and standard deviations of the color components of the R, G and B channels in series to obtain a 6-dimensional color feature.
Optionally, the dimension of the concatenated feature takes a value in the range 8110-8120.
Optionally, the preset model is obtained by training based on gradient boosted trees.
Optionally, when extracting the local texture feature, edge feature and color feature of the image to be recognized, the first extraction module is specifically configured to: input the image to be recognized into a preset feature extraction model, so as to extract the local texture feature, edge feature and color feature of the image to be recognized by means of the preset feature extraction model.
Optionally, the apparatus further comprises: a display module, configured to display the recognition result.
In a fourth aspect, an embodiment of the present application provides a training apparatus for fundus image recognition, comprising: a second obtaining module, configured to obtain a training image with first annotation information, the first annotation information including whether the training image is a fundus image; a second extraction module, configured to extract the local texture feature, edge feature and color feature of the training image; a second concatenation module, configured to concatenate the local texture feature, the edge feature and the color feature to obtain a concatenated feature; a second recognition module, configured to input the concatenated feature into a pre-constructed gradient boosted tree model, so as to identify, by means of the pre-constructed model, whether the training image is a fundus image; and an adjustment module, configured to adjust the network parameters of the pre-constructed gradient boosted tree model based on the difference between the recognition result and the first annotation information.
Optionally, when extracting the local texture feature, edge feature and color feature of the training image, the second extraction module is specifically configured to: input the training image into a preset feature extraction model, so as to extract the local texture feature, edge feature and color feature of the training image by means of the preset feature extraction model.
Optionally, the second obtaining module is further configured to obtain a training image with second annotation information, the second annotation information including at least a local texture feature, an edge feature and a color feature; the second recognition module is further configured to input the training image into a pre-constructed feature extraction network, so as to extract the local texture feature, edge feature and color feature of the training image by means of the pre-constructed feature extraction network; and the adjustment module is further configured to adjust the network parameters of the pre-constructed feature extraction network based on the differences between the extracted local texture feature, edge feature and color feature and the annotated local texture feature, edge feature and color feature, respectively.
In a fifth aspect, an embodiment of the present application provides a fundus image recognition device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the method described in the first aspect.
In a sixth aspect, an embodiment of the present application provides a training device for fundus image recognition, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the method described in the second aspect.
In a seventh aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being configured to cause the computer to perform the methods described in the first aspect and the second aspect.
In an eighth aspect, an embodiment of the present application provides a fundus image recognition method, comprising: obtaining an image to be recognized; extracting the local texture feature, edge feature and color feature of the image to be recognized; concatenating the local texture feature, the edge feature and the color feature to obtain a concatenated feature; and identifying whether the image to be recognized is a fundus image based on the concatenated feature.
Optionally, identifying whether the image to be recognized is a fundus image based on the concatenated feature comprises: inputting the concatenated feature into a preset model, so as to identify, by means of the preset model, whether the image to be recognized is a fundus image, the preset model being obtained by training based on gradient boosted trees.
An embodiment of the above application has the following advantages or beneficial effects: the extracted image features express the image to be recognized more accurately, and the preset model obtained by gradient boosted tree training is well able to avoid overfitting. Because the technical means of extracting the local texture feature, edge feature and color feature of the image to be recognized is adopted, the technical problem in the prior art that insufficient image expression capability leads to low recognition accuracy is overcome, thereby achieving the technical effect of improved recognition accuracy; and because the preset model is obtained by training with gradient boosted trees, the problem in the prior art that a trained preset model is prone to overfitting due to a small amount of data is overcome.
Other effects of the above optional implementations are described below in conjunction with specific embodiments.
Brief description of the drawings
The accompanying drawings are provided for a better understanding of the solution and do not constitute a limitation of the application. In the drawings:
Fig. 1 is an application scenario diagram of a fundus image recognition method according to an embodiment of the present application;
Fig. 2 is a flowchart of a fundus image recognition method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a fundus image recognition method according to an embodiment of the present application;
Fig. 4 is a flowchart of a fundus image recognition method according to another embodiment of the present application;
Fig. 5 is an example diagram of a fundus image recognition method according to an embodiment of the present application;
Fig. 6 is a flowchart of a fundus image recognition method provided by yet another embodiment of the present application;
Fig. 7 is an example diagram of a fundus image recognition method according to an embodiment of the present application;
Fig. 8 is a flowchart of a fundus image recognition method provided by yet another embodiment of the present application;
Fig. 9 is a flowchart of a training method for fundus image recognition provided by another embodiment of the present application;
Fig. 10 is a structural diagram of a fundus image recognition apparatus according to an embodiment of the present application;
Fig. 11 is a structural diagram of a training apparatus for fundus image recognition according to an embodiment of the present application;
Fig. 12 is a block diagram of an electronic device for implementing the fundus image recognition method and the training method for fundus image recognition according to embodiments of the present application.
Specific embodiment
Exemplary embodiments of the present application are explained below in conjunction with the accompanying drawings, including various details of the embodiments of the present application to aid understanding, which should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
The fundus image recognition method provided by the embodiments of the present application can be applied to a device with image analysis and processing functions, for example a terminal device such as a computer or an iPad. When the solution of this embodiment is applied to such a terminal device, a fundus image can be acquired by an image acquisition unit arranged on the terminal device, and the fundus image recognition method is then executed by the processor of the terminal device. Of course, in the fundus image recognition method of this embodiment, the fundus image can also be acquired by an external image acquisition unit and transmitted to the terminal device by wire or wirelessly, with the fundus image recognition method executed by the processor of the terminal device. The fundus image recognition method provided by the embodiments of the present application is described in detail below for the application scenario in which a fundus image is acquired by an external image acquisition unit, transmitted to the terminal device by wire or wirelessly, and the fundus image recognition method is executed by the processor of the terminal device:
Fig. 1 is an application scenario diagram of a fundus image recognition method according to an embodiment of the present application. As shown in Fig. 1, the application scenario includes an image acquisition unit 10 and a terminal device 11, where the image acquisition unit 10 and the terminal device 11 can communicate by wire or wirelessly. Optionally, the image acquisition unit 10 can be an image sensor, for example a fundus camera; the image acquisition unit 10 can acquire a fundus image and send the acquired fundus image to the terminal device 11. The terminal device 11 is a device with a display screen and an internal processor, such as a computer or an iPad. The display screen can display the acquired fundus image or display the fundus image processed by the method of the embodiment of the present application, and the processor inside the terminal device can process the fundus image acquired by the image acquisition unit 10.
In the prior art, since there are many kinds of non-fundus images, for example blank shots, anterior segment images, grayscale images and fluorescein angiography images, the data distribution is not uniform. In addition, since the amount of non-fundus data such as blank shots and anterior segment images is small, a deep learning model is prone to overfitting.
The fundus image recognition method provided by the embodiments of the present application is intended to solve the above technical problems of the prior art. The specific technical solution adopted is: obtaining an image to be recognized; extracting the texture features and color feature of the image to be recognized; concatenating the texture features and the color feature to obtain a concatenated feature; and identifying whether the image to be recognized is a fundus image based on the concatenated feature. The texture features can include a local texture feature and an edge feature.
How the technical solution of the present application solves the above technical problems is described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below in conjunction with the accompanying drawings.
Fig. 2 is a flowchart of a fundus image recognition method according to an embodiment of the present application. Fig. 3 is a schematic diagram of the fundus image recognition method of the embodiment of the present application. As shown in Fig. 2 and Fig. 3, the fundus image recognition method of the embodiment of the present application comprises the following specific steps:
Step 201: obtain the image to be recognized acquired by the image acquisition unit.
As shown in Fig. 1, the image to be recognized of a user 12 is acquired by the image acquisition unit 10. The image to be recognized acquired by the image acquisition unit 10 may be a fundus image, or it may be a non-fundus image such as a blank shot, an anterior segment image, a grayscale image or a fluorescein angiography image; the embodiment of the present application is intended to identify whether the image to be recognized acquired by the image acquisition unit 10 is a fundus image or a non-fundus image.
Optionally, the image to be recognized of the user 12 acquired by the image acquisition unit 10 can be a color fundus image.
Step 202: separately extract the local texture feature, edge feature and color feature of the image to be recognized.
Optionally, the local texture feature can be a Local Binary Patterns (LBP) feature; the edge feature can be a Histogram of Oriented Gradients (HOG) feature; and the color feature can be a Color Moments feature.
As shown in Fig. 3, feature extraction is performed on the image to be recognized. Specifically, the LBP feature, HOG feature and color moment feature of the image to be recognized are extracted respectively; each of the extracted features has its own dimension, for example, the LBP feature is a 10-dimensional feature vector, the HOG feature is an 8100-dimensional feature vector, and the color moment feature is a 6-dimensional feature vector.
Step 203: concatenate the local texture feature, the edge feature and the color feature to obtain a spliced feature.
In a specific example, assume the local binary patterns feature is represented as [L1 ... LK], the histogram of oriented gradients feature as [H1 ... HM], and the color moment feature as [C1 ... CT]; the spliced feature can then be expressed as [L1 ... LK, H1 ... HM, C1 ... CT]. The dimension of the spliced feature equals the sum of the respective dimensions of the local binary patterns feature, the edge feature and the color feature; for example, if the dimensions of the three features are K, M and T respectively, the dimension of the spliced feature is K+M+T. As shown in Fig. 3, when splicing, the LBP feature, the HOG feature and the color moment feature are concatenated in order along the dimension axis. For example, if the LBP feature, HOG feature and color moment feature are [L1 ... L10], [H1 ... H8100] and [C1 ... C6] respectively, and the spliced feature is denoted by x, then the spliced feature is [x1 ... x10, x11 ... x8110, x8111 ... x8116].
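The splicing in step 203 amounts to a plain vector concatenation. A minimal sketch with NumPy, using the illustrative 10/8100/6 dimensions from the example above (the random vectors merely stand in for real extracted features):

```python
import numpy as np

# Illustrative feature vectors with the dimensions used in the example above
lbp = np.random.rand(10)      # local texture (LBP) feature, K = 10
hog = np.random.rand(8100)    # edge (HOG) feature, M = 8100
cm = np.random.rand(6)        # color moment feature, T = 6

# The spliced feature is the series connection of the three vectors
spliced = np.concatenate([lbp, hog, cm])
assert spliced.shape == (10 + 8100 + 6,)  # K + M + T = 8116
```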
Step 204: input the spliced feature into a preset model, so as to identify whether the image to be recognized is a fundus image through the preset model.
Optionally, the preset model may be a GBDT model obtained by training based on Gradient Boosting Decision Tree (GBDT). In this embodiment, after the preset model is obtained through gradient boosting tree training, the spliced feature can be input into the preset model, and the preset model automatically outputs the recognition result indicating whether the image to be recognized is a fundus image. As shown in Fig. 3, the 8116-dimensional spliced feature is input into the GBDT model, which automatically outputs the recognition result indicating whether the image to be recognized is a fundus image or a non-fundus image; the result is displayed on the display screen of the terminal device 11 shown in Fig. 1. Optionally, the GBDT model may further identify the type of a non-fundus image, such as a blank shot, an anterior segment image, a grayscale image, or a fluorescence angiography image.
In the embodiment of the present application, the image to be recognized captured by the image acquisition unit is obtained; the local texture feature, edge feature and color feature of the image to be recognized are extracted; the local texture feature, the edge feature and the color feature are concatenated to obtain a spliced feature; and the spliced feature is input into a preset model so that the preset model identifies whether the image to be recognized is a fundus image. Both the LBP feature and the HOG feature describe the texture information of the image to be recognized: the LBP feature is sensitive to orientation, while the HOG feature maintains good invariance to geometric deformations of the image. On the other hand, the HOG feature is sensitive to noise and easily affected by external factors such as illumination changes and occlusion, whereas the LBP feature can eliminate the influence of the external scene on the image. Combining the LBP and HOG features therefore mitigates the impact of lighting changes in complex scenes on the feature description and expresses the texture of the image more accurately. Furthermore, since the LBP and HOG features describe grayscale images and ignore the importance of color for image discrimination, and since most fundus images are color images, color moments can additionally be combined to express the color distribution of the image, thereby improving the recognition accuracy of fundus images.
Fig. 4 is a flowchart of a fundus image recognition method of another embodiment of the present application. On the basis of the above embodiments, when the fundus image recognition method provided in this embodiment extracts the LBP feature of the image to be recognized, as shown in Fig. 4, it specifically includes the following steps:
Step S401: divide the image to be recognized into a first preset quantity of image blocks, the first preset quantity being equal to the dimension of the local texture feature of the image to be recognized.
Optionally, before extracting the LBP feature, this embodiment first converts the color fundus image to a grayscale image.
Optionally, the dimension of the local texture feature of the image to be recognized ranges from 8 to 11; for example, it may take the value 9, 10 or 11. Taking a 10-dimensional LBP feature vector as an example, the first preset quantity can be set to 10. If the image to be recognized is 256*256, it can be divided into blocks of size 25.6*256, or into blocks of size 256*25.6.
Step S402: calculate the local binary patterns value of each pixel in each image block.
Specifically, the calculation of the local binary patterns value of a pixel is illustrated in Fig. 5. As shown in Fig. 5, assume the coordinates of the current pixel are (X, Y). Taking the pixel (X, Y) as the center, an n*n region around it is chosen, such as a 3*3 region, and the pixel values of the other 8 pixels in the region are compared with the pixel value of the pixel (X, Y). If the value of one of the 8 neighboring pixels is greater than or equal to the value of the pixel (X, Y), it is marked as 1; if it is less than the value of the pixel (X, Y), it is marked as 0. This yields an 8-bit binary code for the neighborhood of the pixel (X, Y), such as (01111100)2; converting the 8-bit binary code to decimal gives 124, which is exactly the local binary patterns value of the pixel (X, Y). The local binary patterns value of every other pixel in the grayscale image can be calculated according to the same method steps.
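The per-pixel computation described above can be sketched as follows; the neighbor ordering (clockwise from the top-left, most significant bit first) is an assumption for illustration, as the patent does not fix which neighbor supplies which bit:

```python
import numpy as np

def lbp_value(gray, x, y):
    """3*3 local binary patterns value of the pixel at (x, y).

    Neighbors >= the center are marked 1, others 0; the 8 marks are
    read as one binary number (bit ordering here is illustrative)."""
    center = gray[y, x]
    # clockwise neighbor offsets starting from the top-left pixel
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for dx, dy in offsets:
        bit = 1 if gray[y + dy, x + dx] >= center else 0
        code = (code << 1) | bit
    return code

# tiny example: a 3*3 patch whose neighbors reproduce (01111100)2 = 124
patch = np.array([[4, 6, 6],
                  [4, 5, 6],
                  [4, 6, 6]], dtype=np.uint8)
assert lbp_value(patch, 1, 1) == 0b01111100  # 124, as in the text above
```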
Step S403: count the local binary patterns total value of all pixels in each image block.
In an example of the present application, assume the image blocks of the divided image to be recognized are numbered I0, I1 ... I9. The local binary patterns total value of all pixels in each of the image blocks I0, I1 ... I9 can then be counted respectively. Assume the counted local binary patterns total values of the image blocks I0, I1 ... I9 are [5809], [3910], [4126], [1212], [4398], [3498], [1520], [3900], [4623], [32540] respectively. If the counted local binary patterns values are decimal, the local binary patterns total value of each image block also needs to be normalized. The normalization may specifically divide the local binary patterns total value of each image block by the size of the image to be recognized, such as (256x256); the normalized local binary patterns total values of the image blocks I0, I1 ... I9 are then [0.08863831], [0.05966187], [0.06295776], [0.01849365], [0.06710815], [0.05337524], [0.02319336], [0.05950928], [0.07054138], [0.496521] respectively.
Step S404: concatenate the first preset quantity of local binary patterns total values as the local texture feature of the image to be recognized.
For example, after the normalized local binary patterns total values of the image blocks I0, I1 ... I9 are obtained as [0.08863831], [0.05966187], [0.06295776], [0.01849365], [0.06710815], [0.05337524], [0.02319336], [0.05950928], [0.07054138], [0.496521], the LBP feature of the image to be recognized is [0.08863831, 0.05966187, 0.06295776, 0.01849365, 0.06710815, 0.05337524, 0.02319336, 0.05950928, 0.07054138, 0.496521].
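Steps S401 to S404 can be sketched end to end as follows, assuming the strip-wise division and the sum-then-divide normalization described above (a sketch under those assumptions, not the patent's reference implementation). The `lbp_map` array stands in for the per-pixel LBP values computed in step S402:

```python
import numpy as np

def lbp_feature(lbp_map, n_blocks=10):
    """Divide the per-pixel LBP map into n_blocks horizontal strips,
    sum the LBP values per strip (step S403), and normalize each
    total by the image size."""
    h, w = lbp_map.shape
    strips = np.array_split(lbp_map, n_blocks, axis=0)  # e.g. 25.6*256 strips
    totals = np.array([s.sum() for s in strips], dtype=np.float64)
    return totals / (h * w)  # normalization from step S403

# illustrative per-pixel LBP values for a 256*256 image
lbp_map = np.random.randint(0, 256, size=(256, 256))
feat = lbp_feature(lbp_map)
assert feat.shape == (10,)  # first preset quantity = feature dimension = 10
```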
Fig. 6 is a flowchart of a fundus image recognition method provided by another embodiment of the present application. On the basis of the above embodiments, when the fundus image recognition method provided in this embodiment extracts the HOG feature of the image to be recognized, as shown in Fig. 6, it specifically includes the following steps:
Step S601: calculate the gradient of each pixel in the image to be recognized.
Optionally, before extracting the HOG feature, this embodiment first converts the color fundus image to a grayscale image. Optionally, a standardization (normalization) of the color space may also be applied to the image to be recognized using Gamma correction, in order to adjust the contrast of the image, reduce the influence of local shadows and illumination changes on the image, and at the same time suppress the interference of noise.
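The Gamma correction mentioned above can be sketched as a simple power-law remapping of the grayscale intensities; the exponent 0.5 is an illustrative choice, not a value fixed by the patent:

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    """Power-law (Gamma) normalization of a grayscale image.

    Compresses the dynamic range, which reduces the influence of
    local shadows and illumination changes before HOG extraction."""
    norm = gray.astype(np.float64) / 255.0   # scale intensities to [0, 1]
    return np.power(norm, gamma)

gray = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
out = gamma_correct(gray)
assert out.shape == gray.shape
assert out.min() >= 0.0 and out.max() <= 1.0
```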
Optionally, calculating the gradient of each pixel in the image to be recognized may mean calculating the magnitude and orientation of the gradient at each pixel, in order to capture the contour information of the image while further weakening the interference of illumination.
Optionally, the dimension of the HOG feature of the image to be recognized ranges from 8000 to 8200, for example 8005, 8100, 8050, 8015 or 8020.
Step S602: divide the image to be recognized into multiple image units.
For example, every 8*8 pixels of the image to be recognized may be taken as one image unit (cell). Of course, the 8*8 pixels here are only an example; those skilled in the art can adjust this as actually needed.
Step S603: determine the histogram of gradients of each image unit according to the gradient of each pixel in the image to be recognized.
Optionally, counting the histogram of gradients of each image unit according to the gradient of each pixel may mean counting, within each image unit, the number of pixels falling into each gradient bin.
Step S604: determine the edge feature of the image to be recognized according to the histogram of gradients of each image unit.
Optionally, determining the edge feature of the image to be recognized according to the histogram of gradients of each image unit comprises: determining the histogram of gradients of each image sub-block according to the histograms of gradients of the image units, each image sub-block comprising multiple image units; and determining the histogram of gradients of the image to be recognized according to the histograms of gradients of the image sub-blocks, as the edge feature of the image to be recognized. For example, every 2*2 image units can form one image sub-block (block); the histograms of gradients of the 2*2 image units in each image sub-block are then concatenated to obtain the histogram of gradients of the image sub-block, and the histograms of gradients of all image sub-blocks are concatenated to obtain the edge feature of the image.
As shown in Fig. 7, 2*2 image units are chosen from the image to be recognized and denoted c0, c1, c2, c3 respectively; the image units c0, c1, c2, c3 form exactly one image sub-block (block). If the histograms of gradients of the image units c0, c1, c2, c3 are [0.1114858], [0.13090634], [0.21111427], [0.22141684] respectively, then the histogram of gradients of the image sub-block is [0.1114858, 0.13090634, 0.21111427, 0.22141684]. The histograms of gradients in this example are the histograms after normalization; for the specific normalization method, refer to the normalization steps introduced in the previous embodiments, which are not repeated here.
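Steps S601 to S604 can be sketched as a minimal HOG pipeline in NumPy. The 9 orientation bins, the 8*8 cell and the 2*2 block below follow common HOG practice and the examples above; they are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def hog_feature(gray, cell=8, block=2, bins=9):
    """Minimal HOG sketch: per-pixel gradients, per-cell orientation
    histograms, then 2*2-cell blocks concatenated into one vector."""
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)                         # step S601: gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation

    h, w = g.shape
    cy, cx = h // cell, w // cell                   # step S602: cells
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)           # step S603: histograms

    feats = []                                      # step S604: blocks
    for i in range(cy - block + 1):
        for j in range(cx - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))  # normalized block
    return np.concatenate(feats)

gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
feat = hog_feature(gray)
# 64/8 = 8 cells per side -> 7*7 overlapping 2*2 blocks of 2*2*9 values
assert feat.shape == (7 * 7 * 2 * 2 * 9,)
```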
Fig. 8 is a flowchart of a fundus image recognition method provided by another embodiment of the present application. On the basis of the above embodiments, when the fundus image recognition method provided in this embodiment extracts the color feature of the image to be recognized, as shown in Fig. 8, it specifically includes the following steps:
Step S801: count the average value and standard deviation of the color components of each of a preset quantity of channels of the image to be recognized, respectively.
Optionally, the color components of the preset quantity of channels of the image to be recognized may be the color components of the R, G and B channels. Step S801 then counts the average value and standard deviation of the color components of the R, G and B channels of the image to be recognized.
Optionally, the average value of the color components of the R, G and B channels of the image to be recognized can be calculated using the following formula:

μi = (1/N) Σj=1..N Pi,j; (1)

In formula (1), Pi,j denotes the i-th color component of the j-th pixel of the color fundus image, and N denotes the number of pixels in the image.
Optionally, the standard deviation of the color components of the R, G and B channels of the image to be recognized can be calculated using the following formula:

σi = [ (1/N) Σj=1..N (Pi,j − μi)² ]^(1/2); (2)

In formula (2), Pi,j denotes the i-th color component of the j-th pixel of the color fundus image, N denotes the number of pixels in the color fundus image, and μi denotes the average value of the i-th color component of the color fundus image.
Step S802: concatenate the average values and standard deviations of the color components of the preset quantity of channels to obtain the color feature of the image to be recognized.
Optionally, if the color components of the preset quantity of channels of the image to be recognized are the color components of the R, G and B channels, what is finally obtained is a 6-dimensional vector C representing the color feature of the color fundus image:

C = [mean(R), mean(G), mean(B), std(R), std(G), std(B)]; (3)

In formula (3), mean(R), mean(G) and mean(B) denote the average values of the color components of the R, G and B channels of the color fundus image respectively, and std(R), std(G) and std(B) denote the standard deviations of the color components of the R, G and B channels of the color fundus image respectively.
The above embodiments of the present application use the average values and standard deviations of the color components of the R, G and B channels of the color fundus image as the color moments. Optionally, the embodiment of the present application may also use the average value, standard deviation and skewness of the color components of the R, G and B channels as the color moments. Optionally, the skewness of the color components of the R, G and B channels of the image to be recognized can be calculated using the following formula:

si = [ (1/N) Σj=1..N (Pi,j − μi)³ ]^(1/3); (4)

In formula (4), Pi,j denotes the i-th color component of the j-th pixel of the color fundus image, N denotes the number of pixels in the color fundus image, μi denotes the average value of the i-th color component of the color fundus image, and si denotes the skewness of the i-th color component of the color fundus image.
Optionally, if the color components of the preset quantity of channels of the image to be recognized are the color components of the R, G and B channels, what is finally obtained is a 9-dimensional vector C representing the color feature of the color fundus image:

C = [mean(R), mean(G), mean(B), std(R), std(G), std(B), s(R), s(G), s(B)]; (5)

In formula (5), mean(R), mean(G) and mean(B) denote the average values of the color components of the R, G and B channels of the color fundus image respectively; std(R), std(G) and std(B) denote the standard deviations of the color components of the R, G and B channels respectively; and s(R), s(G) and s(B) denote the skewness of the color components of the R, G and B channels respectively.
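Formulas (1) to (5) can be sketched as follows. The cube root in formula (4) is taken with the sign preserved, since the third central moment can be negative; this is an implementation detail the patent does not spell out:

```python
import numpy as np

def color_moments(rgb):
    """9-dimensional color moment feature of formulas (1)-(5):
    per-channel mean, standard deviation and skewness for R, G, B."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)   # N pixels, 3 channels
    mean = pixels.mean(axis=0)                        # formula (1)
    std = pixels.std(axis=0)                          # formula (2)
    third = ((pixels - mean) ** 3).mean(axis=0)       # third central moment
    skew = np.sign(third) * np.abs(third) ** (1.0 / 3.0)  # formula (4)
    return np.concatenate([mean, std, skew])          # formula (5)

rgb = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
feat = color_moments(rgb)
assert feat.shape == (9,)  # drop the last three entries for formula (3)
```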
Optionally, when extracting the local texture feature, edge feature and color feature of the image to be recognized, the image to be recognized may also be input into a preset feature extraction model, and the local texture feature, edge feature and color feature of the image to be recognized are extracted by the preset feature extraction model. The preset feature extraction model may be a feature extraction network, such as a convolutional neural network, for example an existing VGG network. The preset feature extraction model can be obtained by training the feature extraction network with training images carrying annotation information. For example, training images annotated with local texture features are input into the feature extraction network, and the network parameters of the feature extraction network are adjusted according to the difference between the local texture feature extracted by the network and the annotated local texture feature. For the edge feature and the color feature, a feature extraction network can likewise be trained with the same training method, which is not repeated here. Optionally, for the extraction of the local texture feature, edge feature and color feature of the image to be recognized, either the same feature extraction network can be trained to obtain the preset feature extraction model, or a separate feature extraction network can be trained for each feature. When separate networks are trained, the preset feature extraction model comprises three preset feature extraction submodels: a first preset feature extraction submodel, a second preset feature extraction submodel and a third preset feature extraction submodel, which are used to extract the local texture feature, the edge feature and the color feature of the image to be recognized, respectively.
Fig. 9 is a flowchart of a training method for fundus image recognition provided by another embodiment of the present application. Optionally, before the spliced feature is input into the preset model to identify whether the image to be recognized is a fundus image through the preset model, the embodiment of the present application needs to obtain the preset model through model training. As shown in Fig. 9, the embodiment of the present application provides a training method for fundus image recognition, which specifically comprises the following steps:
Step S901: obtain training images with first annotation information, the first annotation information including whether the training image is a fundus image.
In this embodiment, the training images may be fundus images collected by the image acquisition unit, or obtained from public data sets. Optionally, the collected images can be annotated manually as to whether they are fundus images.
Step S902: extract the local texture feature, edge feature and color feature of the training image.
When extracting the local texture feature, edge feature and color feature of the training image, this embodiment can use the specific implementations introduced in the above embodiments, which are not repeated here.
Step S903: concatenate the local texture feature, the edge feature and the color feature to obtain a spliced feature.
When concatenating the local texture feature, the edge feature and the color feature to obtain the spliced feature, this embodiment can use the specific implementations introduced in the above embodiments, which are not repeated here.
Step S904: input the spliced feature into a pre-constructed gradient boosting tree model, so as to identify whether the training image is a fundus image through the pre-constructed preset model.
Step S905: adjust the network parameters of the pre-constructed gradient boosting tree model based on the difference between the recognition result and the first annotation information.
During training, the weights of the training set are updated according to the error rate of the weak learner of the previous iteration. Assume the strong learner obtained by the previous iteration is f_{t-1}(x) and the loss function is L(y, f_{t-1}(x)); the goal of the current iteration is then to find a weak learner h_t(x) of a CART regression tree model such that the loss of this round, L(y, f_t(x)) = L(y, f_{t-1}(x) + h_t(x)), is minimized. Here x denotes the training image, y denotes the annotation information of the input image, and t is the iteration number. That is, the decision tree found by the current iteration of training should make the loss on the training samples as small as possible. In this way, each iteration obtains a loss value from the annotation information and the recognition result of the training image; after multiple training iterations, when the loss of the loss function no longer decreases, training ends.
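The additive update f_t(x) = f_{t-1}(x) + h_t(x) can be illustrated with a toy gradient boosting loop using one-split regression stumps and squared loss. This is a didactic sketch only; the patent's GBDT classifier would use full CART trees and a classification loss:

```python
import numpy as np

def fit_stump(x, residual):
    """One-split regression stump h_t fitted to the current residuals
    (the negative gradient of the squared loss)."""
    best = (np.inf, None, None, None)
    for thr in np.unique(x)[:-1]:
        left, right = residual[x <= thr], residual[x > thr]
        pred = np.where(x <= thr, left.mean(), right.mean())
        err = np.sum((residual - pred) ** 2)
        if err < best[0]:
            best = (err, thr, left.mean(), right.mean())
    return best[1:]

def boost(x, y, rounds=30, lr=0.3):
    """f_t = f_{t-1} + lr * h_t, starting from the mean prediction."""
    f = np.full_like(y, y.mean(), dtype=np.float64)
    losses = []
    for _ in range(rounds):
        thr, lv, rv = fit_stump(x, y - f)            # fit to residuals
        f = f + lr * np.where(x <= thr, lv, rv)      # additive update
        losses.append(np.sum((y - f) ** 2))          # loss of this round
    return f, losses

x = np.linspace(0, 1, 40)
y = (x > 0.5).astype(np.float64)      # toy binary labels
f, losses = boost(x, y)
assert losses[-1] < losses[0]          # the training loss decreases
```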
Since a regression tree ensemble model approaches the "god function" (a function that can fit all the data) by joining several "weak" trees together, approximating by one small step per iteration, a good fitting effect can be reached through multiple iterations, and the model is also not prone to overfitting. In addition, since data in the real world largely contain noise, and a regression-tree-based model has strong noise resistance, missing values are also easy to handle in a tree model. Moreover, a GBDT classifier is generally insensitive to the distribution of the data; therefore, even if the training image data are not uniform, the training effect of the model will not be affected.
Optionally, extracting the local texture feature, edge feature and color feature of the training image comprises: inputting the training image into a preset feature extraction model, so that the feature extraction model extracts the local texture feature, edge feature and color feature of the training image.
Optionally, before the training image is input into the preset feature extraction model so that the feature extraction model extracts its local texture feature, edge feature and color feature, the method of the embodiment of the present application further includes the following steps: obtaining training images with second annotation information, the second annotation information including at least a local texture feature, an edge feature and a color feature; inputting the training image into a pre-constructed feature extraction network, so that the pre-constructed feature extraction network extracts the local texture feature, edge feature and color feature of the training image; and adjusting the network parameters of the pre-constructed feature extraction network based on the differences between the extracted local texture feature, edge feature and color feature and the annotated local texture feature, edge feature and color feature, respectively. For the extraction of the local texture feature, edge feature and color feature of the training image, refer to the specific introduction in the previous embodiments for the extraction of these features from the image to be recognized. For the training process of the feature extraction model, refer to the introduction of the feature extraction network training process in the previous embodiments, which is not repeated here.
Fig. 10 is a structural diagram of a fundus image recognition device of the embodiment of the present application. The fundus image recognition device provided by the embodiment of the present application may be the terminal device 11 shown in Fig. 1. As shown in Fig. 10, the fundus image recognition device 100 provided by the embodiment of the present application includes: a first acquisition module 101, a first extraction module 102, a first splicing module 103 and a first identification module 104. The first acquisition module 101 is used to obtain the image to be recognized, which can be obtained from the image acquisition unit 10 shown in Fig. 1; the first extraction module 102 is used to extract the local texture feature, edge feature and color feature of the image to be recognized, respectively; the first splicing module 103 is used to concatenate the local texture feature, the edge feature and the color feature to obtain a spliced feature; and the first identification module 104 is used to input the spliced feature into a preset model, so as to identify whether the image to be recognized is a fundus image through the preset model.
Optionally, when extracting the local texture feature of the image to be recognized, the first extraction module 102 is specifically used to: divide the image to be recognized into a first preset quantity of image blocks, the first preset quantity being equal to the dimension of the local texture feature of the image to be recognized; calculate the local binary patterns value of each pixel in each image block; count the local binary patterns total value of all pixels in each image block; and concatenate the first preset quantity of local binary patterns total values as a feature vector of the first preset dimension.
Optionally, the dimension of the local texture feature of the image to be recognized ranges from 9 to 11.
Optionally, when extracting the edge feature of the image to be recognized, the first extraction module 102 is specifically used to: calculate the gradient of each pixel in the image to be recognized; divide the image to be recognized into multiple image units; determine the histogram of gradients of each image unit according to the gradient of each pixel in the image to be recognized; and determine the edge feature of the image to be recognized according to the histogram of gradients of each image unit.
Optionally, the dimension of the edge feature ranges from 8000 to 8200.
Optionally, when extracting the color feature of the image to be recognized, the first extraction module 102 is specifically used to: count the average value and standard deviation of the color components of each of a preset quantity of channels of the image to be recognized, respectively; and concatenate the average values and standard deviations of the color components of the preset quantity of channels to obtain the color feature of the image to be recognized.
Optionally, when extracting the color feature of the image to be recognized, the first extraction module 102 is specifically used to: count the average value and standard deviation of the color components of the R, G and B channels of the image to be recognized, respectively; and concatenate the average values and standard deviations of the color components of the R, G and B channels to obtain a 6-dimensional color feature.
Optionally, the dimension of the spliced feature ranges from 8110 to 8120.
Optionally, the preset model is obtained by training based on gradient boosting trees.
Optionally, when extracting the local texture feature, edge feature and color feature of the image to be recognized, the first extraction module 102 is specifically used to: input the image to be recognized into a preset feature extraction model, so that the preset feature extraction model extracts the local texture feature, edge feature and color feature of the image to be recognized.
Optionally, the device further includes a display module 105 for displaying the recognition result, for example on the terminal device 11 shown in Fig. 1.
The fundus image recognition device of the embodiment shown in Fig. 10 can be used to execute the technical solutions of the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
Fig. 11 is a structural diagram of a training device for fundus image recognition of the embodiment of the present application. As shown in Fig. 11, the training device 110 for fundus image recognition provided by the embodiment of the present application includes: a second acquisition module 111, a second extraction module 112, a second splicing module 113, a second identification module 114 and an adjustment module 115. The second acquisition module 111 is used to obtain training images with first annotation information, the first annotation information including whether the training image is a fundus image; the second extraction module 112 is used to extract the local texture feature, edge feature and color feature of the training image; the second splicing module 113 is used to concatenate the local texture feature, the edge feature and the color feature to obtain a spliced feature; the second identification module 114 is used to input the spliced feature into a pre-constructed gradient boosting tree model, so as to identify whether the training image is a fundus image through the pre-constructed preset model; and the adjustment module 115 is used to adjust the network parameters of the pre-constructed gradient boosting tree model based on the difference between the recognition result and the first annotation information.
Optionally, when extracting the local texture feature, edge feature and color feature of the training image, the second extraction module 112 is specifically used to: input the training image into a preset feature extraction model, so that the feature extraction model extracts the local texture feature, edge feature and color feature of the training image.
Optionally, the second acquisition module 111 is also used to obtain training images with second annotation information, the second annotation information including at least a local texture feature, an edge feature and a color feature; the second identification module 114 is also used to input the training image into a pre-constructed feature extraction network, so that the pre-constructed feature extraction network extracts the local texture feature, edge feature and color feature of the training image; and the adjustment module 115 is also used to adjust the network parameters of the pre-constructed feature extraction network based on the differences between the extracted local texture feature, edge feature and color feature and the annotated local texture feature, edge feature and color feature, respectively.
The training device for fundus image recognition of the embodiment shown in Fig. 11 can be used to execute the technical solutions of the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
According to an embodiment of the present application, the present application further provides an electronic device and a readable storage medium. The electronic device may be a fundus image recognition device or a training device for fundus image recognition.
Fig. 12 is a block diagram of an electronic device for the fundus image recognition method or the training method for fundus image recognition according to an embodiment of the present application. The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples and are not intended to limit the implementations of the application described and/or claimed herein.
As shown in Fig. 12, the electronic device includes one or more processors 121, a memory 122, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or mounted in other ways as required. The processor can process instructions executed within the electronic device, including instructions stored in or on the memory to display GUI graphical information on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each device providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). In Fig. 12, one processor 121 is taken as an example. Optionally, the electronic device may further include an image acquisition unit, such as a fundus camera.
The memory 122 is the non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor, so that the at least one processor executes the fundus image recognition method and the training method for fundus image recognition provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the fundus image recognition method and the training method for fundus image recognition provided herein.
As a non-transitory computer-readable storage medium, the memory 122 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the fundus image recognition method and the training method for fundus image recognition in the embodiments of the present application (for example, the first acquisition module 101, first extraction module 102, first splicing module 103, first identification module 104 and display module 105 shown in Fig. 10, and the second acquisition module 111, second extraction module 112, second splicing module 113, second identification module 114 and adjustment module 115 shown in Fig. 11). By running the non-transitory software programs, instructions and modules stored in the memory 122, the processor 121 executes the various functional applications and data processing of the server, i.e., realizes the fundus image recognition method and the training method for fundus image recognition in the method embodiments above.
The memory 122 may include a program storage area and a data storage area, wherein the program storage area can store an operating system and an application program required by at least one function, and the data storage area can store data created through use of the fundus image recognition device and the training device for fundus image recognition, etc. In addition, the memory 122 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 122 optionally includes memories remotely located relative to the processor 121, and these remote memories may be connected over a network to the fundus image recognition device and the training device for fundus image recognition. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof.
The fundus image recognition device and the training device for fundus image recognition may further include an input apparatus 123 and an output apparatus 124. The processor 121, the memory 122, the input apparatus 123 and the output apparatus 124 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 12.
The input apparatus 123 can receive input numeric or character information and generate key signal inputs related to user settings and function control of the fundus image recognition device and the training device for fundus image recognition, and may be an input apparatus such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball or a joystick. The output apparatus 124 may include a display device, an auxiliary lighting device (e.g., an LED), a haptic feedback device (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display and a plasma display. In some embodiments, the display device may be a touch screen.
Various embodiments of the systems and techniques described herein can be realized in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that can receive data and instructions from a storage system, at least one input apparatus and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus and the at least one output apparatus.
These computer programs (also referred to as programs, software, software applications or code) include machine instructions of a programmable processor, and can be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, a magnetic disk, an optical disk, a memory, a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode-ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback or tactile feedback), and input from the user may be received in any form (including acoustic input, voice input or tactile input).
The systems and techniques described herein can be implemented in a computing system including a back-end component (e.g., as a data server), or a computing system including a middleware component (e.g., an application server), or a computing system including a front-end component (e.g., a user computer having a graphical user interface or a web browser through which the user can interact with embodiments of the systems and techniques described herein), or a computing system including any combination of such back-end, middleware or front-end components. The components of the system can be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN) and the Internet.
A computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other.
According to the technical solutions of the embodiments of the present application, because the local texture feature, edge feature and color feature are extracted from the image to be recognized, the technical problem in the prior art that insufficient capability of image representation leads to low recognition accuracy is overcome, thereby achieving the technical effect of improving recognition accuracy; and because the preset model is obtained by training a boosted tree, the problem in the prior art that a trained preset model is prone to overfitting when the amount of data is small is overcome.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added or deleted. For example, the steps recited in the present application may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be realized; no limitation is imposed herein.
The specific embodiments above do not constitute a limitation on the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions can be made according to design requirements and other factors. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Claims (33)
1. A fundus image recognition method, applied to a terminal device connected to an image acquisition unit, comprising:
acquiring an image to be recognized captured by the image acquisition unit;
extracting a local texture feature, an edge feature and a color feature of the image to be recognized, respectively;
splicing the local texture feature, the edge feature and the color feature to obtain a splicing feature;
inputting the splicing feature into a preset model, so as to identify, by the preset model, whether the image to be recognized is a fundus image.
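The splicing step of claim 1 is plain vector concatenation. The sketch below is a hypothetical illustration, not the patented implementation; the dimensions (10, 8100 and 6) are picked from the ranges recited in the dependent claims and yield a splicing feature of 8116 dimensions, inside the 8110-8120 range of claim 8:

```python
def splice_features(texture, edge, color):
    """Concatenate texture, edge and colour vectors into one splicing feature."""
    return list(texture) + list(edge) + list(color)

# Illustrative (hypothetical) dimensions: a 10-dim local texture feature,
# an 8100-dim edge feature and a 6-dim colour feature.
splicing_feature = splice_features([0.0] * 10, [0.0] * 8100, [0.0] * 6)
```

The splicing feature would then be passed to the preset model as a single input vector.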
2. The method according to claim 1, wherein extracting the local texture feature of the image to be recognized comprises:
dividing the image to be recognized into a first preset number of image blocks, the first preset number being equal to the dimension of the local texture feature of the image to be recognized;
calculating a local binary pattern value for each pixel in each image block;
summing the local binary pattern values of all pixels in each image block to obtain a local binary pattern total value;
splicing the first preset number of local binary pattern total values as the local texture feature of the image to be recognized.
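The per-pixel and per-block computation of claim 2 can be sketched as follows. This is a minimal hypothetical implementation assuming a standard 8-neighbour local binary pattern on a grayscale image held as a nested list; the claim itself does not fix the neighbourhood or bit order:

```python
def lbp_value(img, r, c):
    """8-neighbour local binary pattern value of pixel (r, c).

    Each neighbour at least as bright as the centre contributes one bit.
    """
    center = img[r][c]
    # Clockwise neighbour offsets starting at the top-left pixel (a convention
    # chosen here for illustration).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    value = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            value |= 1 << bit
    return value


def block_lbp_total(img):
    """Sum of LBP values over all interior pixels of one image block."""
    rows, cols = len(img), len(img[0])
    return sum(lbp_value(img, r, c)
               for r in range(1, rows - 1)
               for c in range(1, cols - 1))
```

Per claim 2, one such total value is computed per block, and the totals are spliced into the texture feature, so its dimension equals the number of blocks.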
3. The method according to claim 1 or 2, wherein the dimension of the local texture feature of the image to be recognized ranges from 9 to 11.
4. The method according to claim 1, wherein extracting the edge feature of the image to be recognized comprises:
calculating a gradient for each pixel in the image to be recognized;
dividing the image to be recognized into a plurality of image cells;
determining a gradient histogram for each image cell according to the gradients of the pixels in the image to be recognized;
determining the edge feature of the image to be recognized according to the gradient histograms of the image cells.
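The gradient-histogram steps of claim 4 resemble a histogram-of-oriented-gradients (HOG) descriptor. A minimal sketch, under the assumptions that gradients are central differences and orientations are binned unsigned over [0°, 180°) with 9 bins (choices made here for illustration, not fixed by the claim):

```python
import math


def pixel_gradients(img):
    """Central-difference gradient magnitude and orientation per interior pixel."""
    rows, cols = len(img), len(img[0])
    grads = {}
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            # Unsigned orientation in [0, 180) degrees, as is usual for HOG.
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            grads[(r, c)] = (mag, ang)
    return grads


def cell_histogram(grads, pixels, bins=9):
    """Magnitude-weighted orientation histogram for one image cell."""
    hist = [0.0] * bins
    width = 180.0 / bins
    for p in pixels:
        mag, ang = grads[p]
        hist[min(int(ang // width), bins - 1)] += mag
    return hist
```

The cell histograms would then be spliced to form the edge feature, whose dimension (cells × bins) can land in the 8000-8200 range recited in claim 5.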
5. The method according to claim 1 or 4, wherein the dimension of the edge feature ranges from 8000 to 8200.
6. The method according to claim 1, wherein extracting the color feature of the image to be recognized comprises:
computing, for each of a preset number of channels of the image to be recognized, the mean and standard deviation of the color component;
splicing the means and standard deviations of the color components of the preset number of channels to obtain the color feature of the image to be recognized.
7. The method according to claim 1, wherein extracting the color feature of the image to be recognized comprises:
computing the mean and standard deviation of the color component of the image to be recognized in each of the R, G and B channels;
splicing the means and standard deviations of the color components of the R, G and B channels to obtain a 6-dimensional color feature.
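The 6-dimensional color feature of claim 7 can be sketched directly. The only assumption made here (the claim does not specify it) is the population standard deviation; sample standard deviation would be an equally valid reading:

```python
import statistics


def channel_stats(values):
    """Mean and standard deviation of one colour channel's component values."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)  # population std; an assumption, see lead-in
    return mean, std


def color_feature(r_vals, g_vals, b_vals):
    """6-dim colour feature: (mean, std) for each of R, G, B, spliced in order."""
    feature = []
    for channel in (r_vals, g_vals, b_vals):
        feature.extend(channel_stats(channel))
    return feature
```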
8. The method according to claim 1, wherein the dimension of the splicing feature ranges from 8110 to 8120.
9. The method according to any one of claims 1-8, wherein the preset model is obtained by training based on a gradient boosted tree.
10. The method according to any one of claims 1-8, wherein extracting the local texture feature, edge feature and color feature of the image to be recognized comprises:
inputting the image to be recognized into a preset feature extraction model, so that the preset feature extraction model extracts the local texture feature, edge feature and color feature of the image to be recognized.
11. The method according to claim 1, wherein after inputting the splicing feature into the preset model so as to identify, by the preset model, whether the image to be recognized is a fundus image, the method further comprises:
displaying a recognition result on the terminal device.
12. A training method for fundus image recognition, comprising:
acquiring a training image carrying first annotation information, the first annotation information including whether the training image is a fundus image;
extracting a local texture feature, an edge feature and a color feature of the training image;
splicing the local texture feature, the edge feature and the color feature to obtain a splicing feature;
inputting the splicing feature into a pre-built gradient boosted tree model, so as to identify, by the pre-built preset model, whether the training image is a fundus image;
adjusting network parameters of the pre-built gradient boosted tree model based on the difference between the recognition result and the first annotation information.
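The training loop of claim 12 can be sketched with scikit-learn's `GradientBoostingClassifier` standing in for the pre-built gradient boosted tree model (an assumption — the claim names no library), and random vectors standing in for real splicing features with fundus/non-fundus labels:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical stand-in data: each row is a splicing feature vector; the label
# (first annotation information) says whether the source image is a fundus
# image (1) or not (0). A toy, trivially separable labelling is used here.
features = rng.normal(size=(40, 16))
labels = (features[:, 0] > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
model.fit(features, labels)      # parameters adjusted from the annotation
preds = model.predict(features)  # recognition result on the training images
train_acc = float((preds == labels).mean())
```

On real data, the fit would be driven by the disagreement between predictions and annotations, which is exactly the loss gradient each new tree in the ensemble is fitted to.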
13. The method according to claim 12, wherein extracting the local texture feature, edge feature and color feature of the training image comprises:
inputting the training image into a preset feature extraction model, so that the preset feature extraction model extracts the local texture feature, edge feature and color feature of the training image.
14. The method according to claim 13, wherein before inputting the training image into the preset feature extraction model so that the preset feature extraction model extracts the local texture feature, edge feature and color feature of the training image, the method further comprises:
acquiring a training image carrying second annotation information, the second annotation information including at least a local texture feature, an edge feature and a color feature;
inputting the training image into a pre-built feature extraction network, so that the pre-built feature extraction network extracts the local texture feature, edge feature and color feature of the training image;
adjusting network parameters of the pre-built feature extraction network based on the differences between the extracted local texture feature, edge feature and color feature and the annotated local texture feature, edge feature and color feature, respectively.
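The adjustment step of claim 14 — compare the extracted features against the annotated ones and update the network parameters from the difference — can be sketched in a drastically simplified form as gradient descent on a single linear layer. Everything below is a hypothetical stand-in: the real feature extraction network, its architecture and its training data are not specified at this level of code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in data: 64 flattened "training images" and their annotated feature
# vectors (second annotation information), generated here from a linear map.
images = rng.normal(size=(64, 12))
true_w = rng.normal(size=(12, 5))
targets = images @ true_w  # annotated (texture + edge + colour) features

w = np.zeros((12, 5))      # the "network parameters" being adjusted
lr = 0.1
for _ in range(500):
    extracted = images @ w                   # features the network extracts
    diff = extracted - targets               # difference against the annotation
    w -= lr * images.T @ diff / len(images)  # adjustment (mean-squared-error gradient)

final_err = float(np.abs(images @ w - targets).mean())
```

The point of the sketch is the loop shape, not the layer: a deeper network would replace `images @ w` but keep the extract/compare/adjust cycle the claim describes.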
15. A fundus image recognition apparatus, comprising:
a first acquisition module, configured to acquire an image to be recognized;
a first extraction module, configured to extract a local texture feature, an edge feature and a color feature of the image to be recognized, respectively;
a first splicing module, configured to splice the local texture feature, the edge feature and the color feature to obtain a splicing feature;
a first identification module, configured to input the splicing feature into a preset model, so as to identify, by the preset model, whether the image to be recognized is a fundus image.
16. The apparatus according to claim 15, wherein, when extracting the local texture feature of the image to be recognized, the first extraction module is specifically configured to:
divide the image to be recognized into a first preset number of image blocks, the first preset number being equal to the dimension of the local texture feature of the image to be recognized;
calculate a local binary pattern value for each pixel in each image block;
sum the local binary pattern values of all pixels in each image block to obtain a local binary pattern total value;
splice the first preset number of local binary pattern total values as the local texture feature of the image to be recognized.
17. The apparatus according to claim 15 or 16, wherein the dimension of the local texture feature of the image to be recognized ranges from 9 to 11.
18. The apparatus according to claim 15, wherein, when extracting the edge feature of the image to be recognized, the first extraction module is specifically configured to:
calculate a gradient for each pixel in the image to be recognized;
divide the image to be recognized into a plurality of image cells;
determine a gradient histogram for each image cell according to the gradients of the pixels in the image to be recognized;
determine the edge feature of the image to be recognized according to the gradient histograms of the image cells.
19. The apparatus according to claim 15 or 18, wherein the dimension of the edge feature ranges from 8000 to 8200.
20. The apparatus according to claim 15, wherein, when extracting the color feature of the image to be recognized, the first extraction module is specifically configured to:
compute, for each of a preset number of channels of the image to be recognized, the mean and standard deviation of the color component;
splice the means and standard deviations of the color components of the preset number of channels to obtain the color feature of the image to be recognized.
21. The apparatus according to claim 15, wherein, when extracting the color feature of the image to be recognized, the first extraction module is specifically configured to:
compute the mean and standard deviation of the color component of the image to be recognized in each of the R, G and B channels;
splice the means and standard deviations of the color components of the R, G and B channels to obtain a 6-dimensional color feature.
22. The apparatus according to claim 15, wherein the dimension of the splicing feature ranges from 8110 to 8120.
23. The apparatus according to any one of claims 15-22, wherein the preset model is obtained by training based on a gradient boosted tree.
24. The apparatus according to any one of claims 15-22, wherein, when extracting the local texture feature, edge feature and color feature of the image to be recognized, the first extraction module is specifically configured to:
input the image to be recognized into a preset feature extraction model, so that the preset feature extraction model extracts the local texture feature, edge feature and color feature of the image to be recognized.
25. The apparatus according to claim 15, further comprising:
a display module, configured to display a recognition result.
26. A training apparatus for fundus image recognition, comprising:
a second acquisition module, configured to acquire a training image carrying first annotation information, the first annotation information including whether the training image is a fundus image;
a second extraction module, configured to extract a local texture feature, an edge feature and a color feature of the training image;
a second splicing module, configured to splice the local texture feature, the edge feature and the color feature to obtain a splicing feature;
a second identification module, configured to input the splicing feature into a pre-built gradient boosted tree model, so as to identify, by the pre-built preset model, whether the training image is a fundus image;
an adjustment module, configured to adjust network parameters of the pre-built gradient boosted tree model based on the difference between the recognition result and the first annotation information.
27. The apparatus according to claim 26, wherein, when extracting the local texture feature, edge feature and color feature of the training image, the second extraction module is specifically configured to:
input the training image into a preset feature extraction model, so that the preset feature extraction model extracts the local texture feature, edge feature and color feature of the training image.
28. The apparatus according to claim 27, wherein:
the second acquisition module is further configured to acquire a training image carrying second annotation information, the second annotation information including at least a local texture feature, an edge feature and a color feature;
the second identification module is further configured to input the training image into a pre-built feature extraction network, so that the pre-built feature extraction network extracts the local texture feature, edge feature and color feature of the training image;
the adjustment module is further configured to adjust network parameters of the pre-built feature extraction network based on the differences between the extracted local texture feature, edge feature and color feature and the annotated local texture feature, edge feature and color feature, respectively.
29. A fundus image recognition device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method according to any one of claims 1-11.
30. A training device for fundus image recognition, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method according to any one of claims 12-14.
31. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used for causing a computer to execute the method according to any one of claims 1-14.
32. A fundus image recognition method, comprising:
acquiring an image to be recognized;
extracting a texture feature and a color feature of the image to be recognized;
splicing the texture feature and the color feature to obtain a splicing feature;
identifying, based on the splicing feature, whether the image to be recognized is a fundus image.
33. The method according to claim 32, wherein identifying, based on the splicing feature, whether the image to be recognized is a fundus image comprises:
inputting the splicing feature into a preset model, so as to identify, by the preset model, whether the image to be recognized is a fundus image, the preset model being obtained by training based on a gradient boosted tree.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910769997.XA CN110472600A (en) | 2019-08-20 | 2019-08-20 | The identification of eyeground figure and its training method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110472600A true CN110472600A (en) | 2019-11-19 |
Family
ID=68512850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910769997.XA Pending CN110472600A (en) | 2019-08-20 | 2019-08-20 | The identification of eyeground figure and its training method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472600A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104077612A (en) * | 2014-07-15 | 2014-10-01 | 中国科学院合肥物质科学研究院 | Pest image recognition method based on multi-feature sparse representation technology |
CN106548195A (en) * | 2016-10-13 | 2017-03-29 | 华南理工大学 | A kind of object detection method based on modified model HOG ULBP feature operators |
CN106780465A (en) * | 2016-08-15 | 2017-05-31 | 哈尔滨工业大学 | Retinal images aneurysms automatic detection and recognition methods based on gradient vector analysis |
CN107248161A (en) * | 2017-05-11 | 2017-10-13 | 江西理工大学 | Retinal vessel extracting method is supervised in a kind of having for multiple features fusion |
Non-Patent Citations (3)
Title |
---|
柳杨: "《数字图像物体识别理论详解与实战》", 31 January 2018, 北京邮电大学出版社 * |
蒋先刚: "《基于稀疏表达的火焰与烟雾探测方法研究》", 31 August 2017, 西南交通大学出版社 * |
邓超 等: "《数字图像处理与模式识别研究》", 30 June 2018, 地质出版社 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111833340A (en) * | 2020-07-21 | 2020-10-27 | 北京百度网讯科技有限公司 | Image detection method, image detection device, electronic equipment and storage medium |
US11798193B2 (en) | 2020-07-21 | 2023-10-24 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Multi-dimensional image detection on at least two acquired images |
CN111833340B (en) * | 2020-07-21 | 2024-03-26 | 阿波罗智能技术(北京)有限公司 | Image detection method, device, electronic equipment and storage medium |
CN112652392A (en) * | 2020-12-22 | 2021-04-13 | 成都市爱迦科技有限责任公司 | Fundus anomaly prediction system based on deep neural network |
CN113591877A (en) * | 2021-07-05 | 2021-11-02 | 贵州电网有限责任公司 | Insulator icing type identification method and device, storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||