CN109829446A - Fundus image recognition method, apparatus, electronic device and storage medium - Google Patents
- Publication number
- CN109829446A CN109829446A CN201910167485.6A CN201910167485A CN109829446A CN 109829446 A CN109829446 A CN 109829446A CN 201910167485 A CN201910167485 A CN 201910167485A CN 109829446 A CN109829446 A CN 109829446A
- Authority
- CN
- China
- Prior art keywords
- identified
- fundus image
- eye fundus
- lesion
- block grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The embodiments of the invention disclose a fundus image recognition method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining a fundus image to be identified, and applying grid partitioning to it to form multiple block grids to be identified; inputting each block grid of the fundus image into a pre-trained neural network model to determine the lesion state of that block grid; and determining the lesion state of the fundus image according to the lesion states and the positions of the block grids within the image. The technical solution of the embodiments reduces the number of fundus image samples required and the computation of the model training process while preserving lesion recognition accuracy and multi-type lesion recognition.
Description
Technical field
The embodiments of the present invention relate to the field of image processing, and in particular to a fundus image recognition method and apparatus, an electronic device, and a storage medium.
Background technique
A fundus image records the retinal information of a patient and allows a doctor to identify and locate lesions for the diagnosis of eye disease.
To reduce the workload and subjectivity of doctors, the prior art has gradually introduced computer-assisted processing to recognize fundus images, for example using methods based on threshold segmentation, morphological segmentation, classification, and deep learning. In the deep-learning-based method, a doctor manually marks the focal areas of fundus images, the marked images together with some normal fundus images are input as training samples to train a deep learning model, and the trained model is then used to identify whether an unknown fundus image contains a lesion.
Because the lesion types reflected in fundus images are numerous and varied in shape, and the fundus images of normal people are also diverse, a model that is to recognize lesions accurately, and to recognize lesions of various types, requires a large number of training samples during training, and the computation required to process those samples is also large. However, limited by the quantity and accuracy of the samples marked by doctors, the goals of accurate model recognition and applicability to multiple lesion types are difficult to achieve.
Summary of the invention
The embodiments of the present invention provide a fundus image recognition method and apparatus, an electronic device, and a storage medium, to reduce the number of fundus image samples required and the computation of the model training process while preserving lesion recognition accuracy and multi-type lesion recognition.
In a first aspect, an embodiment of the invention provides a fundus image recognition method, comprising:
obtaining a fundus image to be identified, and applying grid partitioning to the fundus image to form multiple block grids to be identified;
inputting each block grid of the fundus image to be identified into a pre-trained neural network model to determine the lesion state of that block grid; and
determining the lesion state of the fundus image to be identified according to the lesion states and the positions of the block grids within the image.
In a second aspect, an embodiment of the invention further provides a fundus image recognition apparatus, comprising:
a grid partitioning module, configured to obtain a fundus image to be identified and apply grid partitioning to it to form multiple block grids to be identified;
a grid lesion state determining module, configured to input each block grid of the fundus image to be identified into a pre-trained neural network model to determine the lesion state of that block grid; and
an image lesion state determining module, configured to determine the lesion state of the fundus image to be identified according to the lesion states and the positions of the block grids within it.
In a third aspect, an embodiment of the invention further provides an electronic device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the fundus image recognition method provided by the first aspect.
In a fourth aspect, an embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the fundus image recognition method provided by the first aspect.
In the embodiments of the invention, a fundus image to be identified is obtained and grid-partitioned to form multiple block grids to be identified; each block grid of the fundus image is input into a pre-trained neural network model to determine its lesion state; and the lesion state of the fundus image is determined from the lesion states and the positions of the block grids within it. By partitioning the fundus image into grids and using the resulting grids as the input of the neural network model, the above technical solution reduces the number of parameters of the model and thereby the computation of model training. By training the neural network model on block grids instead of whole fundus images, it reduces the number of fundus image training samples required. In addition, dividing a complex fundus image into different block grids, in the spirit of breaking the whole into parts, attends to local detail in the image and thus ensures the accuracy of lesion recognition on the fundus image. Furthermore, by combining the position information of each block grid within the fundus image, the specific position of each block grid and the relative positions of adjacent block grids can assist the recognition of multiple lesion types.
Description of the drawings
Fig. 1 is a flowchart of a fundus image recognition method in embodiment one of the present invention;
Fig. 2 is a flowchart of a fundus image recognition method in embodiment two of the present invention;
Fig. 3A is a structural schematic diagram of a fundus image recognition model in embodiment three of the present invention;
Fig. 3B is a structural schematic diagram of a neural network model in embodiment three of the present invention;
Fig. 4A is a flowchart of a neural network model training method in embodiment four of the present invention;
Fig. 4B is a lesion fundus image in embodiment four of the present invention;
Fig. 4C is a training fundus image after preprocessing and equal-size grid division in embodiment four of the present invention;
Fig. 4D shows the marked lesion block grids and normal block grids in embodiment four of the present invention;
Fig. 4E is a flowchart of a fundus image recognition method in embodiment four of the present invention;
Fig. 4F is a class activation map in embodiment four of the present invention;
Fig. 4G is the final display image in embodiment four of the present invention;
Fig. 5 is a structure diagram of a fundus image recognition apparatus in embodiment five of the present invention;
Fig. 6 is a structural schematic diagram of an electronic device in embodiment six of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of a fundus image recognition method in embodiment one of the present invention. This embodiment is applicable to performing lesion state recognition on fundus images. The method is executed by a fundus image recognition apparatus implemented in software and/or hardware and configured in an electronic device, which may be a mobile or fixed terminal with sufficient data processing capability, or a server.
A fundus image recognition method as shown in Fig. 1 comprises:
S110: obtaining a fundus image to be identified, and applying grid partitioning to the fundus image to form multiple block grids to be identified.
The fundus consists of the macula on the retina, the fundus blood vessels, the optic papilla, the optic nerve fibers, the retina, and the choroid behind the retina. A fundus image can reflect, to some extent, whether ocular tissue is abnormal, for example whether there are microaneurysms, hard exudates, or hemorrhages.
Optionally, the fundus image to be identified may be acquired in real time or at fixed intervals directly from a fundus image acquisition device after it has imaged the object to be examined. Alternatively, fundus images acquired by the device may be pre-stored locally on the electronic device, in a storage device associated with it, or in the cloud, and retrieved directly from there when needed.
It should be noted that when the obtained fundus images differ because of their sources, for example when fundus image acquisition devices of different models produce images of somewhat different sizes, the fundus image to be identified can be size-normalized. Illustratively, the fundus image may be scaled to a set target size and the scaled image used in place of the original. The target size can be set by a technician as needed or from empirical values, provided it is consistent with the target size of the fundus images used when training the neural network model.
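As an illustration only, such size normalization could be done with a nearest-neighbour rescale like the numpy sketch below (the 512 x 512 target and the function name are assumed values, not taken from the patent; a production system would more likely use a library resampler):

```python
import numpy as np

def resize_to_target(img, target_h, target_w):
    """Nearest-neighbour rescale of an H x W (x C) image to the target size,
    so images from different acquisition devices share one input size."""
    h, w = img.shape[:2]
    rows = np.arange(target_h) * h // target_h  # source row for each target row
    cols = np.arange(target_w) * w // target_w  # source column for each target column
    return img[rows][:, cols]

img = np.zeros((600, 800, 3), dtype=np.uint8)
print(resize_to_target(img, 512, 512).shape)  # (512, 512, 3)
```

Whatever resampler is chosen, the paragraph above requires the same target size at training and inference time.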
Because the edge portion of fundus images from different acquisition devices may carry labels of various shapes, or may contain strong noise due to over-exposure caused by ambient light during acquisition, the irrelevant information at the fundus edge of the image (such as noise and label information) can also be filtered out to prevent edge noise from affecting the lesion recognition result, with the denoised image replacing the original. Illustratively, edge detection can be applied to the fundus image region and the irrelevant information within a set width of the detected edge filtered out. It should be noted that the edge detection method and the filtering method used must be consistent with those used when training the neural network model.
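A minimal sketch of this filtering step, under the simplifying assumption that the set width is measured from the image border rather than from a detected fundus contour (the function name and that simplification are ours, not the patent's):

```python
import numpy as np

def suppress_border(img, width):
    """Zero out a ring of `width` pixels along the image border, discarding
    device labels and over-exposure noise near the fundus edge."""
    out = img.copy()
    out[:width] = 0       # top rows
    out[-width:] = 0      # bottom rows
    out[:, :width] = 0    # left columns
    out[:, -width:] = 0   # right columns
    return out
```

As the paragraph notes, the identical routine (with the identical width) would have to be applied to the training images.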
Fundus images also differ in brightness and contrast because of the ambient illumination during acquisition and intrinsic differences between acquisition devices. To avoid different lesion recognition results caused by differing brightness and contrast, at least one color channel can be chosen and histogram equalization applied to the fundus image to be identified, with the processed image replacing the original. Considering that the green channel of a color fundus image carries richer and more important information than the red and blue channels, histogram equalization is usually applied to the green channel image to adjust the contrast of the image to be recognized. Illustratively, the histogram equalization can be contrast-limited adaptive histogram equalization. It should be noted that the chosen color channel and the equalization method must be consistent with the channel and method used during model training.
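For illustration, plain (non-adaptive) histogram equalization of the green channel can be sketched in numpy as below; the patent's preferred contrast-limited adaptive variant additionally tiles the image and clips the histogram, which is omitted here:

```python
import numpy as np

def equalize_green(img):
    """Histogram-equalize the green channel of an RGB fundus image,
    leaving the red and blue channels untouched."""
    out = img.copy()
    g = out[:, :, 1]
    hist = np.bincount(g.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = cdf * 255 // cdf[-1]          # map each grey level through the CDF
    out[:, :, 1] = lut[g].astype(np.uint8)
    return out
```

In practice the adaptive variant would be preferred, for example OpenCV's `createCLAHE` applied to the green channel.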
Optionally, the grid partitioning of the fundus image to be identified can divide it into equal-size segments of a set size, yielding multiple block grids of identical size.
Alternatively, the fundus image can be divided according to different set ratios, yielding multiple block grids of different sizes; the ratios can be set empirically by a technician according to the positions where lesions are likely to appear in a fundus image (e.g. the relative position of lesion and fundus image) and the lesion sizes (e.g. the relative size of lesion and fundus image).
Alternatively, the same fundus image can be divided several times, each time uniformly into equal-size segments at a different division ratio, yielding multiple block grids of different sizes. The number of divisions (e.g. 3) and the ratio used each time can be set by a technician as needed or from empirical values.
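The equal-size partitioning option above can be sketched, for illustration only, in a few lines of numpy (the 512-pixel image size and 128-pixel block side are assumed values, not taken from the patent):

```python
import numpy as np

def grid_partition(img, block):
    """Split an H x W x C image into equal-size block grids of side `block`,
    recording each block's (row, col) position for the later aggregation step."""
    h, w = img.shape[:2]
    blocks, positions = [], []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            blocks.append(img[r:r + block, c:c + block])
            positions.append((r, c))
    return blocks, positions

img = np.zeros((512, 512, 3), dtype=np.uint8)  # assumed normalized size
blocks, positions = grid_partition(img, 128)
print(len(blocks))  # 16 blocks of 128 x 128
```

The unequal-ratio variants described above would differ only in how the crop sizes and strides are chosen.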
S120: inputting each block grid of the fundus image to be identified into a pre-trained neural network model to determine the lesion state of that block grid.
The pre-trained neural network model may be obtained by inputting block grids divided from a large number of fundus images, together with the known lesion state of each block grid, as training samples to train the parameters to be optimized in the neural network model. It will be understood that, since each block grid is treated as an independent subject during training, a model trained on the image data and known lesion labels of those subjects can likewise predict the lesion state of an input block grid to be identified.
It should be noted that, because block grids replace whole fundus images during model training, and the data volume of a single block grid is much smaller than that of a whole fundus image, the number of parameters to be optimized in the trained neural network model is also much smaller, reducing the computation of model training. In addition, since many block grids can be divided from a single fundus image, the number of fundus images needed for training is correspondingly reduced. Furthermore, since a block grid carries local information of the fundus image, dividing a complex image into block grids for training, in the spirit of breaking the whole into parts, attends to more local detail in the image, ensuring the precision of the trained neural network model.
It will be understood that, to balance lesions of different sizes and avoid false and missed detections, the input data can be augmented before each block grid of the fundus image to be identified is fed to the pre-trained neural network model. Specifically, each block grid to be identified can be taken as an original block grid, and at least one adjusted block grid, which includes at least a partial region of the original block grid, obtained from the fundus image to be identified; each adjusted block grid is then input into the neural network model together with its original block grid to recognize the lesion state of the original block grid.
S130: determining the lesion state of the fundus image to be identified according to the lesion states and the positions of the block grids within the image.
From the lesion state of each block grid and its position in the fundus image, it can be learned whether the fundus image contains a lesion and, if so, where. Moreover, by stitching the block grids back together in the order in which they were partitioned and extracting contours from the stitched image, the contour of the lesion can also be obtained. To make the extracted contour more accurate, preprocessing such as smoothing filtering can be applied to the stitched image before contour extraction to reduce noise interference. To associate the extracted contour with the fundus image to be identified, the contour can also be displayed overlaid on that image.
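The aggregation in S130 can be illustrated with a minimal sketch (names assumed): the image-level lesion state is positive if any block grid is positive, and the positions recorded during partitioning locate the lesion:

```python
def image_lesion_state(block_states, positions):
    """Combine per-block lesion predictions (1 = lesion, 0 = normal) with the
    blocks' (row, col) positions into the image-level result."""
    lesion_positions = [p for s, p in zip(block_states, positions) if s]
    return bool(lesion_positions), lesion_positions

has_lesion, where = image_lesion_state(
    [0, 1, 0, 1], [(0, 0), (0, 128), (128, 0), (128, 128)])
print(has_lesion, where)  # True [(0, 128), (128, 128)]
```

The contour extraction and overlay described above would then operate on the image regions at the returned positions.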
In the embodiment of the invention, a fundus image to be identified is obtained and grid-partitioned to form multiple block grids to be identified; each block grid of the fundus image is input into a pre-trained neural network model to determine its lesion state; and the lesion state of the fundus image is determined from the lesion states and the positions of the block grids within it. By partitioning the fundus image into grids and using the resulting grids as the input of the neural network model, the above technical solution reduces the number of parameters of the model and thereby the computation of model training. By training the neural network model on block grids instead of whole fundus images, it reduces the number of fundus image training samples required. In addition, dividing a complex fundus image into different block grids, in the spirit of breaking the whole into parts, attends to local detail in the image and thus ensures the accuracy of lesion recognition on the fundus image. Furthermore, by combining the position information of each block grid within the fundus image, the specific position of each block grid and the relative positions of adjacent block grids can assist the recognition of multiple lesion types.
Embodiment two
Fig. 2 is a flowchart of a fundus image recognition method in embodiment two of the present invention. This embodiment optimizes and improves on the technical solutions of the above embodiments.
Further, before the operation "inputting each block grid of the fundus image to be identified into a pre-trained neural network model to determine the lesion state of that block grid", the following is added: "obtaining lesion block grids and normal block grids, wherein a lesion block grid is a block grid in a lesion fundus image marked as containing a lesion; and inputting the lesion block grids and normal block grids into the neural network model for training", to improve the training mechanism of the neural network model.
A fundus image recognition method as shown in Fig. 2 comprises:
S211: obtaining lesion block grids and normal block grids, wherein a lesion block grid is a block grid in a lesion fundus image marked as containing a lesion.
A normal block grid is a block grid in a lesion fundus image that is not marked, or that is marked as not containing a lesion; and/or a block grid of a normal fundus image.
Lesion block grids include original block grids and adjusted block grids, where an adjusted block grid is extracted from the lesion fundus image and includes at least a partial region of an original block grid.
Illustratively, extracting from a lesion fundus image the adjusted block grids that include partial regions of original block grids can comprise: obtaining an original lesion fundus image and applying grid partitioning to obtain multiple original block grids; obtaining lesion state annotation results for the original block grids, the annotation results being "contains a lesion" or "does not contain a lesion"; and obtaining from the original lesion fundus image at least one adjusted block grid of each lesion block grid, as a lesion block grid.
It should be noted that, since only the original block grids need to be labeled when annotating the lesion state, there is no need to delineate the pixel-level contour of the focal area of each original block grid, which greatly saves lesion labeling time and at the same time avoids the random errors introduced when annotation standards are inconsistent and lesion boundaries are difficult to delimit.
Optionally, obtaining at least one adjusted block grid of a lesion block grid from the original lesion fundus image can comprise: in the original lesion fundus image, taking the lesion block grid as the center region and extending its side length by a set multiple to obtain an enlarged block grid as the adjusted block grid. The multiple can be set by a technician as needed or from empirical values, and can for example be 2. It will be understood that generating enlarged adjusted block grids further expands the number of lesion block grids and combines the local information of a large-size lesion with the global information of the region around it.
Alternatively, the adjusted block grid can be obtained by taking the lesion block grid as the center region in the original lesion fundus image and reducing its side length by a set ratio to obtain a reduced block grid. The ratio can be set by a technician as needed or from empirical values, and can for example be 1/2. It will be understood that generating reduced adjusted block grids likewise expands the number of lesion block grids and combines the local information of a small-size lesion with the global information of the region around it.
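Both adjustment options (extending the side length by a multiple, or reducing it by a ratio) amount to re-cropping around the block's centre; a sketch under assumed names and sizes, clipped at the image boundary:

```python
import numpy as np

def adjusted_block(img, r, c, block, factor):
    """Crop a grid centred at the same point as the original block at (r, c),
    with side length scaled by `factor` (e.g. 2 to enlarge for large lesions,
    0.5 to shrink for microaneurysm-sized ones)."""
    side = int(block * factor)
    cy, cx = r + block // 2, c + block // 2      # centre of the original block
    r0 = max(0, cy - side // 2)
    c0 = max(0, cx - side // 2)
    r1 = min(img.shape[0], r0 + side)
    c1 = min(img.shape[1], c0 + side)
    return img[r0:r1, c0:c1]

img = np.zeros((512, 512, 3), dtype=np.uint8)
print(adjusted_block(img, 128, 128, 128, 2).shape[:2])    # (256, 256)
print(adjusted_block(img, 128, 128, 128, 0.5).shape[:2])  # (64, 64)
```

Before being fed to the network, crops of different sizes would presumably be rescaled back to the model's fixed input size.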
Illustratively, lesion block grids may be extracted from lesion fundus images of known lesion class when needed. Alternatively, extracted lesion block grids may be stored in advance locally on the electronic device, in a storage device associated with it, or in the cloud, and retrieved directly from there when needed.
Illustratively, normal block grids may be extracted from lesion fundus images of known lesion class when needed. Alternatively, the fundus images of normal people may be grid-partitioned to obtain multiple normal block grids. Alternatively, normal block grids may be stored in advance locally on the electronic device, in an associated storage device, or in the cloud, and retrieved directly when needed. It will be understood that normal block grids and lesion block grids may be stored in the same or different storage regions.
Considering that, when performing fundus image recognition for patients, most patients' fundus images are normal, the specificity requirement on fundus image recognition is high, that is, misdiagnosis of normal fundus images must be kept low, so a large number of normal block grids need to be input during model training. To simplify the acquisition of normal block grids while improving the specificity of fundus image recognition, they are preferably obtained by grid-partitioning the fundus images of normal people, a method that needs no labeling by doctors.
It should be noted that, in the acquisition of the original lesion fundus images and the fundus images of normal people (hereinafter collectively referred to as training fundus images), when the training images differ because of their sources, for example because acquisition devices of different models produce images of somewhat different sizes, the training fundus images can be size-normalized. Illustratively, the training image may be scaled to a set target size and the scaled image used in its place; the target size can be set by a technician as needed or from empirical values.
Because the edge portion of fundus images from different acquisition devices may carry labels of various shapes, or may contain strong noise due to over-exposure caused by ambient light during acquisition, the noise information at the fundus edge of the training image can also be filtered out to prevent edge noise from affecting the precision of the trained model, with the denoised training image replacing the original. Illustratively, edge detection can be applied to the training image region and the irrelevant information within a set width of the detected edge filtered out.
The brightness and contrast of the acquired fundus images also vary with the ambient illumination during acquisition and with intrinsic differences between acquisition devices. To keep lesion recognition results independent of the brightness and contrast of the training fundus images, at least one color channel can be selected, histogram equalization applied to the training fundus image on that channel, and the processed image substituted for the original training fundus image. Considering that the green channel of a color fundus image carries richer and more important information than the red and blue channels, equalization is usually applied to the green-channel image to adjust the contrast of the image to be recognized. Illustratively, the histogram equalization can be contrast-limited adaptive histogram equalization.
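A plain histogram equalization of the green channel can be sketched as below. The patent's preferred variant is contrast-limited *adaptive* equalization (CLAHE), which operates on local tiles and is more involved; this minimal NumPy sketch shows only the simple global form, and the function name is illustrative:

```python
import numpy as np

def equalize_green_channel(img):
    """Histogram-equalize the green channel of an RGB fundus image.

    img: uint8 array of shape (H, W, 3). Returns a copy in which the
    green channel is equalized; red and blue are left untouched.
    """
    out = img.copy()
    g = img[:, :, 1]
    hist = np.bincount(g.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each grey level through the normalized cumulative histogram.
    denom = max(int(cdf[-1] - cdf_min), 1)
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    out[:, :, 1] = lut[g]
    return out
```

In practice a CLAHE implementation (e.g. from an image-processing library) would replace the global lookup table with per-tile, clip-limited tables.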
S212, input the lesion grid blocks and the normal grid blocks into the neural network model for training.
It should be noted that because the lesion grid blocks include grid blocks of different sizes, such as enlarged grid blocks, original grid blocks and reduced grid blocks, the trained neural network model can accommodate lesions of different sizes, which improves its recognition accuracy for lesions of different sizes and types. For example, the introduction of the enlarged grid blocks helps distinguish lesions from physiological structures (such as the optic disc, crossing vessels or the macular area); the reduced grid blocks better fit small lesions such as microaneurysms; and the original grid blocks better fit regional exudation and hemorrhage.
S220, obtain the fundus image to be identified, and perform grid partitioning on it to form multiple grid blocks to be identified.
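The grid partitioning of S220 can be sketched as follows, a minimal NumPy version; keeping each block's position alongside the pixels is an assumption motivated by S240, which needs the block positions when per-block results are combined:

```python
import numpy as np

def split_into_grid(img, block):
    """Split an (H, W, ...) image into non-overlapping block x block patches.

    Returns a list of ((row, col), patch) pairs so that each patch keeps
    its position in the original image. Any remainder strip narrower than
    `block` at the right/bottom edge is dropped in this sketch.
    """
    h, w = img.shape[:2]
    return [((y, x), img[y:y + block, x:x + block])
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
```

For a 1024×1024 normalized image (as in embodiment four), a block size of 64 would yield a 16×16 grid of patches.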
S230, input each grid block to be identified of the fundus image into the pre-trained neural network model to determine the lesion state of that grid block.
S240, determine the lesion state of the fundus image to be identified according to the lesion states and the positions of the grid blocks in the fundus image.
It should be noted that the embodiment of the present invention places no restriction on the specific execution order of S211~S212; it is only necessary that S211~S212 be completed before S230.
The embodiment of the present invention adds a neural network model training step before each grid block of the fundus image to be identified is input into the pre-trained model to determine its lesion state, thereby completing the training mechanism of the model. Meanwhile, by replacing the prior-art practice of manually delineating lesions at pixel level with block-level annotation of lesion grid blocks, annotation efficiency is improved and labeling time per lesion type is saved, while the random errors introduced when annotation standards are inconsistent and lesion boundaries are hard to delimit are avoided. In addition, the introduction of normal grid blocks simplifies the acquisition of grid blocks that contain no lesion, while improving the specificity of the trained neural network model for fundus image recognition.
Embodiment three
On the basis of the technical solutions of the above embodiments, this embodiment of the present invention further optimizes the fundus image recognition method.
Fig. 3A is a structural diagram of a fundus image recognition model in embodiment three of the present invention. The model includes a preprocessing module 310, a model training module 320 and a post-processing module 330.
The preprocessing module 310 performs preprocessing on an acquired reference fundus image. A reference fundus image may be a fundus image to be identified, or a training fundus image when model training is performed; the training fundus images include the lesion fundus images from which lesion grid blocks are extracted and the normal-person fundus images from which normal grid blocks are collected.
Specifically, the preprocessing applied to the acquired reference fundus image includes at least one of the following: scaling the reference fundus image to a set target size and replacing the reference fundus image with the scaled result; filtering out the noise information at the fundus edge of the reference fundus image and replacing it with the de-noised result; and selecting at least one color channel, applying histogram equalization to the reference fundus image on that channel, and replacing it with the processed result. The target size can be set by a technician as needed or from empirical values.
The model training module 320 trains the neural network model.
As shown in the structural diagram of the neural network model in Fig. 3B, the neural network model includes a convolutional layer 321, a pooling layer 322 and a classification layer 323.
The convolutional layer 321 includes separate convolution channels for processing the original grid block and each adjusted grid block; each grid block undergoes at least one convolution, forming multiple feature maps of identical matrix size. Fig. 3B exemplarily shows feature-map extraction for the case where the adjusted grid blocks comprise two differently sized variants, an enlarged grid block and a reduced grid block.
Specifically, for the three differently sized grid blocks, the enlarged, original and reduced grid blocks, multi-scale feature maps are extracted by three separate feature extraction networks, and the feature maps produced by the networks are concatenated. Extracting feature maps at different scales takes the feature information at each scale into account, making the resulting feature maps richer and more comprehensive, while irrelevant information in the grid blocks is filtered out, reducing the amount of data processed during model training.
It should be noted that when extracting feature maps from grid blocks of different sizes, the number of convolutions may be the same or different, and the convolution scales and kernels used may be identical or not, as long as the matrix sizes of the final feature maps are guaranteed to be identical. The convolution scale and the kernels used can be set by a technician from empirical values or determined through repeated experiments.
The pooling layer 322 compresses each feature map into the pooled value of a corresponding feature point, extracting the main features and reducing the computational complexity of the network. Illustratively, global average pooling (GAP) is used to compress each feature map at each scale into a single feature point.
Specifically, the pooled value of each feature map is determined according to the following formula:

F_k = (1 / (W × H)) · Σ_{x=1..W} Σ_{y=1..H} f_k(x, y)

where f_k(x, y) is the feature value of the feature point at position (x, y) in the k-th feature map, W and H are the width and height of the feature map, and F_k is the pooled value obtained after pooling the k-th feature map; k = 1, 2, ..., n, where n is the total number of concatenated feature maps.
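Global average pooling as described can be sketched in NumPy: each of the n feature maps collapses to the mean of f_k(x, y) over all positions. The (n, H, W) array layout is an assumption of this sketch:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Global average pooling over a stack of feature maps.

    feature_maps: array of shape (n, H, W). Returns a length-n vector F
    where F[k] is the mean of the k-th map, i.e. each feature map is
    compressed into a single feature point.
    """
    return feature_maps.mean(axis=(1, 2))
```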
The classification layer 323 takes the feature points of the different scales corresponding to each grid block as input and predicts the lesion category of the grid block; the lesion categories are "contains lesion" and "does not contain lesion". That is, the classification layer is a binary classification model that predicts whether a grid block contains a lesion. Illustratively, the binary classification can be performed by computing the probability that the grid block does or does not contain a lesion.
Specifically, whether a grid block contains a lesion is predicted according to the following formula:

p_t = exp(Σ_{k=1..n} w_k^t · F_k) / Σ_{t'} exp(Σ_{k=1..n} w_k^{t'} · F_k)

where t denotes the class of the grid block, the two classes being "contains lesion" and "does not contain lesion"; p_t is the predicted probability of class t; and w_k^t is the classification weight of the feature point corresponding to the k-th feature map for class t; k = 1, 2, ..., n, and n is the total number of concatenated feature maps.
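The two-class softmax prediction from the pooled features can be sketched as below; the row ordering of the weight matrix (row 0 = "contains lesion") is an assumption of this sketch, not stated in the patent:

```python
import numpy as np

def predict_lesion_prob(F, W):
    """Two-class softmax over the pooled features of one grid block.

    F: pooled feature vector of length n (one value per feature map).
    W: classification weights of shape (2, n); row 0 is taken as the
       'contains lesion' class, row 1 as 'does not contain lesion'.
    Returns (p_lesion, p_normal).
    """
    logits = W @ F
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    p = e / e.sum()
    return p[0], p[1]
```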
During training of the neural network model, in order to make the prediction for each grid block approach its actual lesion category, the model is trained iteratively. In general, an objective function is set to constrain the number of iterations of the neural network model: once the value of the objective function is minimal or has converged, the iterative training stops, avoiding over-fitting of the trained model.
During model training, the gap between the number of samples containing a lesion (positive samples, labeled e.g. "1") and samples containing no lesion (negative samples, labeled e.g. "0") is large; usually the proportion of negative samples greatly exceeds that of positive samples, which serves to improve the specificity of the trained model. In addition, because there are many lesion categories, samples of the same lesion category can still differ significantly, and lesions of different categories differ even more; the input samples of the model training process therefore exhibit between-class sample imbalance and large, unbalanced within-class sample variation.
To solve the above problems, the objective function adopted by the embodiment of the present invention is constructed from the focal loss objective function and the Jaccard similarity coefficient loss objective function. Specifically, the focal component of the constructed objective function is as follows:

FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t), with p_t = p if y = 1, and p_t = 1 - p otherwise

where p denotes the model prediction output by the classification layer that the block contains a lesion; y denotes the lesion label value of the grid block; t denotes the class of the grid block, the classes being "contains lesion" and "does not contain lesion"; p_t denotes the converted prediction value for class t; γ denotes the focusing parameter; and α_t denotes the weight of class t. γ and α_t can be set by a technician from empirical values or determined through repeated experiments.
A lesion label value of "0" indicates the annotated lesion category is "does not contain lesion"; a lesion label value of "1" indicates the annotated lesion category is "contains lesion".
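The focal-loss component can be sketched as below. The Jaccard similarity coefficient term of the combined objective is not reproduced here, and the default γ and α values are illustrative choices, not taken from the patent:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.75):
    """Focal loss for one grid block.

    p: predicted probability that the block contains a lesion.
    y: label, 1 = contains lesion, 0 = does not contain lesion.
    p_t is the probability assigned to the true class; the factor
    (1 - p_t)**gamma down-weights easy examples, and alpha_t weights
    the classes to counter the positive/negative imbalance.
    """
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With gamma > 0, a confidently correct prediction contributes almost nothing, so training focuses on the hard, misclassified blocks.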
It should be noted that when the sensitivity and specificity of the trained model differ noticeably, the model learning rate can be reduced and the focusing parameter γ and/or the class weights α_t in the objective function modified; training of the model then continues on the basis of the model already obtained until sensitivity and specificity are balanced, further improving the robustness of the model.
The post-processing module 330 identifies the fundus image to be identified using the trained neural network model.
Specifically, the class activation value of each grid block to be identified in the fundus image is calculated according to the following formula:

M_t(x, y) = Σ_{k=1..n} w_k^t · f_k(x, y)

where f_k(x, y) is the feature value of the feature point at position (x, y) in the k-th feature map, w_k^t is the classification weight in the classification layer of the feature point corresponding to the k-th feature map, and M_t(x, y) is the class activation value of the feature point at position (x, y) in the class activation map; k = 1, 2, ..., n, and n is the total number of concatenated feature maps.
The class activation values are stitched together according to the positions of the feature points in the fundus image to be identified to form a class activation map. According to the positional correspondence between the class activation map and the grid blocks to be identified, and the positions of those grid blocks in the fundus image, the feature points of the class activation map are mapped back into the fundus image to obtain the lesion pattern in the fundus image to be identified.
By extracting feature maps at multiple scales, the embodiment of the present invention takes the feature information of the grid blocks at different scales into account, making the obtained feature maps richer and more comprehensive while filtering out irrelevant information in the grid blocks, which reduces the amount of data processed during model training. In addition, by constructing the objective function used for model training from the focal loss objective function combined with the Jaccard similarity coefficient loss objective function, the problems of between-class sample imbalance and large, unbalanced within-class sample variation are overcome, while the specificity of the model is improved. Furthermore, converting the feature point values of the grid blocks to be identified into a class activation map facilitates locating lesions and extracting their contour information.
Embodiment four
On the basis of the technical solutions of the above embodiments, this embodiment of the present invention provides a preferred implementation.
A neural network model training method as shown in Fig. 4A comprises:
S401, obtain reference fundus images.
The reference fundus images include lesion fundus images and fundus images of normal persons.
Fig. 4B is an acquired lesion fundus image.
S402, normalize the size of the reference fundus image according to the target size.
S403, perform edge detection on the size-normalized reference fundus image, and filter out the irrelevant information at the fundus edge.
The irrelevant information includes labels added to the fundus image by the acquisition device and noise caused by over-exposure at the image edge.
S404, apply contrast-limited adaptive histogram equalization to the green channel of the reference fundus image after the irrelevant information has been filtered out, obtaining the training fundus image.
S405, divide each training fundus image with an equal-size grid to obtain the original grid blocks.
Fig. 4C shows a training fundus image whose size has been normalized to 1024×1024 pixels, with 5% of the edge information removed, contrast-limited adaptive histogram equalization applied to the green channel, and an equal-size grid division performed.
S406, annotate the original grid blocks with lesion categories to obtain the lesion grid blocks and the normal grid blocks.
The lesion categories are "contains lesion" (labeled e.g. "1") and "does not contain lesion" (labeled e.g. "0").
By clicking an original grid block, it can be annotated with the "contains lesion" category. After the lesion fundus images have been annotated, the lesion grid blocks and normal grid blocks are obtained; see Fig. 4D, where the first five rows exemplarily show some lesion grid blocks and the last five rows some normal grid blocks.
S407, in the lesion fundus image corresponding to the training fundus image, take each lesion grid block as the central region and extend its side length by a factor of 2 to obtain the enlarged grid block.
S408, in the lesion fundus image corresponding to the training fundus image, take each lesion grid block as the central region and reduce its side length by 1/2 to obtain the reduced grid block.
S409, input the original grid blocks, enlarged grid blocks and reduced grid blocks into their respective convolution channels for feature extraction, obtaining multiple feature maps of identical matrix size at different scales, and concatenate the feature maps output by the convolution channels.
S410, pass the feature maps corresponding to the same grid block through global average pooling to obtain the feature points corresponding to that grid block.
Specifically, the pooled value of each feature map is determined according to the following formula:

F_k = (1 / (W × H)) · Σ_{x=1..W} Σ_{y=1..H} f_k(x, y)

where f_k(x, y) is the feature value of the feature point at position (x, y) in the k-th feature map, W and H are the width and height of the feature map, and F_k is the pooled value obtained after pooling the k-th feature map; k = 1, 2, ..., n, where n is the total number of concatenated feature maps.
S411, from the feature points corresponding to the feature maps of the same grid block, predict the class of each grid block via a softmax classifier.
Specifically, the predicted class of each grid block is determined according to the following formula:

p_t = exp(Σ_{k=1..n} w_k^t · F_k) / Σ_{t'} exp(Σ_{k=1..n} w_k^{t'} · F_k)

where t denotes the class of the grid block, the classes being "contains lesion" and "does not contain lesion"; p_t is the predicted probability of class t; and w_k^t is the classification weight of the feature point corresponding to the k-th feature map.
S412, according to the predicted probability for each grid block and its actually annotated lesion category, compute the loss function of the current training iteration; stop training once the value of the loss function has converged, obtaining the trained neural network model.
The focal component of the loss function used is as follows:

FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t), with p_t = p if y = 1, and p_t = 1 - p otherwise

where p denotes the model prediction output by the classification layer that the block contains a lesion; y denotes the lesion label value of the grid block; t denotes the class of the grid block, the classes being "contains lesion" and "does not contain lesion"; p_t denotes the converted prediction value for class t; γ denotes the focusing parameter; and α_t denotes the weight of class t. γ and α_t can be set by a technician from empirical values or determined through repeated experiments.
A lesion label value of "0" indicates the annotated lesion category is "does not contain lesion"; a lesion label value of "1" indicates the annotated lesion category is "contains lesion".
S413, use the trained neural network model to predict the lesion category of each grid block in a fundus image.
A fundus image recognition method as shown in Fig. 4E comprises:
S421, obtain the fundus image to be predicted.
S422, normalize the size of the prediction fundus image according to the target size.
S423, perform edge detection on the size-normalized prediction fundus image, and filter out the irrelevant information at the fundus edge.
S424, apply contrast-limited adaptive histogram equalization to the green channel of the prediction fundus image after the irrelevant information has been filtered out, obtaining the target fundus image.
S425, divide the target fundus image with an equal-size grid to obtain the original prediction grid blocks.
S426, in the target fundus image, take each original prediction grid block as the central region and extend its side length by a factor of 2 to obtain the enlarged prediction grid block.
S427, in the target fundus image, take each original prediction grid block as the central region and reduce its side length by 1/2 to obtain the reduced prediction grid block.
S428, input the original, enlarged and reduced prediction grid blocks into the trained neural network model to obtain the feature value of each pixel of each prediction grid block.
S429, calculate the class activation value of each prediction grid block from the classification weights, in the classification layer of the trained neural network model, of the feature points corresponding to the feature maps at the different scales.
Specifically, the class activation value of each prediction grid block in the target fundus image is calculated according to the following formula:

M_t(x, y) = Σ_{k=1..n} w_k^t · f_k(x, y)

where f_k(x, y) is the feature value of the feature point at position (x, y) in the k-th feature map, w_k^t is the classification weight in the classification layer of the feature point corresponding to the k-th feature map, and M_t(x, y) is the class activation value corresponding to the feature point at position (x, y) in the prediction grid block.
S430, stitch the class activation values of the prediction grid blocks together according to the positions of the pixels in the prediction fundus image to form a class activation map.
See the exemplary class activation map in Fig. 4F.
S431, apply smoothing filtering to the class activation map.
S432, apply iterated thresholding to the smoothed class activation map to obtain the lesion segmentation result.
S433, extract contours from the class activation map after lesion segmentation, and display the extracted contour information superimposed on the target fundus image.
See Fig. 4G for the final display image obtained after superimposing the extracted contour information on the target fundus image.
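The iterated thresholding of S432 is not spelled out in the patent; one common scheme consistent with the description is an isodata-style iteration, sketched here as an assumption:

```python
import numpy as np

def segment_cam(cam, max_iter=50):
    """Iterated thresholding of a (smoothed) class activation map.

    Isodata-style iteration: start from the global mean, then repeatedly
    move the threshold to the midpoint of the foreground and background
    means until it stabilises. Returns a boolean lesion mask.
    """
    t = cam.mean()
    for _ in range(max_iter):
        fg, bg = cam[cam > t], cam[cam <= t]
        if fg.size == 0 or bg.size == 0:
            break
        new_t = 0.5 * (fg.mean() + bg.mean())
        if abs(new_t - t) < 1e-6:
            break
        t = new_t
    return cam > t
```

Contour extraction and overlay (S433) would then run on the boolean mask, e.g. with a marching-squares routine from an image-processing library.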
Embodiment five
Fig. 5 is a structural diagram of a fundus image recognition device in embodiment five of the present invention. The embodiment of the present invention is applicable to recognizing the lesion state of a fundus image. The device is implemented in software and/or hardware and is specifically configured in an electronic device, which may be a mobile or fixed terminal with a certain data processing capability, or a server.
A fundus image recognition device as shown in Fig. 5 comprises: a to-be-identified grid partitioning module 510, a grid lesion state determination module 520 and an image lesion state determination module 530.
The to-be-identified grid partitioning module 510 obtains the fundus image to be identified and performs grid partitioning on it to form multiple grid blocks to be identified.
The grid lesion state determination module 520 inputs each grid block to be identified of the fundus image into the pre-trained neural network model to determine the lesion state of that grid block.
The image lesion state determination module 530 determines the lesion state of the fundus image to be identified according to the lesion states and the positions of the grid blocks in the fundus image.
In the embodiment of the present invention, the to-be-identified grid partitioning module obtains the fundus image to be identified and partitions it into multiple grid blocks; the grid lesion state determination module inputs each grid block of the fundus image into the pre-trained neural network model to determine its lesion state; and the image lesion state determination module determines the lesion state of the fundus image from the lesion states and positions of the grid blocks. By partitioning the fundus image into a grid and using the blocks after partitioning as the input of the neural network model, the above technical solution reduces the number of model parameters and thus the computational load of training; performing training on the blocks rather than on whole fundus images also reduces the number of fundus image training samples required during model training. In addition, dividing a complex fundus image into different grid blocks, in the spirit of breaking the whole into parts, focuses attention on the local detail of the image and thereby ensures the accuracy of lesion recognition in the fundus image to be identified. Furthermore, combining the position information of the grid blocks in the fundus image, both the specific location of each block and its position relative to adjacent blocks, assists the recognition of multiple lesion types.
Further, the device also includes a to-be-identified grid block expansion module, specifically configured to: before each grid block to be identified of the fundus image is input into the pre-trained neural network model, take each such block as an original grid block to be identified and obtain, in the fundus image to be identified, at least one adjusted grid block to be identified for it, where the adjusted grid block contains at least a partial region of the original grid block; and input each adjusted grid block together with its corresponding original grid block into the neural network model to recognize the lesion state of the original grid block.
Further, the device also includes a neural network model training module for training the neural network model, which specifically includes: a training sample acquisition unit for obtaining lesion grid blocks and normal grid blocks, where a lesion grid block is a grid block of a lesion fundus image annotated as containing a lesion; and a training unit for inputting the lesion grid blocks and normal grid blocks into the neural network model for training.
Further, the lesion grid blocks include original grid blocks and adjusted grid blocks, where an adjusted grid block is a partial region extracted from the lesion fundus image that contains at least part of the original grid block.
Further, when obtaining the lesion grid blocks, the training sample acquisition unit is specifically configured to: obtain the original lesion fundus image and perform grid partitioning on it to obtain multiple original grid blocks; obtain the lesion state annotation results for the original grid blocks, the annotation results being "contains lesion" or "does not contain lesion"; and obtain, in the original lesion fundus image, at least one adjusted grid block of each lesion grid block, also serving as a lesion grid block.
Further, when performing the step of obtaining at least one adjusted grid block of a lesion grid block in the original lesion fundus image, the training sample acquisition unit is specifically configured to: in the original lesion fundus image, take the lesion grid block as the central region and extend its side length by a set multiple to obtain an enlarged grid block as an adjusted grid block; and/or, in the original lesion fundus image, take the lesion grid block as the central region and reduce its side length by a set ratio to obtain a reduced grid block as an adjusted grid block.
Further, when obtaining the normal grid blocks, the training sample acquisition unit is specifically configured to perform grid partitioning on the fundus image of a normal person to obtain multiple normal grid blocks.
Further, the neural network model includes a convolutional layer, a pooling layer and a classification layer, where the convolutional layer includes separate convolution channels for processing the original grid block and each adjusted grid block; each grid block undergoes at least one convolution to form multiple feature maps of identical matrix size, and the feature maps output by the convolution channels are concatenated.
Further, the pooling layer uses global average pooling, the classification layer is a binary classification module, and the objective function for training the neural network model is constructed from the focal loss objective function and the Jaccard similarity coefficient loss objective function.
Further, when model training is carried out, the objective function of the neural network model is constructed according to the following formula:

FL(p_t) = -α_t (1 - p_t)^γ log(p_t)

wherein p represents the model prediction value, output by the classification layer, that the block contains a lesion;
y represents the lesion annotation value of the grid block;
t represents the class of the grid block, the classes being: containing a lesion and not containing a lesion;
p_t represents the converted prediction value for class t (p_t = p when t is the lesion-containing class, and p_t = 1 - p otherwise);
γ represents the focusing parameter; and
α_t represents the weight of class t.
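With the parameters listed above (p, y, γ, α_t), the focal-loss term can be sketched as below. This covers only the focal-loss part of the objective, not the Jaccard similarity coefficient loss the patent also combines in; the function name and default hyperparameters are illustrative, not from the patent.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss. p is the predicted lesion probability, y the
    0/1 lesion label. p_t is the converted prediction for the true
    class; hard examples (small p_t) are up-weighted by the (1-p_t)^gamma
    factor, and alpha_t weights the two classes."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

With γ = 0 and α = 0.5 the loss reduces to half the ordinary cross-entropy, which is a quick sanity check on the implementation.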
Further, the image lesion state determining module 530 comprises:
a feature value acquiring unit, configured to obtain the feature values of the plurality of feature maps of the grid block to be recognized that are output by the convolutional layer, before the pooling layer, in the neural network model;
a class activation map generation unit, configured to calculate, using the classification weights in the classification layer of the neural network model and the feature values of the plurality of feature maps of the grid block to be recognized, according to the following formula, the class activation value of each grid block to be recognized, forming a class activation map:

M_t(x, y) = Σ_{k=1}^{n} w_k^t · f_k(x, y)

wherein f_k(x, y) is the feature value at position (x, y) in the k-th feature map, w_k^t is the classification weight in the classification layer corresponding to the k-th feature map, and M_t(x, y) is the class activation value at position (x, y) in the class activation map; k = 1, 2, ..., n, where n is the total number of concatenated feature maps;
a lesion pattern acquiring unit, configured to map, according to the positional correspondence between the class activation map and the grid block to be recognized and the position of the grid block to be recognized in the fundus image to be recognized, the feature points of the class activation map into the fundus image to be recognized, to obtain the lesion pattern in the fundus image to be recognized.
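The class activation computation above is the standard class activation mapping (CAM) formulation: a weighted sum of the final convolutional feature maps, using the classification-layer weights of the target class. A minimal numpy sketch under that assumption (shapes and names illustrative):

```python
import numpy as np

def class_activation_map(feature_maps, weights):
    """CAM: feature_maps has shape (n, H, W); weights has shape (n,)
    and holds the classification weight w_k for one class.
    Returns M(x, y) = sum_k w_k * f_k(x, y), shape (H, W)."""
    return np.tensordot(weights, feature_maps, axes=1)

fmaps = np.random.rand(8, 16, 16)   # n = 8 concatenated feature maps
w = np.random.rand(8)               # lesion-class weights
cam = class_activation_map(fmaps, w)
```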
Further, the image lesion state determining module 530 also includes a class activation map processing unit, configured to:
after the class activation map is formed, perform smoothing filtering on the class activation map, and replace the class activation map with the filtered image; and/or
after the class activation map is formed, perform contour extraction on the class activation map, and replace the class activation map with the image obtained after contour extraction.
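As an illustration of the smoothing step only (the patent does not fix a particular filter; a Gaussian filter would serve equally well), a 3×3 mean filter over the class activation map might look like:

```python
import numpy as np

def smooth_cam(cam):
    """3x3 mean-filter smoothing of a class activation map, with edge
    replication so the output keeps the input shape; a minimal
    stand-in for whatever smoothing filter is actually chosen."""
    padded = np.pad(cam, 1, mode="edge")
    out = np.zeros_like(cam, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + cam.shape[0], dx:dx + cam.shape[1]]
    return out / 9.0
```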
Further, the classification layer is configured to predict, using the following formula, whether the original grid block and each adjusted grid block contain a lesion:

p_t = exp(Σ_{k=1}^{n} w_k^t F_k) / Σ_{t'} exp(Σ_{k=1}^{n} w_k^{t'} F_k)

wherein t represents the class of the grid block, the classes being: containing a lesion and not containing a lesion; p_t is the prediction probability for class t; w_k^t is the classification weight corresponding to the k-th feature map; F_k is the globally average-pooled value of the k-th feature map; and k = 1, 2, ..., n, where n is the total number of concatenated feature maps.
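Under the stated architecture (global average pooling feeding a two-class classification layer), the prediction step can be sketched as a softmax over pooled-feature scores. Names and shapes here are illustrative assumptions, not the patent's code:

```python
import numpy as np

def predict_lesion(feature_maps, weights):
    """Global-average-pool each feature map into F_k, score each class
    as S_t = sum_k w_kt * F_k, then softmax over the class scores.
    feature_maps: (n, H, W); weights: (n_classes, n)."""
    F = feature_maps.mean(axis=(1, 2))     # pooled values F_k, shape (n,)
    scores = weights @ F                   # per-class scores, (n_classes,)
    e = np.exp(scores - scores.max())      # numerically stable softmax
    return e / e.sum()

fmaps = np.random.rand(8, 16, 16)
W = np.random.rand(2, 8)   # two classes: lesion / no lesion
probs = predict_lesion(fmaps, W)
```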
Further, the device also includes a fundus image preprocessing module, configured to execute at least one of the following after the fundus image to be recognized is obtained and before grid partitioning is performed on the fundus image to be recognized:
scale the fundus image to be recognized to a set target size, and replace the fundus image to be recognized with the scaled fundus image;
filter out noise at the fundus edge of the fundus image to be recognized, and replace the fundus image to be recognized with the denoised fundus image;
select at least one color channel, perform histogram equalization on the fundus image to be recognized, and replace the fundus image to be recognized with the processed image.
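Of the three preprocessing options, the histogram equalization of a single color channel can be sketched as follows. This is a minimal numpy version; choosing the green channel is a common convention for fundus photographs (it typically has the highest vessel contrast), not something the patent mandates.

```python
import numpy as np

def equalize_channel(image, channel=1):
    """Histogram-equalize one color channel of a uint8 image,
    leaving the other channels untouched."""
    ch = image[..., channel]
    hist = np.bincount(ch.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Rescale the CDF so mapped values span the full 0-255 range.
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    out = image.copy()
    out[..., channel] = cdf.astype(np.uint8)[ch]
    return out

fundus = np.zeros((4, 4, 3), dtype=np.uint8)
fundus[..., 1] = (np.arange(16).reshape(4, 4) * 17).astype(np.uint8)
eq = equalize_channel(fundus)
```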
The fundus image recognition device provided by this embodiment of the present invention can execute the fundus image recognition method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing that method.
Embodiment six
Fig. 6 is a structural schematic diagram of an electronic device according to Embodiment Six of the present invention. Fig. 6 shows a block diagram of an example electronic device 612 suitable for implementing embodiments of the present invention. The electronic device 612 shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention. The electronic device may specifically be a terminal device or a server.
As shown in Fig. 6, the electronic device 612 takes the form of a general-purpose computing device. The components of the electronic device 612 may include, but are not limited to: one or more processors or processing units 616, a system memory 628, and a bus 618 connecting the different system components (including the system memory 628 and the processing unit 616).
The bus 618 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 612 typically includes a variety of computer-system-readable media. These media can be any usable media accessible by the electronic device 612, including volatile and non-volatile media, and removable and non-removable media.
The system memory 628 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 630 and/or cache memory 632. The electronic device 612 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 634 can be used to read and write a non-removable, non-volatile magnetic medium (not shown in Fig. 6, commonly referred to as a "hard disk drive"). Although not shown in Fig. 6, a magnetic disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading and writing a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM, or other optical media) may also be provided. In these cases, each drive may be connected to the bus 618 through one or more data media interfaces. The memory 628 may include at least one program product having a set of (for example, at least one) program modules, these program modules being configured to perform the functions of the embodiments of the present invention.
A program/utility 640 having a set of (at least one) program modules 642 may be stored, for example, in the memory 628. Such program modules 642 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 642 generally perform the functions and/or methods of the embodiments described in the present invention.
The electronic device 612 may also communicate with one or more external devices 614 (such as a keyboard, pointing device, or display 624), with one or more devices that enable a user to interact with the electronic device 612, and/or with any device (such as a network card or modem) that enables the electronic device 612 to communicate with one or more other computing devices. Such communication can be carried out through an input/output (I/O) interface 622. Moreover, the electronic device 612 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 620. As shown, the network adapter 620 communicates with the other modules of the electronic device 612 through the bus 618. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the electronic device 612, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 616, by running at least one of the programs stored in the system memory 628, executes various functional applications and data processing, for example implementing the fundus image recognition method provided by the embodiments of the present invention.
Embodiment seven
Embodiment Seven of the present invention provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the program implements the fundus image recognition method provided by any embodiment of the present invention, comprising: obtaining a fundus image to be recognized, and performing grid partitioning on the fundus image to be recognized to form a plurality of grid blocks to be recognized; inputting each grid block to be recognized of the fundus image to be recognized into a pre-trained neural network model, respectively, to determine the lesion state of the grid block to be recognized; and determining the lesion state of the fundus image to be recognized according to the lesion states and the positions of the grid blocks to be recognized in the fundus image to be recognized.
The computer storage medium of the embodiments of the present invention may use any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wireline, optical cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described herein; various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments; without departing from the inventive concept, it may also include many other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.
Claims (17)
1. A fundus image recognition method, characterized by comprising:
obtaining a fundus image to be recognized, and performing grid partitioning on the fundus image to be recognized to form a plurality of grid blocks to be recognized;
inputting each grid block to be recognized of the fundus image to be recognized into a pre-trained neural network model, respectively, to determine the lesion state of the grid block to be recognized;
determining the lesion state of the fundus image to be recognized according to the lesion states and the positions of the grid blocks to be recognized in the fundus image to be recognized.
2. The method according to claim 1, characterized in that, before inputting each grid block to be recognized of the fundus image to be recognized into the pre-trained neural network model, the method further comprises:
taking each grid block to be recognized as an original grid block to be recognized, and obtaining in the fundus image to be recognized at least one adjusted grid block to be recognized of the original grid block to be recognized, wherein the adjusted grid block to be recognized includes at least a partial region of the original grid block to be recognized;
inputting each adjusted grid block to be recognized, together with the corresponding original grid block to be recognized, into the neural network model to perform lesion state recognition of the original grid block to be recognized.
3. The method according to claim 1 or 2, characterized in that the method further comprises a training stage of the neural network model, the training stage comprising:
obtaining lesion grid blocks and normal grid blocks, wherein a lesion grid block is a grid block annotated in a lesion fundus image as containing a lesion;
inputting the lesion grid blocks and the normal grid blocks into the neural network model for training.
4. The method according to claim 3, characterized in that the lesion grid blocks include original grid blocks and adjusted grid blocks, wherein an adjusted grid block is extracted from the lesion fundus image and includes at least a partial region of the original grid block.
5. The method according to claim 4, characterized in that obtaining lesion grid blocks comprises:
obtaining an original lesion fundus image and performing grid partitioning on it to obtain a plurality of original grid blocks;
obtaining lesion state annotation results for the original grid blocks, wherein the lesion state annotation results include: containing a lesion and not containing a lesion;
obtaining in the original lesion fundus image at least one adjusted grid block of the lesion grid block, as a lesion grid block.
6. The method according to claim 5, characterized in that obtaining in the original lesion fundus image at least one adjusted grid block of the lesion grid block comprises:
in the original lesion fundus image, taking the lesion grid block as the central region and extending the side length of the lesion grid block by a set multiple, obtaining an expanded grid block as an adjusted grid block; and/or
in the original lesion fundus image, taking the lesion grid block as the central region and reducing the side length of the lesion grid block by a set ratio, obtaining a shrunken grid block as an adjusted grid block.
7. The method according to claim 3, characterized in that obtaining normal grid blocks comprises:
performing grid partitioning on fundus images of normal subjects to obtain a plurality of normal grid blocks.
8. The method according to claim 4, characterized in that the neural network model includes a convolutional layer, a pooling layer, and a classification layer; wherein the convolutional layer includes convolutional channels respectively used to process the original grid block and each adjusted grid block, each grid block undergoes at least one convolution operation to form a plurality of feature maps of identical matrix size, and the feature maps output by the convolutional channels are concatenated.
9. The method according to claim 8, characterized in that the pooling layer uses global average pooling, the classification layer is a binary classification module, and the objective function of the neural network model is constructed from a focal-loss objective function and a Jaccard similarity coefficient loss objective function.
10. The method according to claim 9, characterized in that the objective function is constructed using the following formula:

FL(p_t) = -α_t (1 - p_t)^γ log(p_t)

wherein p represents the model prediction value, output by the classification layer, that the block contains a lesion;
y represents the lesion annotation value of the grid block;
t represents the class of the grid block, the classes being: containing a lesion and not containing a lesion;
p_t represents the converted prediction value for class t (p_t = p when t is the lesion-containing class, and p_t = 1 - p otherwise);
γ represents the focusing parameter; and
α_t represents the weight of class t.
11. The method according to claim 8, characterized in that determining the lesion state of the fundus image according to the lesion states and the positions of the grid blocks to be recognized in the fundus image to be recognized comprises:
obtaining the feature values of the plurality of feature maps of the grid block to be recognized output by the convolutional layer, before the pooling layer, in the neural network model;
calculating, using the classification weights in the classification layer of the neural network model and the feature values of the plurality of feature maps of the grid block to be recognized, according to the following formula, the class activation value of each grid block to be recognized, forming a class activation map:

M_t(x, y) = Σ_{k=1}^{n} w_k^t · f_k(x, y)

wherein f_k(x, y) is the feature value at position (x, y) in the k-th feature map, w_k^t is the classification weight in the classification layer corresponding to the k-th feature map, and M_t(x, y) is the class activation value at position (x, y) in the class activation map; k = 1, 2, ..., n, where n is the total number of concatenated feature maps;
mapping, according to the positional correspondence between the class activation map and the grid block to be recognized and the position of the grid block to be recognized in the fundus image to be recognized, the feature points of the class activation map into the fundus image to be recognized, to obtain the lesion pattern in the fundus image to be recognized.
12. The method according to claim 11, characterized in that, after the class activation map is formed, the method further comprises:
performing smoothing filtering on the class activation map, and replacing the class activation map with the filtered image; and/or
performing contour extraction on the class activation map, and replacing the class activation map with the image obtained after contour extraction.
13. The method according to claim 8, characterized in that the classification layer predicts, using the following formula, whether the original grid block and each adjusted grid block contain a lesion:

p_t = exp(Σ_{k=1}^{n} w_k^t F_k) / Σ_{t'} exp(Σ_{k=1}^{n} w_k^{t'} F_k)

wherein t represents the class of the grid block, the classes being: containing a lesion and not containing a lesion; p_t is the prediction probability for class t; w_k^t is the classification weight corresponding to the k-th feature map; F_k is the globally average-pooled value of the k-th feature map; and k = 1, 2, ..., n, where n is the total number of concatenated feature maps.
14. The method according to claim 1, characterized in that, after obtaining the fundus image to be recognized and before performing grid partitioning on the fundus image to be recognized, the method further comprises at least one of the following:
scaling the fundus image to be recognized to a set target size, and replacing the fundus image to be recognized with the scaled fundus image;
filtering out noise at the fundus edge of the fundus image to be recognized, and replacing the fundus image to be recognized with the denoised fundus image;
selecting at least one color channel, performing histogram equalization on the fundus image to be recognized, and replacing the fundus image to be recognized with the processed image.
15. A fundus image recognition device, characterized by comprising:
a grid partitioning processing module, configured to obtain a fundus image to be recognized and perform grid partitioning on the fundus image to be recognized to form a plurality of grid blocks to be recognized;
a grid lesion state determining module, configured to input each grid block to be recognized of the fundus image to be recognized into a pre-trained neural network model, respectively, to determine the lesion state of the grid block to be recognized;
an image lesion state determining module, configured to determine the lesion state of the fundus image to be recognized according to the lesion states and the positions of the grid blocks to be recognized in the fundus image to be recognized.
16. An electronic device, characterized in that the electronic device comprises:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the fundus image recognition method according to any one of claims 1-14.
17. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the fundus image recognition method according to any one of claims 1-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910167485.6A CN109829446A (en) | 2019-03-06 | 2019-03-06 | Eye fundus image recognition methods, device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109829446A true CN109829446A (en) | 2019-05-31 |
Family
ID=66865453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910167485.6A Pending CN109829446A (en) | 2019-03-06 | 2019-03-06 | Eye fundus image recognition methods, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109829446A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110348543A (en) * | 2019-06-10 | 2019-10-18 | 腾讯医疗健康(深圳)有限公司 | Eye fundus image recognition methods, device, computer equipment and storage medium |
CN110400289A (en) * | 2019-06-26 | 2019-11-01 | 平安科技(深圳)有限公司 | Eye fundus image recognition methods, device, equipment and storage medium |
CN110458217A (en) * | 2019-07-31 | 2019-11-15 | 腾讯医疗健康(深圳)有限公司 | Image-recognizing method and device, eye fundus image recognition methods and electronic equipment |
CN110599451A (en) * | 2019-08-05 | 2019-12-20 | 平安科技(深圳)有限公司 | Medical image focus detection positioning method, device, equipment and storage medium |
CN110705352A (en) * | 2019-08-29 | 2020-01-17 | 杭州晟元数据安全技术股份有限公司 | Fingerprint image detection method based on deep learning |
CN111127425A (en) * | 2019-12-23 | 2020-05-08 | 北京至真互联网技术有限公司 | Target detection positioning method and device based on retina fundus image |
CN111428737A (en) * | 2020-04-01 | 2020-07-17 | 南方科技大学 | Example retrieval method, device, server and storage medium for ophthalmologic image |
CN111523602A (en) * | 2020-04-27 | 2020-08-11 | 珠海上工医信科技有限公司 | Fundus image prediction method and device, storage medium, and electronic device |
CN111815573A (en) * | 2020-06-17 | 2020-10-23 | 科大智能物联技术有限公司 | Coupling outer wall detection method and system based on deep learning |
CN111986211A (en) * | 2020-08-14 | 2020-11-24 | 武汉大学 | Deep learning-based ophthalmic ultrasonic automatic screening method and system |
CN112446867A (en) * | 2020-11-25 | 2021-03-05 | 上海联影医疗科技股份有限公司 | Method, device and equipment for determining blood flow parameters and storage medium |
CN112634309A (en) * | 2020-11-30 | 2021-04-09 | 上海联影医疗科技股份有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113313705A (en) * | 2021-06-22 | 2021-08-27 | 上海杏脉信息科技有限公司 | Pathological image processing system, method and medium |
CN113768461A (en) * | 2021-09-14 | 2021-12-10 | 北京鹰瞳科技发展股份有限公司 | Fundus image analysis method and system and electronic equipment |
CN114882313A (en) * | 2022-05-17 | 2022-08-09 | 阿波罗智能技术(北京)有限公司 | Method and device for generating image annotation information, electronic equipment and storage medium |
CN110458217B (en) * | 2019-07-31 | 2024-04-19 | 腾讯医疗健康(深圳)有限公司 | Image recognition method and device, fundus image recognition method and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170112372A1 (en) * | 2015-10-23 | 2017-04-27 | International Business Machines Corporation | Automatically detecting eye type in retinal fundus images |
CN108470359A (en) * | 2018-02-11 | 2018-08-31 | 艾视医疗科技成都有限公司 | A kind of diabetic retinal eye fundus image lesion detection method |
CN108961296A (en) * | 2018-07-25 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Eye fundus image dividing method, device, storage medium and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109829446A (en) | Fundus image recognition method, apparatus, electronic device and storage medium | |
CN110232383B (en) | Lesion image recognition method and system based on a deep learning model | |
CN110033456B (en) | Medical image processing method, device, equipment and system | |
CN110197493A (en) | Fundus image blood vessel segmentation method | |
CN109858540B (en) | Medical image recognition system and method based on multi-mode fusion | |
CN110491480A (en) | Medical image processing method, apparatus, electromedical device and storage medium | |
CN108230294B (en) | Image detection method, image detection device, electronic equipment and storage medium | |
CN107886503A (en) | Digestive tract anatomical position recognition method and apparatus | |
CN108765392B (en) | Digestive tract endoscope lesion detection and identification method based on sliding window | |
CN109902717A (en) | Automatic lesion identification method, apparatus and computer-readable storage medium | |
CN109635846A (en) | Multi-class medical image determination method and system | |
CN112017185B (en) | Lesion segmentation method, apparatus and storage medium | |
CN110826576B (en) | Cervical lesion prediction system based on multi-mode feature level fusion | |
CN110428421A (en) | Macular image region segmentation method and apparatus | |
CN106650794A (en) | Method and system for removing highlights from images affected by specular reflection on object surfaces | |
CN109583364A (en) | Image recognition method and device | |
US20230022921A1 (en) | System and method for analyzing corneal lesion using anterior ocular segment image, and computer-readable recording medium | |
CN110786824A (en) | Coarsely annotated fundus hemorrhage lesion detection method and system based on a bounding-box correction network | |
CN102567734A (en) | Ratio-based retinal thin blood vessel segmentation method | |
WO2019102844A1 (en) | Classification device, classification method, program, and information recording medium | |
Miao et al. | Classification of Diabetic Retinopathy Based on Multiscale Hybrid Attention Mechanism and Residual Algorithm | |
CN111724894A (en) | Data acquisition method, device, terminal and storage medium | |
Reza et al. | Automatic detection of optic disc in fundus images by curve operator | |
CN110503636A (en) | Parameter adjustment method, lesion prediction method, parameter adjustment apparatus and electronic device | |
CN115359548A (en) | Handheld intelligent pupil detection device and detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-05-31 |