CN108596895B - Fundus image detection method, device and system based on machine learning - Google Patents


Info

Publication number
CN108596895B
Authority
CN
China
Prior art keywords
fundus image
detection
feature set
machine learning
fundus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810387302.7A
Other languages
Chinese (zh)
Other versions
CN108596895A (en)
Inventor
赵昕
熊健皓
李舒磊
马永培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN201810387302.7A
Publication of CN108596895A
Priority to US16/623,202 (published as US11501428B2)
Priority to PCT/CN2019/084207 (published as WO2019206208A1)
Application granted
Publication of CN108596895B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/031 Recognition of patterns in medical or anatomical images of internal organs

Abstract

The invention discloses a method, a device, and a system for detecting fundus images based on machine learning. The method comprises: acquiring a fundus image to be detected; performing first feature set detection on the whole area of the fundus image; and, if the first feature set is not present in the fundus image, performing second feature set detection on specific regions of the image, the first feature set having greater saliency than the second feature set. Fundus images containing many types of features are first coarsely classified: obvious features are identified first, fine region-by-region detection is then carried out on images without obvious features, and each detection result is output independently. This reduces the number of detections, greatly improves recognition efficiency, and allows both obvious and subtle features to be detected accurately.

Description

Fundus image detection method, device and system based on machine learning
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and a system for detecting fundus images based on machine learning.
Background
In recent years, machine learning has been widely used in the medical field; in particular, machine learning techniques typified by deep learning have attracted attention in medical imaging. In fundus image detection, for example, deep learning can detect a given feature of a fundus image quite accurately: a deep learning model is trained on a large number of samples of the macular hole feature, and the trained model is then used to detect macular holes in fundus images. Such techniques, however, are usually limited to detecting a single feature or a small number of associated features, and may not detect other features accurately. Because the eye is a very delicate and complex organ, fundus images contain many kinds of features, and the differences between individual features are often large. With existing detection techniques, a single model covering all features is difficult to converge, so its detection results are not accurate enough; alternatively, training one model per feature requires a large number of samples, and when the number of features is large the amount of computation rises sharply and detection efficiency falls.
Therefore, how to rapidly and accurately detect the fundus image becomes a technical problem to be solved urgently.
Disclosure of Invention
The invention aims to solve the technical problem of how to quickly and accurately detect the fundus image.
To this end, according to a first aspect, an example of the present invention provides a fundus image detection method based on machine learning, including: acquiring a fundus image to be detected; carrying out first feature set detection on the whole region of the fundus image; if the first feature set does not exist in the fundus image, performing second feature set detection on a specific area in the fundus image; the first feature set has a greater significance than the second feature set.
Optionally, the specific area includes: at least one of a disc region, a macular region, a vascular region, and a retinal region.
Optionally, the first feature set comprises at least one first class sub-feature; the second set of features includes at least one second class sub-feature in a particular region.
Optionally, between acquiring the fundus image to be detected and determining whether the first feature set exists in the fundus image, the method further comprises: performing quality detection on the fundus image to screen it.
Optionally, the quality detecting the fundus image comprises: and performing any one or any combination of stain/bright spot detection, exposure detection, definition detection, light leakage detection and local shadow detection on the fundus image.
Optionally, after the second feature set detection is performed on different fundus regions of the fundus image, the method further includes: performing third feature detection on fundus images that do not contain the second feature set; the significance of the third feature is less than that of the second feature set.
Optionally, at least one of the first set of features, the second set of features, or the quality is detected by machine learning.
According to a second aspect, an embodiment of the present invention provides a fundus image detection apparatus based on machine learning, including: an acquisition module for acquiring a fundus image to be detected; a first detection module for performing first feature set detection on the whole region of the fundus image; and a second detection module for performing second feature set detection on a specific region of the fundus image when the first detection module detects that the first feature set is not present; the first feature set has greater significance than the second feature set.
Optionally, the specific area includes: at least one of a disc region, a macular region, a vascular region, and a retinal region.
Optionally, the first feature set and/or the second feature set is a multi-feature set.
Optionally, the fundus image detection apparatus further includes: and the third detection module is used for performing quality detection on the fundus images so as to screen the fundus images.
Optionally, the third detection module comprises: and the detection unit is used for carrying out any one or any combination of stain/bright spot detection, exposure detection, definition detection, light leakage detection and local shadow detection on the fundus image.
Optionally, the fundus image detection apparatus further includes: the fourth detection module is used for carrying out third feature detection on the fundus images not containing the second feature set; the significance of the third feature is less than the significance of the second feature set.
Optionally, at least one of the first set of features, the second set of features, or the quality of the fundus image is detected by machine learning.
According to a third aspect, an embodiment of the present invention provides an electronic device, including: a controller comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image detection method described in any one of the first aspects above.
According to a fourth aspect, an embodiment of the present invention provides a machine learning-based fundus image detection system including: the image acquisition device is used for acquiring fundus images; the electronic device according to the third aspect is arranged in the cloud server and is in communication with the image acquisition device; and an output device, in communication with the electronic device, for outputting a result of the fundus image detection.
According to the fundus image detection method, device, and system based on machine learning provided by the embodiments of the invention, detection of the first, more salient feature set is first performed on the whole region of the fundus image to be detected as a preliminary screen; detection of the second, less salient feature set is then performed on specific regions of images that do not contain the first feature set. Fundus images with many feature types can thus be coarsely classified first: the more obvious features are identified first, fine region-by-region detection is performed on images without obvious features, and the detection results are output stage by stage in series. This reduces the number of detections, greatly improves recognition efficiency, and allows obvious and subtle features to be detected accurately at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 shows a flowchart of a fundus image detection method based on machine learning of the present embodiment;
fig. 2 shows a flowchart of another fundus image detection method based on machine learning of the present embodiment;
fig. 3 shows a schematic diagram of a fundus image detection apparatus based on machine learning of the present embodiment;
FIG. 4 shows a schematic diagram of an electronic device of the present embodiment;
fig. 5 shows a schematic diagram of the fundus image detection system based on machine learning of the present embodiment.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a fundus image detection method based on machine learning, which comprises the following steps as shown in figure 1:
S11, acquiring a fundus image to be detected.
S12, performing first feature set detection on the whole region of the fundus image. In this embodiment, the first feature set may consist of features whose saliency exceeds a preset saliency threshold, where saliency may be measured by color difference, contrast, gray scale, or the size of the occupied area. For example, a region of the fundus image whose color differs greatly from that of a normal fundus and whose area exceeds a preset proportion of the whole image may be regarded as belonging to the first feature set; features such as a large area of abnormal tissue or structure of the fundus, or a large macula in the fundus, may serve as examples. The first feature set may be a multi-feature set: each such feature (a larger or more prominent abnormal tissue or structure, a larger macula, and so on) is a sub-feature, and the first feature set is deemed present if any one of its sub-features is detected. In a specific embodiment, first feature set detection may be performed by machine learning: a model is trained on a large number of fundus image samples bearing sub-features of the first feature set, and the fundus image to be detected is then fed into this model for detection. If the first feature set exists in the fundus image, the process proceeds to step S13; if it does not, the process advances to step S14. To reduce the amount of computation, the detection result may be limited to two labels: first feature set present, and first feature set absent.
S13, confirming the category of the first feature set. In this embodiment, the first feature set may be a set of multiple sub-features; to improve detection accuracy, the detection result may be multi-label, with each label corresponding to a sub-feature. Categories of the first feature set may include features such as a large area of abnormal tissue or structure in the fundus or a large macula, and the first feature set may also cover fundus structures such as the optic disc, macula, and blood vessels. The category of the first feature set is identified from the attributes of the individual sub-features.
S14, performing second feature set detection on a specific region of the fundus image. In this embodiment, the significance of the first feature set is greater than that of the second: the second feature set may consist of detail features whose color difference, contrast, gray scale, or area is smaller than that of the first feature set. The second feature set may also comprise features belonging to a specific region. For ease of explanation, the specific region may be at least one of the optic disc region, the macular region, the blood vessel region, and the retinal region. If the specific region is the disc region, the second feature set may include sub-features such as disc shape abnormality, disc color abnormality, and optic nerve abnormality; if it is the macular region, structural and/or shape abnormalities of the macula; if it is the blood vessel region, at least one of abnormal vessel color, abnormal vessel direction, abnormal central vein shape, and abnormal branch vein shape; if it is the retinal region, fine outliers such as abnormally colored points, irregularly shaped points, or a reduction in the retinal region. Those skilled in the art will appreciate that the second feature set may also include other detail features of the fundus, such as vascular lines.
The specific regions may also include other regions, such as one or more designated areas. In this embodiment, the second feature sets of multiple specific regions may be detected in parallel, with the detection result of each region output independently. To reduce the amount of computation, the detection result may be limited to two labels (second feature set present, second feature set absent); to improve detection accuracy, it may instead be multi-label, covering the absence of the second feature set and the presence of each of its sub-features.
In this way, the first, highly salient feature set is detected over the whole area of the fundus image to be detected as a preliminary screen, and the second, less salient feature set is then detected in specific regions of images that do not contain the first feature set. Fundus images with many feature types are coarsely classified first, obvious features are identified first, fine region-by-region detection follows in images without obvious features, detection proceeds serially stage by stage, and the detection results are output independently.
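The serial, coarse-to-fine flow described above can be sketched in Python. Every function name, region label, and result format below is an illustrative assumption standing in for the patent's trained models, not its actual implementation:

```python
# Illustrative sketch of the coarse-to-fine serial detection flow: first
# feature set over the whole image; only if absent, second feature set per
# specific region. Detector callables are hypothetical stand-ins.

SPECIFIC_REGIONS = ["disc", "macula", "vessels", "retina"]  # example regions

def detect_fundus_image(image, first_set_model, region_models):
    """Run first-feature-set detection over the whole image; only if it is
    absent, run second-feature-set detection region by region."""
    first_result = first_set_model(image)  # dict of sub-feature -> bool
    if any(first_result.values()):
        # Obvious features found: report their categories and stop (S13).
        return {"first_feature_set": [k for k, v in first_result.items() if v]}
    # No obvious features: fine, region-by-region detection (S14), with the
    # result of each specific region output independently.
    report = {}
    for region in SPECIFIC_REGIONS:
        report[region] = region_models[region](image)
    return {"second_feature_set": report}

# Toy stand-in models for demonstration only.
first_model = lambda img: {"large_abnormal_area": img.get("big_lesion", False)}
region_models = {r: (lambda img, r=r: {"abnormal": False})
                 for r in SPECIFIC_REGIONS}

print(detect_fundus_image({"big_lesion": True}, first_model, region_models))
print(detect_fundus_image({}, first_model, region_models))
```

The gating step is what keeps the number of detections down: the per-region detectors run only for images that pass the coarse screen.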
Because the quality of fundus photographs varies greatly between capture devices and operators, images are often overexposed, too dark, or blurred, which greatly increases the difficulty of machine learning judgment. As an optional embodiment, detecting image quality and screening out images of acceptable quality can further ensure the accuracy of image detection. In a specific embodiment, the fundus image may be subjected to any one or any combination of stain/bright spot detection, exposure detection, sharpness detection, light leakage detection, and local shadow detection.
Specifically, for stain/bright spot detection, a weighted average of several images to be detected is taken to obtain an average image, and the average image is checked for pixels outside a preset brightness range; when such pixels exist, a stain or bright spot is confirmed in the image to be detected. For light leakage detection, the image to be detected is binarized to obtain a preset region, a mask is generated from the boundary of that region and fused with the image to be detected, the average color brightness of the fused image is calculated and compared against a preset brightness threshold, and the degree of light leakage is confirmed from the comparison result; when the degree of light leakage exceeds a preset value, the fundus image is confirmed to leak light. For local shadow detection, the histogram of any color channel of the image to be detected is computed, the number of pixels below a preset pixel value is counted, and that count is compared against a preset number to confirm whether a local shadow exists. Sharpness detection comprises: extracting the high-frequency components of the image to be detected, calculating the amount of information in the high-frequency components, and confirming the sharpness of the image from that amount.
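Two of the checks described above (stain/bright-spot detection on an averaged image, and local shadow detection via a channel histogram) can be sketched as follows. The thresholds are illustrative assumptions; and since the translated text leaves the direction of the shadow comparison ambiguous, this sketch assumes a shadow is flagged when enough dark pixels are present:

```python
import numpy as np

def has_stain_or_bright_spot(images, lo=10, hi=245):
    """Average several captures pixel-wise; a stain or bright spot shows up
    as a pixel whose average brightness falls outside the preset range.
    `lo`/`hi` are illustrative, not values from the patent."""
    mean_img = np.mean(np.stack(images, axis=0), axis=0)
    return bool(np.any((mean_img < lo) | (mean_img > hi)))

def has_local_shadow(image, pixel_thresh=30, count_thresh=500):
    """Count pixels below a preset value in one color channel. Assumption:
    a local shadow is flagged when the dark-pixel count reaches the preset
    number (the translation's comparison direction is ambiguous)."""
    channel = image[..., 0] if image.ndim == 3 else image
    dark = int(np.sum(channel < pixel_thresh))
    return dark >= count_thresh

# Synthetic demonstration images.
clean = np.full((8, 8), 128.0)
spotted = clean.copy()
spotted[0, 0] = 255.0
print(has_stain_or_bright_spot([clean, clean]))      # no out-of-range pixel
print(has_stain_or_bright_spot([spotted, spotted]))  # averaged spot persists
```

Averaging several captures before thresholding is what distinguishes a stain on the lens (present in every frame) from transient noise in a single frame.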
For exposure detection, the image to be detected is converted to a gray image, the root mean square of the gray image histogram is computed, and the exposure of the image is confirmed from that value. When a fundus image has any of the above quality problems, the detection result may be affected and become inaccurate; in particular, when detecting sub-features of the second feature set, those sub-features are likely to be missed. Therefore, to ensure detection accuracy, images with the above quality problems may be eliminated in this embodiment.
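The exposure statistic just described (root mean square of the grayscale histogram) might look like this. The Rec. 601 gray weights and the 256-bin histogram are standard assumptions rather than values from the patent, and a real decision threshold would come from calibration:

```python
import numpy as np

def exposure_statistic(rgb):
    """Convert to gray, take the normalized 256-bin histogram, and return
    its root mean square. A well-exposed image spreads mass across many
    bins (small RMS); an over- or under-exposed image piles mass into a
    few bins (large RMS)."""
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    frac = hist / hist.sum()  # normalized histogram
    return float(np.sqrt(np.mean(frac ** 2)))

# Demonstration: a varied image versus a blown-out constant image.
flat = np.random.default_rng(0).uniform(0, 255, size=(64, 64, 3))
blown = np.full((64, 64, 3), 250.0)
print(exposure_statistic(flat) < exposure_statistic(blown))
```

Note that a constant image concentrates its entire histogram in one bin, so its RMS reaches the maximum 1/sqrt(256) for a 256-bin histogram.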
In practical applications, some features of a fundus image, especially weakly salient abnormal features, may lie outside the specific regions; detecting weakly salient features only within the specific regions could therefore miss them. To improve the comprehensiveness and accuracy of detection, an embodiment of the present invention further provides a fundus image detection method which, as shown in fig. 2, may include the following steps:
S21, acquiring a fundus image to be detected.
S22, performing first feature set detection on the whole region of the fundus image; see the description of first feature set detection in step S12 of the above embodiment. If the first feature set exists in the fundus image, the process proceeds to step S23; if it does not, the process advances to step S24.
S23, confirming the category of the first feature set; see the description of identifying the category of the first feature set in step S13 of the above embodiment.
S24, performing second feature set detection on a specific region of the fundus image; see the description of second feature set detection in step S14 of the above embodiment.
S25, performing third feature detection on fundus images that do not contain the second feature set. In an embodiment, after screening for the highly salient first feature set and for the less salient, region-specific second feature set, other features in other regions of the fundus image can still be detected; this third feature detection can likewise be performed by machine learning (see the description in the above embodiments). The third features may be even finer than those of the second feature set, for example fundus reflection points and scattered outliers. When a third feature is detected, the detection result is the result of that third feature.
In an alternative embodiment, third feature detection may be performed in parallel with second feature set detection: after first feature set detection, second feature set detection is applied to the specific regions while third feature detection is applied to the area outside the specific regions, or to the whole fundus image, so that the features of the image can be detected more accurately.
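The parallel arrangement described in this paragraph can be sketched with a thread pool; the detector callables below are hypothetical stubs standing in for trained models:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_in_parallel(image, second_detectors, third_detector):
    """Submit per-region second-feature-set detectors and the whole-image
    third-feature detector to a pool, then collect results independently."""
    with ThreadPoolExecutor() as pool:
        second_futures = {name: pool.submit(fn, image)
                          for name, fn in second_detectors.items()}
        third_future = pool.submit(third_detector, image)
        results = {name: f.result() for name, f in second_futures.items()}
        results["third_features"] = third_future.result()
    return results

# Hypothetical stub detectors for demonstration only.
stub = {"disc": lambda img: "no abnormality",
        "macula": lambda img: "no abnormality"}
print(detect_in_parallel("image", stub, lambda img: ["reflection_point"]))
```

In practice each detector would be a model inference call; running them concurrently matters because the region detectors are independent of one another and of the third-feature pass.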
In an alternative embodiment, when a feature set is detected by machine learning, each feature set, or each sub-feature within a set, can be handled by a convolutional neural network. The basic building block of the convolutional neural networks used in this embodiment is a convolutional layer followed by an activation function (ReLU) layer and a pooling layer: the convolutional layer screens for specific image features, the activation layer applies the ReLU function to process the screened features nonlinearly, and the pooling layer uses max pooling to extract the strongest response at each position.
The number of layers per network module varies from 15 to 100 in this embodiment, depending on the type of fundus feature to be detected. Specifically, the convolutional neural network may be implemented with the following structure: input layer-C1-BN1-R1-P1-C2-BN2-R2-P2-C3-BN3-R3-P3-C4-BN4-R4-P4-C5-BN5-R5-P5-FC1-FC2-SoftMax, where the input layer is an image of a given size, C denotes a convolutional layer (C1 through C5), BN a batch normalization layer (BN1 through BN5), R an activation layer (R1 through R5), and P a pooling layer (P1 through P5); FC1 and FC2 are fully connected layers, and SoftMax produces the output. The convolutional neural network used in this embodiment is not limited to the structure above; other neural network structures that satisfy this embodiment are also applicable.
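As a rough illustration of how such a stack shapes its hidden layers, the following sketch walks the spatial size of a feature map through five conv/BN/ReLU/pool blocks, assuming "same"-padded convolutions and 2×2 max pooling (both assumptions; the patent does not fix kernel sizes or strides). It also illustrates the saliency point made in this embodiment: a network for fine features keeps a larger maximal hidden layer than one for salient features:

```python
def hidden_sizes(input_size, n_blocks=5, pool=2):
    """Spatial size of each conv-block output in a C-BN-R-P x n stack,
    assuming 'same'-padded convolutions (size preserved) and a pooling
    layer that divides the size by `pool` after each block."""
    sizes = []
    s = input_size
    for _ in range(n_blocks):
        sizes.append(s)   # conv/BN/ReLU output keeps the spatial size
        s //= pool        # max-pool shrinks it for the next block
    return sizes

# A network aimed at low-saliency (fine) features keeps a large maximal
# hidden layer (>300x300 in the text); one aimed at high-saliency features
# can start smaller (<200x200). Input sizes here are illustrative.
print(hidden_sizes(512))  # fine-feature network: largest hidden layer 512
print(hidden_sizes(160))  # salient-feature network: largest hidden layer 160
```

The first conv output is the largest hidden layer, so the input-layer size and the pooling schedule jointly determine whether fine sub-features survive into the deeper layers.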
In this embodiment, since the saliency of the first feature set is greater than that of the second and third feature sets, the size of the network's hidden layers (the layers between the input and the output) can be varied according to feature saliency: features with large saliency may use smaller hidden layers, and features with small saliency larger ones. Accordingly, the maximum hidden layer of the convolutional networks for the less salient second and third feature sets is larger than the maximum hidden layer of the convolutional network for the first feature set.
Specifically, when processing the highly salient first feature set, the maximum hidden layer size of the network can be kept small, for example below 200 × 200, and features can still be extracted. For the less salient second or third feature sets, by contrast, the output of the largest hidden layer should be kept large, for example above 300 × 300, to ensure that fine sub-features of the fundus, such as small extravasation points, hemorrhage points, and other fine structural abnormalities, can be found and extracted. The output size of the maximum hidden layer is determined by the image input layer, the convolutional layers, and the pooling layers, and can be realized in various ways, which is not described here again.

An embodiment of the present invention provides a fundus image detection apparatus, as shown in fig. 3. The detection apparatus includes: an acquisition module 10, configured to acquire a fundus image to be detected; a first detection module 20, configured to perform first feature set detection on the entire region of the fundus image; and a second detection module 30, configured to perform second feature set detection on a specific region of the fundus image when the first detection module detects that the first feature set is not present; the first feature set has greater significance than the second feature set.
As an alternative embodiment, the specific area includes: at least one of a disc region, a macular region, a vascular region, and a retinal region.
As an alternative embodiment, the first feature set and/or the second feature set is a multi-feature set.
As an alternative embodiment, the fundus image detection apparatus further includes: a third detection module, configured to perform quality detection on the fundus image so as to screen fundus images.
As an alternative embodiment, the third detection module includes: a detection unit, configured to perform any one or any combination of stain/bright-spot detection, exposure detection, sharpness detection, light-leakage detection, and local-shadow detection on the fundus image.
As an alternative embodiment, the fundus image detection apparatus further includes: a fourth detection module, configured to perform third feature detection on a fundus image that does not contain the second feature set; the saliency of the third feature is less than that of the second feature set.
As an alternative embodiment, at least one of the first feature set, the second feature set, and the quality of the fundus image is detected by machine learning.
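Putting the modules above together, the cascaded flow of the detection apparatus can be sketched as follows. This is a minimal sketch only: the predicate functions are hypothetical stand-ins for the trained machine learning models, not part of the patented implementation:

```python
def detect(image, quality_ok, has_first, has_second, has_third):
    """Cascaded fundus image detection: quality screening first, then
    feature sets in decreasing order of saliency. Each predicate is a
    hypothetical stand-in for a trained machine learning model."""
    if not quality_ok(image):
        return "rejected: image quality too low"
    if has_first(image):          # whole-region detection (module 20)
        return "first feature set present"
    if has_second(image):         # specific-region detection (module 30)
        return "second feature set present"
    if has_third(image):          # least salient features (fourth module)
        return "third feature present"
    return "no features detected"

# Usage with toy predicates: the first feature set is absent, so the
# cascade falls through to the specific-region detection.
result = detect("fundus.jpg",
                quality_ok=lambda im: True,
                has_first=lambda im: False,
                has_second=lambda im: True,
                has_third=lambda im: False)
```

The design choice here mirrors the description: cheaper, high-saliency detection over the whole image runs first, and finer-grained region-specific detection is only invoked when the earlier stage finds nothing.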
An embodiment of the present invention further provides an electronic device, which may be a server or a terminal. As shown in fig. 4, the electronic device includes a controller, and the controller includes one or more processors 41 and a memory 42; one processor 41 is taken as an example in fig. 4.
The electronic device may further include: an input device 43 and an output device 44.
The processor 41, the memory 42, the input device 43 and the output device 44 may be connected by a bus or other means, and fig. 4 illustrates the connection by a bus as an example.
The processor 41 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 42, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules. The processor 41 executes various functional applications of the server and data processing by running the non-transitory software programs, instructions, and modules stored in the memory 42, that is, implements the fundus image detection method of the above-described method embodiment.
The memory 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to use of the processing device of the server, and the like. Further, the memory 42 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 42 may optionally include memory located remotely from the processor 41, which may be connected to the processor 41 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing device of the server. The output device 44 may include a display device such as a display screen.
One or more modules are stored in the memory 42 and, when executed by the one or more processors 41, perform the method as shown in fig. 1 or 2.
An embodiment of the present invention further provides a fundus image detection system based on machine learning. As shown in fig. 5, the system may include an image acquisition device 100 for acquiring fundus images. In this embodiment, there may be a plurality of image acquisition devices 100; specifically, an image acquisition device may be a fundus photographing device in a hospital, or a fundus photographing device of an individual user. The fundus detection system may further include a cloud server 200, in which an electronic device for executing the above-mentioned fundus image detection method is disposed; the electronic device communicates with the image acquisition device 100 in wireless or wired form. A fundus image captured by the image acquisition device 100 is uploaded to the cloud server 200, the electronic device executes the fundus image detection method to obtain a detection result, and the detection result may be output by an output device 300. Specifically, the output device 300 may be a display device, a printing device that prints the result in report form, or a terminal device of a user, such as a mobile phone, a tablet, or a personal computer.
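The data flow described above (acquisition device, cloud server, output device) can be sketched as follows. The class and method names are hypothetical stand-ins for illustration only and do not appear in the patent:

```python
class CloudServer:
    """Hypothetical stand-in for the cloud server 200, which runs the
    fundus image detection method on each uploaded image."""
    def detect(self, fundus_image):
        # A real implementation would run the cascaded detection here.
        return {"image": fundus_image, "result": "no features detected"}

class OutputDevice:
    """Hypothetical stand-in for the output device 300 (a display,
    a printer producing a report, or a user's terminal)."""
    def __init__(self):
        self.reports = []
    def output(self, detection_result):
        self.reports.append(detection_result)

def run_system(fundus_images, server, output_device):
    # Each captured fundus image is uploaded to the cloud server; the
    # detection result is forwarded to the output device.
    for image in fundus_images:
        output_device.output(server.detect(image))

server, device = CloudServer(), OutputDevice()
run_system(["left_eye.jpg", "right_eye.jpg"], server, device)
```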
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A fundus image detection method based on machine learning is characterized by comprising the following steps:
acquiring a fundus image to be detected;
performing first feature set detection on the whole area of the fundus image;
if a first feature set is not present in the fundus image, a second feature set detection is performed for a particular region in the fundus image to determine if the second feature set is present in the fundus image, wherein the first feature set is more salient than the second feature set.
2. A machine learning-based fundus image detecting method according to claim 1, wherein in the step of performing first feature set detection on the entire region of the fundus image, the first feature set detection is performed on the entire region of the fundus image using a first machine learning model; in the step of performing the second feature set detection for a specific region in the fundus image, different specific regions are detected using different second machine learning models, respectively.
3. A machine learning based fundus image detection method according to claim 2, wherein said first machine learning model and said second machine learning model perform a multi-label classification operation, the classification result of which is used to indicate whether a first feature and a second feature are included in said fundus image and a specific category of said first feature and said second feature.
4. A machine learning-based fundus image detection method according to claim 1, further comprising, after said performing a second set of feature detection for a specific region in said fundus image:
and performing third feature detection on the fundus images not containing the second feature set, wherein the significance of the third features is smaller than that of the second feature set.
5. A machine learning-based fundus image detection method according to claim 4, wherein said third feature set is a fundus lesion feature set.
6. The machine learning-based fundus image detecting method according to any one of claims 1 to 4, wherein the specific region includes:
at least one of a disc region, a macular region, a vascular region, and a retinal region.
7. A method of machine learning based detection of a fundus image according to any of claims 1 to 4 wherein the first and second feature sets are fundus lesion feature sets.
8. A fundus image detection apparatus based on machine learning, comprising:
the acquisition module is used for acquiring a fundus image to be detected;
the first detection module is used for carrying out first feature set detection on the whole fundus image region;
and a second detection module, configured to perform, when the first detection module detects that the first feature set does not exist in the fundus image, second feature set detection on a specific region in the fundus image to determine whether the second feature set exists in the fundus image, wherein the saliency of the first feature set is greater than that of the second feature set.
9. An electronic device, comprising: a controller comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the one processor to cause the at least one processor to perform the machine learning-based fundus image detection method of any of claims 1-7.
10. A fundus image detection system based on machine learning, comprising:
the image acquisition device is used for acquiring fundus images;
the electronic device of claim 9, disposed within a cloud server, in communication with the image capture device;
an output device, in communication with the electronic device, for outputting a result of fundus image detection.
CN201810387302.7A 2018-04-26 2018-04-26 Fundus image detection method, device and system based on machine learning Active CN108596895B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201810387302.7A CN108596895B (en) 2018-04-26 2018-04-26 Fundus image detection method, device and system based on machine learning
US16/623,202 US11501428B2 (en) 2018-04-26 2019-04-25 Method, apparatus and system for detecting fundus image based on machine learning
PCT/CN2019/084207 WO2019206208A1 (en) 2018-04-26 2019-04-25 Machine learning-based eye fundus image detection method, device, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810387302.7A CN108596895B (en) 2018-04-26 2018-04-26 Fundus image detection method, device and system based on machine learning

Publications (2)

Publication Number Publication Date
CN108596895A CN108596895A (en) 2018-09-28
CN108596895B true CN108596895B (en) 2020-07-28

Family

ID=63610441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810387302.7A Active CN108596895B (en) 2018-04-26 2018-04-26 Fundus image detection method, device and system based on machine learning

Country Status (3)

Country Link
US (1) US11501428B2 (en)
CN (1) CN108596895B (en)
WO (1) WO2019206208A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108577803B (en) * 2018-04-26 2020-09-01 上海鹰瞳医疗科技有限公司 Fundus image detection method, device and system based on machine learning
CN108596895B (en) * 2018-04-26 2020-07-28 上海鹰瞳医疗科技有限公司 Fundus image detection method, device and system based on machine learning
US20210219839A1 (en) * 2018-05-31 2021-07-22 Vuno, Inc. Method for classifying fundus image of subject and device using same
US11615208B2 (en) * 2018-07-06 2023-03-28 Capital One Services, Llc Systems and methods for synthetic data generation
CN110327013B (en) * 2019-05-21 2022-02-15 北京至真互联网技术有限公司 Fundus image detection method, device and equipment and storage medium
CN110335254B (en) * 2019-06-10 2021-07-27 北京至真互联网技术有限公司 Fundus image regionalization deep learning method, device and equipment and storage medium
CN112190227B (en) * 2020-10-14 2022-01-11 北京鹰瞳科技发展股份有限公司 Fundus camera and method for detecting use state thereof
CN113344894A (en) * 2021-06-23 2021-09-03 依未科技(北京)有限公司 Method and device for extracting characteristics of eyeground leopard streak spots and determining characteristic index
CN113570556A (en) * 2021-07-08 2021-10-29 北京大学第三医院(北京大学第三临床医学院) Method and device for grading eye dyeing image
CN114998353B (en) * 2022-08-05 2022-10-25 汕头大学·香港中文大学联合汕头国际眼科中心 System for automatically detecting vitreous opacity spot fluttering range
CN115908402B (en) * 2022-12-30 2023-10-03 胜科纳米(苏州)股份有限公司 Defect analysis method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722735A (en) * 2012-05-24 2012-10-10 西南交通大学 Endoscopic image lesion detection method based on fusion of global and local features
CN107146231A (en) * 2017-05-04 2017-09-08 季鑫 Retinal image bleeding area segmentation method and device and computing equipment

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8098907B2 (en) * 2005-07-01 2012-01-17 Siemens Corporation Method and system for local adaptive detection of microaneurysms in digital fundus images
WO2010071898A2 (en) * 2008-12-19 2010-06-24 The Johns Hopkins Univeristy A system and method for automated detection of age related macular degeneration and other retinal abnormalities
CA2817963A1 (en) * 2010-11-17 2012-05-24 Optovue, Inc. 3d retinal disruptions detection using optical coherence tomography
US9849034B2 (en) * 2011-11-07 2017-12-26 Alcon Research, Ltd. Retinal laser surgery
JP6143096B2 (en) * 2013-08-07 2017-06-07 ソニー株式会社 Fundus image processing apparatus and program, and fundus image photographing apparatus
EP3061063A4 (en) * 2013-10-22 2017-10-11 Eyenuk, Inc. Systems and methods for automated analysis of retinal images
CN103870838A (en) * 2014-03-05 2014-06-18 南京航空航天大学 Eye fundus image characteristics extraction method for diabetic retinopathy
US10219693B2 (en) * 2015-03-12 2019-03-05 Nidek Co., Ltd. Systems and methods for combined structure and function evaluation of retina
NZ773819A (en) * 2015-03-16 2022-07-01 Magic Leap Inc Methods and systems for diagnosing and treating health ailments
WO2017020045A1 (en) * 2015-07-30 2017-02-02 VisionQuest Biomedical LLC System and methods for malarial retinopathy screening
US10722115B2 (en) * 2015-08-20 2020-07-28 Ohio University Devices and methods for classifying diabetic and macular degeneration
CN105513077B (en) * 2015-12-11 2019-01-04 北京大恒图像视觉有限公司 A kind of system for diabetic retinopathy screening
CN106530295A (en) * 2016-11-07 2017-03-22 首都医科大学 Fundus image classification method and device of retinopathy
CN108172291B (en) * 2017-05-04 2020-01-07 深圳硅基智能科技有限公司 Diabetic retinopathy recognition system based on fundus images
CN107680684B (en) * 2017-10-12 2021-05-07 百度在线网络技术(北京)有限公司 Method and device for acquiring information
CN108615051B (en) * 2018-04-13 2020-09-15 博众精工科技股份有限公司 Diabetic retina image classification method and system based on deep learning
CN108596895B (en) * 2018-04-26 2020-07-28 上海鹰瞳医疗科技有限公司 Fundus image detection method, device and system based on machine learning

Also Published As

Publication number Publication date
WO2019206208A1 (en) 2019-10-31
CN108596895A (en) 2018-09-28
US20210042912A1 (en) 2021-02-11
US11501428B2 (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN108596895B (en) Fundus image detection method, device and system based on machine learning
CN108577803B (en) Fundus image detection method, device and system based on machine learning
CN108346149B (en) Image detection and processing method and device and terminal
CN110060237B (en) Fault detection method, device, equipment and system
US10115191B2 (en) Information processing apparatus, information processing system, information processing method, program, and recording medium
CN109697719B (en) Image quality evaluation method and device and computer readable storage medium
KR20180065889A (en) Method and apparatus for detecting target
CN110728681B (en) Mura defect detection method and device
US11068740B2 (en) Particle boundary identification
CN113096097A (en) Blood vessel image detection method, detection model training method, related device and equipment
Borthakur et al. A comparative study of automated pcb defect detection algorithms and to propose an optimal approach to improve the technique
CN111753775B (en) Fish growth assessment method, device, equipment and storage medium
CN116993654A (en) Camera module defect detection method, device, equipment, storage medium and product
Cloppet et al. Adaptive fuzzy model for blur estimation on document images
KR102257998B1 (en) Apparatus and method for cell counting
CN109543565B (en) Quantity determination method and device
CN112712004B (en) Face detection system, face detection method and device and electronic equipment
CN116433671B (en) Colloidal gold detection method, system and storage medium based on image recognition
CN115423804B (en) Image calibration method and device and image processing method
TWI737447B (en) Image processing method, electronic device and storage device
EP3659333A1 (en) Evaluation of dynamic ranges of imaging devices
CN113763354A (en) Image processing method and electronic equipment
CN110473198A (en) Deep learning system and method
CN116797478A (en) Method and device for enhancing contraband data of X-ray security inspection image and computer equipment
CN116758032A (en) Identification method, identification device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Dalei

Inventor after: Zhao Xin

Inventor after: Xiong Jianhao

Inventor after: Li Shulei

Inventor after: Ma Yongpei

Inventor before: Zhao Xin

Inventor before: Xiong Jianhao

Inventor before: Li Shulei

Inventor before: Ma Yongpei

CB03 Change of inventor or designer information
TR01 Transfer of patent right

Effective date of registration: 20220803

Address after: Room 21, floor 4, building 2, yard a 2, North West Third Ring Road, Haidian District, Beijing 100083

Patentee after: Beijing Yingtong Technology Development Co.,Ltd.

Patentee after: SHANGHAI EAGLEVISION MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 200030 room 01, 8 building, 1 Yizhou Road, Xuhui District, Shanghai, 180

Patentee before: SHANGHAI EAGLEVISION MEDICAL TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right