CN114419377A - Image classification method based on deep neural network

Image classification method based on deep neural network

Info

Publication number
CN114419377A
Authority
CN
China
Prior art keywords
image
classification
neural network
resolution model
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210308441.2A
Other languages
Chinese (zh)
Other versions
CN114419377B (en)
Inventor
吉杰
张铭志
岑令平
林建伟
邱坤良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mediworks Precision Instruments Co Ltd
Original Assignee
Shantou University Chinese University Of Hong Kong And Shantou International Ophthalmology Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University Chinese University Of Hong Kong And Shantou International Ophthalmology Center filed Critical Shantou University Chinese University Of Hong Kong And Shantou International Ophthalmology Center
Priority to CN202210308441.2A priority Critical patent/CN114419377B/en
Publication of CN114419377A publication Critical patent/CN114419377A/en
Application granted granted Critical
Publication of CN114419377B publication Critical patent/CN114419377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; Retina; Ophthalmic

Abstract

The invention discloses an image classification method based on a deep neural network, comprising the following steps: a two-level classification system is established in which the categories that require recognizing fine image details are placed into one large class, the categories related only to a certain local region of the image are placed into another large class, and the remaining disease types or lesions are assigned to large classes of their own. A retinal fundus image is input, and a normal-resolution model first assigns the large class; for the large class that requires recognizing image details, a large-resolution model then distinguishes the subclasses, and for the large class related only to a small local region of the image, that region is first located and cropped and a small-resolution model then distinguishes the subclasses. Compared with traditional classification methods, the method significantly improves classification accuracy.

Description

Image classification method based on deep neural network
Technical Field
The invention belongs to the field of artificial intelligence deep learning and computer vision, and particularly relates to an image classification method based on a deep neural network.
Background
Image classification is widely applied and is the most basic problem in computer vision; solutions to other problems such as object detection, semantic segmentation and instance segmentation also depend on it. Deep neural networks achieved their first breakthrough in image classification and have since made enormous progress: network structures have evolved from AlexNet to VGG and ResNet, and on to more recent architectures such as SE-Net and NASNet; classification settings include binary classification, multi-class classification (Multi-Class) and multi-label classification (Multi-Label); and with techniques such as image preprocessing, data augmentation, multi-model ensembling, and feeding the same image into the neural network several times after different processing (e.g. random cropping) and merging the predictions, classification accuracy keeps improving. Neural-network-based image classification has surpassed human accuracy in competitions such as ImageNet and performs very well in most practical application scenarios.
Current image classification uses a deep neural network model with a single input resolution for a given classification problem. Although this causes no trouble in most scenarios, it performs very poorly in others, namely scenarios in which some categories depend on the overall characteristics of the image, some categories depend only on tiny details of the image, and some categories relate only to a very small local part of the image.
Methods such as the Feature Pyramid Network (FPN) can merge features of different levels inside a deep neural network: low-level features, whose localization is accurate but whose semantic information is limited, and high-level features, which are semantically rich but localize the target only coarsely. Feature pyramids work well for object detection and related tasks, but they cannot solve the problem of huge feature-level differences between categories in complex classification scenarios. A deep neural network model with a single input resolution therefore cannot simultaneously account for the overall, local and detail features of the image to be recognized, and multi-scale architectures such as feature pyramids do not resolve this either.
Take the classification of retinal fundus images as an example: a fundus image is input, and the output states whether the image is normal or shows a certain lesion. Some of these categories relate to the overall characteristics of the image, such as tessellated (leopard-pattern) fundus and retinitis pigmentosa. Some categories require recognizing minute details of the image, such as grade DR1 of diabetic retinopathy (DR; the international standard classifies DR into five grades, DR1-DR5), which is characterized by microaneurysms that appear as small red dots only a few pixels in size in the original image. Still other categories relate only to a small region of the original image: glaucoma and optic atrophy relate only to the optic disc and its surrounding region, macular edema only to the macular region, and so on.
In view of the above, if all classes are distinguished by a deep neural network with a single resolution, the model's input resolution must be very large (e.g. 512 x 512) so that details such as microaneurysms can be recognized and the DR1 class can be classified well. A large model is necessary for diagnosing DR1, but it brings drawbacks for the other categories: more model parameters and longer training and prediction times, and in particular, large models are more prone to overfitting, which reduces classification accuracy.
Theoretically, it is entirely feasible to use an ordinary deep neural network model on the original image to distinguish classes that relate only to a small part of it (e.g. glaucoma and optic atrophy), but in practice this often fails. The reason is that a deep neural network learns to extract features automatically from the training samples; because the real discriminative features are concentrated in a small region of the original image while the remaining regions contain a large number of irrelevant features, a very large number of training samples would be needed to teach the network to extract the correct features. In reality such quantities are often unobtainable: in the field of medical imaging, for example, labeled images are very costly to acquire, and for many disease types (such as rare diseases) sufficient samples may be impossible to obtain.
Disclosure of Invention
In order to solve the above problems, the present invention provides the following solution: an image classification method based on a deep neural network, comprising the following steps:
establishing a two-level classification system for image classification, inputting the fundus image into the two-level classification system for image classification to obtain a classification result, and judging from the classification result whether the image is normal.
Preferably, establishing the two-level classification system comprises placing the categories that require recognizing tiny image details into a first large class, within which subclasses are then distinguished; placing the categories related only to a certain local part of the image into a second large class, within which subclasses are then distinguished; and placing the other disease or feature types into further large classes.
Preferably, before the fundus image is input into the two-level classification system, a deep neural network model is constructed;
the deep neural network model comprises a normal-resolution model, a large-resolution model and a small-resolution model;
the input resolution of the large-resolution model is 1.4 times that of the normal-resolution model, and the input resolution of the small-resolution model is 0.7 times that of the normal-resolution model.
Preferably, inputting the fundus image into the two-level classification system for image classification comprises first assigning the fundus image to a large class with the normal-resolution model; for the first large class, distinguishing its subclasses with the large-resolution model; and for the second large class, locating and cropping the small local region concerned and then distinguishing its subclasses with the small-resolution model.
Preferably, obtaining the classification result comprises obtaining a first classification result;
obtaining the first classification result comprises classifying the fundus image, with the normal-resolution model and according to what must be recognized in the image, into the first large class, the second large class or one of the other large classes;
the first large class covers categories that require recognizing tiny details of the image;
the second large class covers categories related only to a certain local region of the original image.
Preferably, obtaining the classification result comprises obtaining a second classification result;
obtaining the second classification result comprises: for a classification result in the first large class, dividing the subclasses under the first large class with the large-resolution model; and for a classification result in the second large class, locating and cropping the classification-relevant region in the fundus image and dividing the subclasses under the second large class with the small-resolution model.
The invention discloses the following technical effects:
the invention provides an image classification method based on a deep neural network, which is used for classifying images of fundus oculi of collected retina by constructing an initial neural network model for setting classification hierarchical structure relationship for image classification; and judging whether the image is normal or a certain lesion according to the classification result. The method can obviously improve the classification accuracy compared with the traditional method aiming at the scenes that some categories are based on the overall characteristics of the image, some categories are only related to tiny details of the image, and some categories are only related to a certain tiny local part of the image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a network structure diagram of a custom large model Resnet according to an embodiment of the present invention;
fig. 3 is a network structure diagram of the custom small model Resnet according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from the given embodiments without creative effort shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the present invention provides an image classification method based on a deep neural network, including:
establishing a two-level classification system for image classification, inputting the fundus image into the two-level classification system for image classification to obtain a classification result, and judging from the classification result whether the image is normal.
Establishing the two-level classification system comprises placing the categories that require recognizing tiny image details into a first large class, within which subclasses are then distinguished; placing the categories related only to a certain local part of the image into a second large class, within which subclasses are then distinguished; and placing the other disease or feature types into further large classes.
Before the fundus image is input into the two-level classification system, a deep neural network model is constructed;
the deep neural network model comprises a normal-resolution model, a large-resolution model and a small-resolution model;
the input resolution of the large-resolution model is 1.4 times that of the normal-resolution model, and the input resolution of the small-resolution model is 0.7 times that of the normal-resolution model.
Inputting the fundus image into the two-level classification system for image classification comprises first assigning the large class with the normal-resolution model; for the first large class, distinguishing its subclasses with the large-resolution model; and for the second large class, locating and cropping the small local region concerned and then distinguishing its subclasses with the small-resolution model.
Obtaining the classification result comprises obtaining a first classification result;
obtaining the first classification result comprises classifying the fundus image, with the normal-resolution model and according to what must be recognized in the image, into the first large class, the second large class or one of the other large classes;
the first large class covers categories that require recognizing tiny details of the image;
the second large class covers categories related only to a certain local region of the original image.
Obtaining the classification result comprises obtaining a second classification result;
Obtaining the second classification result comprises: for a classification result in the first large class, dividing the subclasses under the first large class with the large-resolution model; and for a classification result in the second large class, locating and cropping the classification-relevant region in the fundus image and dividing the subclasses under the second large class with the small-resolution model.
Example one
As shown in fig. 1 to 3, the present invention provides an image classification method based on a deep neural network, comprising the following steps:
Firstly, a classification hierarchy is established: the categories that require recognizing tiny details of the image are placed into one large class, and the categories related only to a certain part of the image are placed into another large class. A normal-resolution model is used to classify the large classes, after which subclasses are distinguished: for the large class that requires fine detail, a large-resolution model divides the subclasses; for each large class related only to a certain small part of the image, the target region of the original image is first located and cropped, and a low-resolution deep neural network model then classifies the cropped image. The input of the large model is 1.4 times the resolution of the normal model, and the input of the small model is 0.7 times the resolution of the normal model. A minimal sketch of this two-stage routing is given below.
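The following Python sketch illustrates the two-stage routing; it is an illustration under assumptions, not the patent's implementation. The three model objects and the locate_disc() helper are hypothetical placeholders, and the input sizes (299, 448 and 112) are taken from the embodiment described below.

import cv2
import numpy as np

def classify(image, big_class_model, large_model, small_model, locate_disc):
    # Stage 1: assign the large class with the normal-resolution model
    # (299 x 299 here, matching the Inception-style models used below).
    x = cv2.resize(image, (299, 299))[np.newaxis].astype("float32")
    big_class = int(big_class_model.predict(x).argmax(axis=-1)[0])
    if big_class == 0:
        # Subclasses depend on tiny details (normal vs. DR1): re-classify
        # the whole image with the large-resolution model.
        x = cv2.resize(image, (448, 448))[np.newaxis].astype("float32")
        return big_class, int(large_model.predict(x).argmax(axis=-1)[0])
    if big_class == 1:
        # Subclasses depend on a small region (glaucoma vs. optic atrophy):
        # locate and crop the optic disc, then use the small model.
        crop = locate_disc(image)  # hypothetical localization helper
        x = cv2.resize(crop, (112, 112))[np.newaxis].astype("float32")
        return big_class, int(small_model.predict(x).argmax(axis=-1)[0])
    return big_class, None  # the other large classes need no second stage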
The technical scheme of the invention is described in further detail below.
As shown in fig. 1, the image classification method based on a deep neural network proposed by the invention involves a large class A and a large class B, each of which contains several subclasses; distinguishing the subclasses of large class A requires recognizing minute details of the image, while the subclasses of large class B relate only to a very small local region of the original image.
Taking the classification of retinal fundus images as an example: a fundus image is input, and the output states whether the image is normal or shows a certain lesion. The key point is that some of these categories relate to the overall characteristics of the image, such as tessellated (leopard-pattern) fundus and retinitis pigmentosa. Some categories require recognizing minute details of the image, such as DR1 of diabetic retinopathy, which is characterized by microaneurysms that appear as small red dots only a few pixels in size in the original image. And some categories relate only to a small region of the original image, e.g. glaucoma and optic atrophy relate only to the optic disc and its surrounding region, macular edema only to the macular region, and so on.
Unlike a traditional deep neural network that uses a single resolution in the form of binary, multi-class (Multi-Class) or multi-label (Multi-Label) classification, we first define a classification hierarchy: fundus images are divided into 29 large classes, some of which contain several subclasses. The 29 large classes include class 0 (comprising normal and DR1), class 1 (comprising glaucoma and optic atrophy), BRVO (branch retinal vein occlusion), CRVO (central retinal vein occlusion), RP (retinitis pigmentosa), referable DR (diabetic retinopathy requiring treatment), RD (retinal detachment), silicone oil in the eye, and so on (for brevity the other large classes are omitted here; some of them also contain subclasses). The core idea of designing the class hierarchy is: place the categories that require recognizing minute image details into one large class and then distinguish the subclasses within it, and place the categories related only to a certain local part of the image into another large class and then distinguish the subclasses within it. A partial sketch of such a hierarchy in code is given below.
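As an illustration only, the hierarchy can be recorded in a plain mapping. In the sketch below the numeric indices beyond class 0 and class 1, and the field names, are assumptions; the large classes omitted from the text are likewise omitted here.

CLASS_HIERARCHY = {
    0: {"subclasses": ("normal", "DR1"),
        "second_stage": "large-resolution model"},              # tiny details
    1: {"subclasses": ("glaucoma", "optic atrophy"),
        "second_stage": "crop disc + small-resolution model"},  # local region
    2: {"name": "BRVO (branch retinal vein occlusion)"},
    3: {"name": "CRVO (central retinal vein occlusion)"},
    4: {"name": "RP (retinitis pigmentosa)"},
    5: {"name": "referable DR"},
    6: {"name": "RD (retinal detachment)"},
    7: {"name": "silicone oil in the eye"},
    # ... remaining large classes, 29 in total, omitted as in the text
}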
After the classification hierarchy is defined, classification proceeds in two stages: a normal-resolution model is first used to assign the large class; then, for a large class whose subclasses require recognizing minute image details, a large-resolution model distinguishes the subclasses, and for each large class related only to a certain small local region of the original image, the target region is first located and cropped and a low-resolution deep neural network model then classifies the cropped image.
Since distinguishing the large classes does not require recognizing subtle image details, common models can be used, such as Inception-V3 and Inception-ResNet-V2 (input resolution 299 x 299) or VGGNet and ResNet (residual networks, input resolution 224 x 224). This embodiment was developed with the TensorFlow + Keras framework; Keras includes built-in implementations of most common models, including all of those mentioned above. Several models of different types are trained independently and their prediction results are combined (ensemble learning) to improve accuracy; a sketch of the ensemble step is given after the prediction code below. Simplified implementation code is as follows:
from tensorflow import keras

NUM_BIG_CLASSES = 29  # define the number of large classes as 29
# Define the deep neural network model using the Keras built-in InceptionResNetV2
model1 = keras.applications.InceptionResNetV2(include_top=True, weights=None, classes=NUM_BIG_CLASSES)
After the model is trained, it is used for classification; the output is the probability of each of the 29 large classes:
probabilities=model1.predict(x)
The class with the largest probability value is selected as the predicted class:
y_pred1 = probabilities.argmax(axis=-1)
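The ensemble step mentioned above can be sketched as follows; averaging the softmax outputs is one common choice, and the pairing of InceptionResNetV2 with InceptionV3 is an assumption for illustration.

import numpy as np
from tensorflow import keras

NUM_BIG_CLASSES = 29
# Two independently trained large-class models of different types
model_a = keras.applications.InceptionResNetV2(include_top=True, weights=None, classes=NUM_BIG_CLASSES)
model_b = keras.applications.InceptionV3(include_top=True, weights=None, classes=NUM_BIG_CLASSES)

def ensemble_predict(models, x):
    # Average the predicted class probabilities of all models,
    # then take the most probable large class
    probabilities = np.mean([m.predict(x) for m in models], axis=0)
    return probabilities.argmax(axis=-1)

# y_pred = ensemble_predict([model_a, model_b], x)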
if an image is classified as large as class 0, it is then classified as a subclass, i.e., normal or diabetic retinopathy DR 1. Since the DR1 is characterized by microangiomas which appear as small red dots on the image, the details of the original image, which are only several pixels in size, i.e. several pixels in size, may determine the category of the whole image, and therefore, the image must be identified by a deep neural network model with a large resolution.
The standard ResNet input size is 224 x 224; because the model uses a global average pooling layer (Global Average Pooling Layer) instead of a fully connected layer (Fully Connected Layer), it can accept inputs of other sizes, although they should not differ too much from 224. A standard ResNet contains 4 convolution blocks (Conv Blocks), each consisting of several residual blocks (Residual Blocks).
The invention makes the following improvements: the input image is doubled to 448 x 448 and one convolution block is added accordingly; the convolution kernel of the first convolution layer is changed from 7 x 7 to 5 x 5, and the number of kernels is reduced from 64 to 32 (from the feature-extraction perspective 32 is sufficient, and the reduction greatly lowers memory consumption and improves training and prediction speed). The model otherwise follows the ResNet-V2 structure, including pre-activation (the non-linear activation layers precede the convolution layers) and bottleneck residual blocks (each residual block contains three convolution layers: 1 x 1, 3 x 3, 1 x 1); the structure is shown in fig. 2. Using fundus images provided by the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong (the fifth affiliated hospital of Shantou University Medical College), subclassing class 0 with the customized large model improves classification accuracy by 8.5% over the common model, with clearly improved sensitivity and specificity. The confusion matrices obtained with the customized model are as follows, and an illustrative sketch of the network is given after them.
Training set:
[confusion matrix figure]
Validation set:
[confusion matrix figure]
this accuracy is already good, approaching the level of a specialist, since DR1 is difficult to detect.
If an image is classified into large class 1, it needs further classification to distinguish glaucoma from optic atrophy. This distinction relates only to the optic disc and its surrounding area, which occupy a small portion of the whole fundus image. This embodiment therefore first locates and crops the optic disc area and then classifies the cropped image with a low-resolution deep neural network. There are many ways to locate the optic disc: traditional image processing methods and deep neural network methods. Traditional image processing methods include those based on image histograms, templates, combined vessel trends and so on. Locating the disc with a deep neural network also admits different approaches: direct localization, object detection, semantic segmentation and instance segmentation. Direct localization is implemented by regression: because each fundus image contains at most one optic disc, the output layer of the deep neural network is modified to output the upper-left and lower-right coordinates of the disc area (a sketch of this regression head is given after this paragraph). Object detection methods fall into single-stage and two-stage approaches: single-stage methods include YOLO (You Only Look Once), YOLOv2, YOLOv3, SSD (Single Shot MultiBox Detector) and the more recent RetinaNet, while two-stage methods include Faster R-CNN (faster region-based convolutional network), R-FCN (region-based fully convolutional network) and the like. Semantic segmentation methods include FCN, U-Net (U-shaped network) and the like, and instance segmentation methods include Mask R-CNN (mask-generating region convolutional network) and the like.
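The regression variant can be sketched as follows; the backbone, the input size and the loss below are assumptions for illustration, not the patent's specification.

from tensorflow import keras
from tensorflow.keras import layers

# Backbone with the classification head removed, pooled to a feature vector
backbone = keras.applications.ResNet50V2(include_top=False, weights=None,
                                         input_shape=(224, 224, 3), pooling="avg")
# 4-unit linear output: normalized (x1, y1, x2, y2) corners of the disc box
bbox = layers.Dense(4, activation="sigmoid", name="disc_bbox")(backbone.output)
locator = keras.Model(backbone.input, bbox)
locator.compile(optimizer="adam", loss="mse")  # regress the box corners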
In this embodiment several different methods of locating and cropping the optic disc area were tried: first Matlab was used to locate the disc from the image histogram, then a RetinaNet object detection model, and finally the instance segmentation model Mask R-CNN. The deep neural network approaches are more accurate than the traditional image method, and the output of Mask R-CNN comprises three parts: a confidence score, the BBOX coordinates of the target, and a binary mask. The confidence score indicates whether the disc was detected reliably (neither the image histogram method nor plain localization with a deep neural network provides a confidence), and the segmented disc mask (which object detection with RetinaNet does not output) can later be used to segment the optic cup within the disc, compute the cup-to-disc ratio, and so on.
Assuming results is the output of Mask R-CNN, the code to crop the optic disc area according to the BBOX coordinates, using Python + OpenCV, is as follows:
y1, x1, y2, x2 = results[0]['rois'][0]  # obtain the coordinates of the BBOX
center_x = (x2 + x1) // 2  # obtain the coordinates of the disc center point
center_y = (y2 + y1) // 2
r = max((x2 - x1) // 2, (y2 - y1) // 2)
r = r + 40  # margin so the crop covers the disc and its surroundings
# Square crop window, clipped to the image borders
left = int(max(0, center_x - r))
right = int(min(image.shape[1], center_x + r))
bottom = int(max(0, center_y - r))
top = int(min(image.shape[0], center_y + r))  # shape[0] is the image height
image_crop = image[bottom:top, left:right]
image_crop = cv2.resize(image_crop, (112, 112))
The output image of this procedure is 112 x 112 and covers the optic disc and its surrounding area. Although the cropped image is much smaller, the resolution it offers over the disc is higher than what a common-input model sees when the whole original image is resized for it. The cropped image is then input into a customized low-resolution deep neural network to further distinguish the glaucoma and optic atrophy subclasses.
The standard ResNet input size is 224 x 224 with 4 convolution blocks; since the cropped disc image is 112 x 112, only half the standard input size, a custom-designed model is required in which one convolution block is deleted, as shown in fig. 3. An illustrative sketch is given below.
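A sketch of this small-resolution model, reusing the pre-activation bottleneck() unit from the large-model sketch above; the stem and the per-block depths are again assumptions, and fig. 3 defines the actual structure.

from tensorflow import keras
from tensorflow.keras import layers

def build_small_resnet(num_classes=2):
    inputs = keras.Input(shape=(112, 112, 3))
    x = layers.Conv2D(64, 7, strides=2, padding="same")(inputs)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    # Three convolution blocks instead of the standard four
    for i, (filters, blocks) in enumerate([(64, 2), (128, 2), (256, 2)]):
        for j in range(blocks):
            x = bottleneck(x, filters, stride=2 if (j == 0 and i > 0) else 1)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)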
Using the fundus images provided by the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, classifying the subclasses of class 1 with a common model yields high training accuracy but validation accuracy close to blind guessing; moreover, the thermodynamic diagrams (Class Activation Maps) output by the model show that the deep neural network does not extract the correct features, most of the extracted features lying in irrelevant regions. After cropping, classification with the customized small model works very well, and the confusion matrices are as follows:
training set:
Figure 237483DEST_PATH_IMAGE003
and (4) verification set:
Figure 701962DEST_PATH_IMAGE004
analysis is carried out on thermodynamic diagrams (Class Activation Maps) output by the partial image custom model, and the thermodynamic diagrams clearly show that a red area (with a higher value) is a key area of a real lesion. According to comprehensive analysis of the confusion matrix and the thermodynamic diagram, the classification method is obviously improved compared with the traditional method.
The above-described embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope. Various modifications and improvements that those skilled in the art make to the technical solutions of the present invention without departing from its spirit shall fall within the protection scope defined by the claims.

Claims (6)

1. An image classification method based on a deep neural network is characterized by comprising the following steps:
establishing a two-level classification system for image classification, inputting a fundus image into the two-level classification system for image classification to obtain a classification result, and judging from the classification result whether the image is normal.
2. The deep neural network-based image classification method according to claim 1,
establishing the two-level classification system comprises placing the categories that require recognizing tiny image details into a first large class, within which subclasses are then distinguished; placing the categories related only to a certain local part of the image into a second large class, within which subclasses are then distinguished; and placing the other disease or feature types into further large classes.
3. The deep neural network-based image classification method according to claim 2,
before the fundus image is input into the two-level classification system, constructing a deep neural network model;
the deep neural network model comprises a normal-resolution model, a large-resolution model and a small-resolution model;
the input resolution of the large-resolution model is 1.4 times that of the normal-resolution model, and the input resolution of the small-resolution model is 0.7 times that of the normal-resolution model.
4. The deep neural network-based image classification method according to claim 3,
inputting the fundus image into the two-level classification system for image classification comprises first assigning the fundus image to a large class with the normal-resolution model; for the first large class, distinguishing its subclasses with the large-resolution model; and for the second large class, locating and cropping the small local region concerned and then distinguishing its subclasses with the small-resolution model.
5. The deep neural network-based image classification method according to claim 4,
obtaining the classification result comprises obtaining a first classification result;
obtaining the first classification result comprises classifying the fundus image, with the normal-resolution model and according to what must be recognized in the image, into the first large class, the second large class or one of the other large classes;
the first large class covers categories that require recognizing tiny details of the image;
the second large class covers categories related only to a certain local region of the original image.
6. The deep neural network-based image classification method according to claim 4,
obtaining the classification result comprises obtaining a second classification result;
obtaining the second classification result comprises: for a classification result in the first large class, dividing the subclasses under the first large class with the large-resolution model; and for a classification result in the second large class, locating and cropping the classification-relevant region in the fundus image and dividing the subclasses under the second large class with the small-resolution model.
CN202210308441.2A 2022-03-28 2022-03-28 Image classification method based on deep neural network Active CN114419377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210308441.2A CN114419377B (en) 2022-03-28 2022-03-28 Image classification method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210308441.2A CN114419377B (en) 2022-03-28 2022-03-28 Image classification method based on deep neural network

Publications (2)

Publication Number Publication Date
CN114419377A true CN114419377A (en) 2022-04-29
CN114419377B CN114419377B (en) 2022-06-24

Family

ID=81263441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210308441.2A Active CN114419377B (en) 2022-03-28 2022-03-28 Image classification method based on deep neural network

Country Status (1)

Country Link
CN (1) CN114419377B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100183225A1 (en) * 2009-01-09 2010-07-22 Rochester Institute Of Technology Methods for adaptive and progressive gradient-based multi-resolution color image segmentation and systems thereof
CN113205085A (en) * 2021-07-05 2021-08-03 武汉华信数据系统有限公司 Image identification method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邓旭冉 (DENG Xuran) et al.: "A survey of deep fine-grained image recognition", Journal of Nanjing University of Information Science & Technology (Natural Science Edition), No. 06, 28 November 2019 (2019-11-28) *

Also Published As

Publication number Publication date
CN114419377B (en) 2022-06-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230810

Address after: Building 7, Mingpu Square, No. 3279 Sanlu Road, Minhang District, Shanghai, 201100

Patentee after: SHANGHAI MEDIWORKS PRECISION INSTRUMENTS Co.,Ltd.

Address before: 515000 Dongxia North Road, Jinping District, Shantou City, Guangdong Province

Patentee before: Shantou University The Chinese University of Hong Kong and Shantou International Ophthalmology Center