CN115985472B - Fundus image labeling method and fundus image labeling system based on neural network - Google Patents

Info

Publication number
CN115985472B
Authority
CN
China
Prior art keywords
image
fundus image
neural network
labeling
fundus
Prior art date
Legal status
Active
Application number
CN202211524350.9A
Other languages
Chinese (zh)
Other versions
CN115985472A (en)
Inventor
何文淦
郝宇飞
王凯
何海燕
李雨浛
刘浚源
Current Assignee
Zhuhai Quanyi Technology Co ltd
Original Assignee
Zhuhai Quanyi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Quanyi Technology Co ltd filed Critical Zhuhai Quanyi Technology Co ltd
Priority to CN202211524350.9A
Publication of CN115985472A
Application granted
Publication of CN115985472B
Legal status: Active
Anticipated expiration

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The application discloses a fundus image labeling method and a fundus image labeling system based on a neural network, wherein the method comprises the following steps: acquiring an initial fundus image of a diabetic patient, and extracting fundus image features from the initial fundus image by using a preset first neural network; searching for similar images from a preset online database based on the fundus image features, wherein the preset online database is composed of a plurality of data-preprocessed fundus images; and performing lesion labeling on the initial fundus image according to the similar images to obtain a labeled image. According to the application, after the initial fundus image of a diabetic patient is acquired, the first neural network is used to extract image features from the initial fundus image, similar images are queried from the online database based on the image features, and lesion labeling is performed on the initial fundus image according to the similar images for doctors to refer to, thereby shortening the time required for diagnosis and improving diagnostic efficiency.

Description

Fundus image labeling method and fundus image labeling system based on neural network
Technical Field
The application relates to the technical field of image labeling, in particular to a fundus image labeling method and a fundus image labeling system based on a neural network.
Background
Diabetic retinopathy is a complication caused by diabetes mellitus. Common pathological changes include microaneurysms, hard exudates, cotton wool spots, vitreous hemorrhage, neovascularization, and the like. According to international clinical grading standards, the severity of diabetic retinopathy is classified into five grades (0-4). More severe diabetic retinopathy may cause vision impairment and blindness. Therefore, diabetic patients need to undergo periodic fundus examinations to discover lesions in time and slow or stop the progression of the condition.
The current common examination method is fundus photography: the fundus lesion condition is recorded through images, which are then sent to doctors for diagnosis and follow-up.
However, the current common method has the following technical problems: the traditional examination mode requires a professional doctor to make the diagnosis, which is time-consuming and costly. Given the uneven distribution of medical resources and the limited number of professional doctors, the demand for automatic diagnosis of diabetic retinopathy is growing, and the traditional method can hardly meet users' application needs.
Disclosure of Invention
The application provides a fundus image labeling method and a fundus image labeling system based on a neural network. After an initial fundus image of a diabetic patient is acquired, image features are extracted from the initial fundus image using the neural network, similar images are queried from an online database based on the image features, and lesion labeling is performed on the initial fundus image according to the similar images for doctors to refer to, thereby shortening the time required for diagnosis and improving diagnostic efficiency.
A first aspect of an embodiment of the present application provides a fundus image labeling method based on a neural network, the method including:
acquiring an initial fundus image of a diabetic patient, and extracting fundus image features from the initial fundus image by using a preset first neural network;
searching for similar images from a preset online database based on the fundus image features, wherein the preset online database is composed of a plurality of data-preprocessed fundus images;
and performing lesion labeling on the initial fundus image according to the similar image to obtain a labeled image.
In a possible implementation manner of the first aspect, the performing lesion labeling on the initial fundus image according to the similar image to obtain a labeled image includes:
predicting case information of the diabetic patient corresponding to the initial fundus image by using a preset second neural network, wherein the case information comprises: gender, age, smoking history, and hypertension;
performing result correction on the similar images based on the case information to obtain corrected images;
performing interpretable auxiliary diagnosis on the corrected image using a class activation mapping technique to obtain the lesion region of the image;
and performing activation mapping on the initial fundus image according to the lesion region, and highlighting the lesion region corresponding to the initial fundus image in the form of a heat map to obtain a labeled image.
In a possible implementation manner of the first aspect, the performing, based on the case information, result correction on the similar image to obtain a corrected image includes:
converting the case information into correction coefficients;
calculating the image similarity by adopting the correction coefficient;
and correcting the similar images by utilizing the image similarity to obtain corrected images.
In a possible implementation manner of the first aspect, the first neural network is an image processing model for image feature extraction;
the second neural network is an information processing model for information classification judgment.
In a possible implementation manner of the first aspect, the searching for similar images from a preset online database based on the fundus image features includes:
calculating the feature similarity between each fundus image in the preset online database and the fundus image features to obtain a plurality of feature similarities;
and selecting the feature similarity with the largest value from the plurality of feature similarities, and taking the fundus image corresponding to the feature similarity with the largest value as the similar image.
In a possible implementation manner of the first aspect, the labeled image dataset used for training the preset first neural network includes both auxiliary-labeled and manually labeled image data;
the model training comprises:
sequentially performing data preprocessing, data enhancement, sample resampling, and auxiliary labeling on the image dataset to obtain a training dataset;
and performing model training on the deep neural network by adopting the training data set to obtain a preset first neural network.
In a possible implementation manner of the first aspect, after the step of labeling the initial fundus image according to the similar image, the method further includes:
storing the labeled images into a preset image database, and counting the number of images in the preset image database;
and if the number of images is larger than a preset quantity threshold, retraining the preset neural network.
A second aspect of an embodiment of the present application provides a fundus image labeling system based on a neural network, the system including:
the extraction module is used for acquiring an initial fundus image of a diabetic patient and extracting fundus image characteristics from the initial fundus image by utilizing a preset first neural network;
the searching module is used for searching for similar images from a preset online database based on the fundus image features, wherein the preset online database is composed of a plurality of data-preprocessed fundus images;
and the labeling module is used for performing lesion labeling on the initial fundus image according to the similar images to obtain a labeled image.
In a possible implementation manner of the second aspect, the labeling module is further configured to:
predicting case information of the diabetic patient corresponding to the initial fundus image by using a preset second neural network, wherein the case information comprises: gender, age, smoking history, and hypertension;
performing result correction on the similar images based on the case information to obtain corrected images;
performing interpretable auxiliary diagnosis on the corrected image using a class activation mapping technique to obtain the lesion region of the image;
and performing activation mapping on the initial fundus image according to the lesion region, and highlighting the lesion region corresponding to the initial fundus image in the form of a heat map to obtain a labeled image.
In a possible implementation manner of the second aspect, the labeling module is further configured to:
converting the case information into correction coefficients;
calculating the image similarity by adopting the correction coefficient;
and correcting the similar images by utilizing the image similarity to obtain corrected images.
In a possible implementation manner of the second aspect, the first neural network is an image processing model for image feature extraction;
the second neural network is an information processing model for information classification judgment.
In a possible implementation manner of the second aspect, the search module is further configured to:
calculating the feature similarity between each fundus image in the preset online database and the fundus image features to obtain a plurality of feature similarities;
and selecting the feature similarity with the largest value from the plurality of feature similarities, and taking the fundus image corresponding to the feature similarity with the largest value as the similar image.
In a possible implementation manner of the second aspect, the annotated image dataset used for training the preset first neural network includes both auxiliary-annotated and manually annotated image data;
the model training comprises:
sequentially performing data preprocessing, data enhancement, sample resampling, and auxiliary labeling on the image dataset to obtain a training dataset;
and performing model training on the deep neural network by adopting the training data set to obtain a preset first neural network.
In a possible implementation manner of the second aspect, the system further includes:
the statistics module is used for storing the labeled images into a preset image database and counting the number of images in the preset image database;
and the retraining module is used for retraining the preset neural network if the number of images is larger than the preset quantity threshold.
Compared with the prior art, the fundus image labeling method and system based on the neural network provided by the embodiments of the application have the following beneficial effects: after the initial fundus image of a diabetic patient is acquired, the neural network is used to extract image features from the initial fundus image, similar images are queried from the online database based on the image features, and lesion labeling is performed on the initial fundus image according to the similar images for doctors to refer to, thereby shortening the time required for diagnosis and improving diagnostic efficiency.
Drawings
Fig. 1 is a schematic flow chart of a fundus image labeling method based on a neural network according to an embodiment of the present application;
Fig. 2 is an operation flow chart of a fundus image labeling method based on a neural network according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a fundus image labeling system based on a neural network according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the embodiments of the present application with reference to the accompanying drawings. Apparently, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the application without creative effort shall fall within the protection scope of the application.
Diabetic retinopathy is a complication caused by diabetes mellitus. Common pathological changes include microaneurysms, hard exudates, cotton wool spots, vitreous hemorrhage, neovascularization, and the like. According to international clinical grading standards, the severity of diabetic retinopathy is classified into five grades (0-4). More severe diabetic retinopathy may cause vision impairment and blindness. Therefore, diabetic patients need to undergo periodic fundus examinations to discover lesions in time and slow or stop the progression of the condition.
The current common examination method is fundus photography: the fundus lesion condition is recorded through images, which are then sent to doctors for diagnosis and follow-up.
However, the current common method has the following technical problems: the traditional examination mode requires a professional doctor to make the diagnosis, which is time-consuming and costly. Given the uneven distribution of medical resources and the limited number of professional doctors, the demand for automatic diagnosis of diabetic retinopathy is growing, and the traditional method can hardly meet users' application needs.
In order to solve the above problems, a fundus image labeling method based on a neural network according to an embodiment of the present application will be described and illustrated in detail by the following specific examples.
Referring to fig. 1, a flowchart of a fundus image labeling method based on a neural network according to an embodiment of the present application is shown.
In an embodiment, the fundus image labeling method based on the neural network can be applied to a fundus image labeling system based on the neural network. The system can be installed and applied to a computer terminal.
The fundus image labeling method based on the neural network can comprise the following steps:
s11, acquiring an initial fundus image of a diabetic patient, and extracting fundus image features from the initial fundus image by using a preset first neural network.
In one embodiment, the initial fundus image may be a fundus image reflecting the diabetic retinopathy grade of a diabetic patient.
Feature extraction may be performed by average-pooling the feature values of the last convolutional layer of the neural network to obtain the image features.
Specifically, feature extraction may use a trained deep neural network model, such as an attention network model, an Inception-v3 model, or a ResNet50 model, as the feature extraction model to extract features from the fundus pictures.
Alternatively, feature data (feature vectors) may be generated and stored in the fundus image feature database.
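As an illustration only, the following is a minimal sketch of this feature-extraction step (not the patent's exact implementation), assuming a torchvision ResNet50 backbone as the feature extraction model; the preprocessing parameters and the example file name are assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed, so the forward pass
# ends at the globally average-pooled feature of the last convolutional layer.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 2048-dimensional pooled feature
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(image_path: str) -> torch.Tensor:
    """Return the average-pooled feature vector of one fundus image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feature = backbone(preprocess(img).unsqueeze(0))  # shape (1, 2048)
    return feature.squeeze(0)

# feature = extract_feature("fundus_0001.jpg")  # hypothetical file name
```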
In an alternative embodiment, the preset first neural network is obtained by model training on an annotated image dataset, where the annotated image dataset includes both auxiliary-annotated and manually annotated image data.
Wherein, as an example, the model training may comprise the sub-steps of:
and S111, sequentially carrying out data preprocessing, data enhancement, sample resampling and auxiliary labeling on the image data set to obtain a training data set.
And S112, performing model training on the deep neural network by adopting the training data set.
Specifically, the user may preset a database, in which a training dataset already labeled by the user, an unlabeled training dataset, offline feature data, and the like together constitute the image dataset.
The labeled training dataset can consist of fundus images labeled with the degree of diabetic retinopathy. The pictures come from a Kaggle dataset, and the class distribution in the training set is no pathology : mild : moderate : severe : proliferative diabetic retinopathy = 25810 : 2443 : 5292 : 873 : 708.
The unlabeled training dataset contains millions of fundus images.
The offline feature data may be the stored feature data of existing labeled fundus images.
In an alternative embodiment, the application may perform auxiliary labeling. Auxiliary labeling can mean labeling the unlabeled training dataset with a trained model, that is, assigning the five-grade diabetic retinopathy label.
The manual labeling module provides unlabeled fundus images for professionals to label.
Before training, the dataset needs to be processed. The processing may include data preprocessing, data enhancement, sample resampling, and adding auxiliary labels to obtain a training dataset.
Specifically, the data preprocessing includes appropriate cropping, flipping, noise addition, contrast adjustment, and the like of the annotated fundus images.
For example, the fundus image may be appropriately cropped to remove the excess background portion, and then missing values of the image may be filled in and noise removed.
Data enhancement techniques are applied, and samples are resampled to increase the amount of data, for example by resampling existing samples of under-represented classes or by generating new samples with image generation techniques such as CycleGAN.
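A minimal sketch of this preprocessing and resampling step is given below, assuming an ImageFolder-style labeled dataset; the directory name, crop sizes, noise level, and batch size are illustrative, and the CycleGAN-based sample generation mentioned above is omitted.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),                  # crop away excess background
    transforms.RandomHorizontalFlip(),           # flipping
    transforms.ColorJitter(contrast=0.2),        # contrast change
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.01 * torch.randn_like(x)).clamp(0.0, 1.0)),  # mild noise
])

# Hypothetical folder layout: train_images/<grade>/<image>.jpg
dataset = datasets.ImageFolder("train_images/", transform=train_tf)

# Resample so that under-represented retinopathy grades are drawn more often.
class_counts = torch.bincount(torch.tensor(dataset.targets))
sample_weights = (1.0 / class_counts.float())[dataset.targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset), replacement=True)

train_loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```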
Auxiliary labels may be added to the dataset using the trained model.
After the above processing is completed, model training and feature extraction may be performed using the processed dataset to extract features in the image.
Specifically, a pre-trained EfficientNet-B0 model can be employed, developed on the PyTorch framework. There are five classification categories, corresponding to the five grades of diabetic retinopathy. The model is trained with an Adam optimizer and a cross-entropy loss function, and model performance is assessed using the F1 score and accuracy.
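The following minimal training sketch matches the description above (pre-trained EfficientNet-B0 on PyTorch, five classes, Adam, cross-entropy, F1 score and accuracy); the learning rate and the train_loader are assumptions carried over from the preprocessing sketch.

```python
import torch
from torch import nn
from torchvision import models
from sklearn.metrics import f1_score, accuracy_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained EfficientNet-B0 with its head replaced by a five-way classifier.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # illustrative learning rate
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

def evaluate(loader):
    model.eval()
    preds, gts = [], []
    with torch.no_grad():
        for images, labels in loader:
            preds.extend(model(images.to(device)).argmax(dim=1).cpu().tolist())
            gts.extend(labels.tolist())
    return f1_score(gts, preds, average="macro"), accuracy_score(gts, preds)
```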
S12, searching for similar images from a preset online database based on the fundus image features, wherein the preset online database is composed of a plurality of data-preprocessed fundus images.
The online database stores a plurality of data-preprocessed fundus images. These may be fundus images of previous patients that have already been labeled, which facilitates subsequent diagnosis by doctors and improves working efficiency.
In an alternative embodiment, step S12 may comprise the sub-steps of:
S121, calculating the feature similarity between each fundus image in the preset online database and the fundus image features to obtain a plurality of feature similarities.
S122, selecting the feature similarity with the largest value from the plurality of feature similarities, and taking the fundus image corresponding to the feature similarity with the largest value as the similar image.
Specifically, the similarity between the features of the input image and the database features can be calculated using the Euclidean distance, thereby obtaining a plurality of feature similarities.
The top-N fundus images ranked by similarity, together with their related information, are then output to form a result image set, in which the image with the highest similarity is the similar image. The Faiss framework can be used to construct the feature index and perform the retrieval.
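As a sketch of the retrieval step, the following builds an exact L2 (Euclidean) Faiss index over the stored offline features and returns the top-N most similar fundus images; the feature dimensionality and the offline feature file are assumptions.

```python
import faiss
import numpy as np

dim = 2048                                                        # pooled backbone feature size
db_features = np.load("offline_features.npy").astype("float32")  # hypothetical (M, dim) array

index = faiss.IndexFlatL2(dim)   # exact Euclidean-distance search
index.add(db_features)

def top_n_similar(query_feature: np.ndarray, n: int = 5):
    """Return indices and distances of the N most similar stored fundus images."""
    query = query_feature.astype("float32").reshape(1, -1)
    distances, indices = index.search(query, n)   # smaller distance = more similar
    return indices[0], distances[0]
```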
S13, performing lesion labeling on the initial fundus image according to the similar image to obtain a labeled image.
Because the similar image resembles the initial fundus image, the lesion position of the initial fundus image can be determined by referring to the lesion position in the similar image, and lesion labeling can be performed accordingly, which facilitates the doctor's subsequent diagnosis and improves diagnostic efficiency.
In one embodiment, step S13 may include the sub-steps of:
s131, predicting case information of the diabetes patient corresponding to the initial fundus image by using a preset second neural network, wherein the case information comprises: gender, age, smoking history, hypertension.
And S132, correcting the similar images based on the case information to obtain corrected images.
S133, performing interpretable auxiliary diagnosis on the corrected image using a class activation mapping technique to obtain the lesion region of the image.
S134, performing activation mapping on the initial fundus image according to the lesion region, and highlighting the lesion region corresponding to the initial fundus image in the form of a heat map to obtain the labeled image.
Specifically, medical record information of the patient, such as gender, age, smoking history, and hypertension, can be predicted from the input image by the preset prediction model, yielding the diabetic retinopathy grading result of the similar image, the patient's basic information (gender, age, smoking history, blood pressure, etc.) predicted by the system, and the lesion labels for the fundus image to be queried.
The lesion position in the similar image is then determined according to the diagnosis result, and the similar image is corrected according to the lesion position.
In particular, a correction function may be used to correct the diabetic retinopathy grade of the similar image.
In one embodiment, step S132 may include:
s1321, converting the case information into a correction coefficient.
S1322, calculating the image similarity by using the correction coefficient.
S1323, correcting the similar image by using the image similarity to obtain a corrected image.
Specifically, information such as gender, age, smoking history, and hypertension may be converted into correction coefficients, for example: if the patient has hypertension the correction coefficient is 2, otherwise it is 1; for ages 10-20 the correction coefficient is 2, for ages 20-30 it is 3, and so on. The specific conversion can be adjusted according to actual needs.
Each correction coefficient is then substituted into the correction function as a calculation weight to obtain the image similarity.
In one embodiment, the correction function may be represented by the following formula:
Image similarity = W1 × X1 + W2 × X2 + W3 × X3 + W4 × X4.
Here W1 is the correction coefficient for blood pressure (2 for hypertension, 1 otherwise) and X1 is the blood pressure value; W2 is the correction coefficient for smoking history (1 with a smoking history, 0 without) and X2 is the duration of the smoking history; W3 is the correction coefficient for gender (1 for male, 2 for female) and X3 is a constant; W4 is the correction coefficient for age and X4 is the age value.
The image similarity can be calculated according to the formula, and then the image is corrected according to the image similarity to obtain a corrected image.
A specific correction method may be that, when the image similarity is smaller than a preset value, the similar image is corrected according to the image similarity, with the corrected attributes including color, lesion region size, and the like.
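A minimal sketch of the correction function above follows; the constant used for X3 and the rule extrapolating the age coefficient beyond the brackets stated in the text are assumptions where the description leaves them open.

```python
SEX_CONSTANT = 1.0  # X3 in the formula; the text only states that it is a constant

def age_coefficient(age: float) -> float:
    """Follow the stated bracket rule: ages 10-20 -> 2, 20-30 -> 3, and so on."""
    return max(2.0, age // 10 + 1)

def corrected_image_similarity(blood_pressure: float, hypertensive: bool,
                               smoking_years: float, has_smoking_history: bool,
                               is_male: bool, age: float) -> float:
    w1 = 2.0 if hypertensive else 1.0          # blood pressure coefficient
    w2 = 1.0 if has_smoking_history else 0.0   # smoking history coefficient
    w3 = 1.0 if is_male else 2.0               # gender coefficient
    w4 = age_coefficient(age)                  # age coefficient
    return (w1 * blood_pressure + w2 * smoking_years
            + w3 * SEX_CONSTANT + w4 * age)
```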
It should be noted that the neural network model responsible for prediction adopts an Inception-v3 neural network, and two models are trained, one for continuous-value prediction and one for classification prediction. The continuous-value prediction model is used to predict age and blood pressure, and its performance is evaluated based on the mean absolute error; the binary classification model is used to predict gender and smoking status, and its performance is evaluated based on AUC values.
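For illustration, the two prediction models could be set up as below: one Inception-v3 head regressing continuous values (age, blood pressure, evaluated by mean absolute error) and one producing logits for the two binary attributes (gender, smoking status, evaluated by AUC); the head sizes and metric wiring are assumptions.

```python
import torch
from torch import nn
from torchvision import models
from sklearn.metrics import mean_absolute_error, roc_auc_score

def make_inception(out_features: int) -> nn.Module:
    """Pre-trained Inception-v3 with both the main and auxiliary heads resized."""
    net = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)  # expects 299x299 inputs
    net.fc = nn.Linear(net.fc.in_features, out_features)
    net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, out_features)
    return net  # note: in training mode the model returns (logits, aux_logits)

regressor = make_inception(2)    # predicts [age, blood_pressure]
classifier = make_inception(2)   # logits for [gender, smoking_history]

# Evaluation, given ground-truth tensors and eval-mode outputs:
# mae = mean_absolute_error(y_reg.numpy(), regressor(x).detach().numpy())
# auc = roc_auc_score(y_cls.numpy(), torch.sigmoid(classifier(x)).detach().numpy())
```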
Then, interpretable auxiliary diagnosis can be performed on the corrected image using the class activation mapping technique to obtain the lesion region of the image.
Specifically, the feature data extracted from the fundus image can be mapped back onto the image so that the lesion region is highlighted as a heat map.
Class activation heat maps may also be used to coarsely characterize the images.
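A minimal Grad-CAM-style sketch of this class-activation heat-map step is shown below, assuming the EfficientNet-B0 classifier from the training sketch; the choice of layer and the normalisation are illustrative rather than the patent's exact procedure.

```python
import torch
import torch.nn.functional as F

def class_activation_heatmap(model, image_tensor, target_class=None):
    """Return a heat map in [0, 1] over the input image highlighting the lesion region."""
    activations, gradients = [], []
    layer = model.features[-1]   # last convolutional block of EfficientNet-B0
    h1 = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    logits = model(image_tensor.unsqueeze(0))             # (1, num_classes)
    if target_class is None:
        target_class = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()

    act, grad = activations[0], gradients[0]               # (1, C, h, w)
    weights = grad.mean(dim=(2, 3), keepdim=True)           # per-channel importance
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))  # (1, 1, h, w)
    cam = F.interpolate(cam, size=image_tensor.shape[1:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()                           # overlay this on the fundus image
```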
Optionally, if the user is a professional physician, the physician may also judge from his or her own diagnostic experience whether the diagnosis result contains a misdiagnosis. If so, the user inputs the cause of the misdiagnosis, the system records it and makes a correction, and the physician finally confirms the corrected result.
After labeling, the patient's medical record information, the diabetic retinopathy label of the fundus image, and the lesion labels of the fundus image can be combined, and a diagnosis report is output for the doctor to review.
Alternatively, the fundus image input by the user can be labeled with the lesion classification category according to the diagnosis result and synchronized to the labeled data module.
To further improve the accuracy of the neural network labeling, the method may further include:
s14, storing the marked images into a preset image database, and counting the number of images of the preset image database.
And S15, if the numerical value of the image quantity is larger than a preset quantity threshold value, retraining the preset neural network.
Each labeled image is stored in the database. After the number of images in the database grows beyond a certain amount, the model is retrained to improve its performance, and the retrained model is applied to feature extraction for the fundus images to be queried.
Meanwhile, after the model is updated, feature extraction can be performed again based on the new model, and the updated offline feature database can be used for similarity measurement of the fundus images to be queried.
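As a small sketch of the retraining trigger, assuming the labeled images are tracked in a SQLite table named labeled_images (both the table name and the threshold value are assumptions):

```python
import sqlite3

RETRAIN_THRESHOLD = 10_000   # illustrative preset quantity threshold

def should_retrain(db_path: str = "image_db.sqlite") -> bool:
    """Return True once the labeled-image count exceeds the preset threshold."""
    with sqlite3.connect(db_path) as conn:
        (count,) = conn.execute("SELECT COUNT(*) FROM labeled_images").fetchone()
    return count > RETRAIN_THRESHOLD

# if should_retrain():
#     retrain the first neural network and rebuild the offline feature index
```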
It should be noted that the first neural network of the present application is an image processing model for image feature extraction; it can specifically be used to extract image features and to compare them against the image features in the image library.
The second neural network is an information processing model for information classification judgment; it can specifically be used to make category judgments based on image features and information, for example, judging whether the patient has hypertension or a smoking history.
Referring to fig. 2, an operation flowchart of a fundus image labeling method based on a neural network according to an embodiment of the present application is shown.
Specifically, the labeled datasets (including the manually labeled dataset and the auxiliary-labeled dataset) may be prepared in advance; the labeled dataset is augmented, and the neural network is trained with the processed data to obtain a neural network capable of labeling. After the patient's fundus image is acquired, image features and prediction information are extracted using the neural network, similar images are searched using the image features, and the similar images are corrected using the prediction information. After correction, the image can be labeled and the pathological grade determined. Finally, the labeled image and the integrated patient information are sent to the doctor's intelligent terminal or displayed on the system screen for the doctor's reference.
In this embodiment, the fundus image labeling method based on the neural network provided by the embodiment of the application has the following beneficial effects: after the initial fundus image of a diabetic patient is acquired, the neural network is used to extract image features from the initial fundus image, similar images are queried from the online database based on the image features, and lesion labeling is performed on the initial fundus image according to the similar images for doctors to refer to, thereby shortening the time required for diagnosis and improving diagnostic efficiency.
The embodiment of the application also provides a fundus image labeling system based on the neural network, and referring to fig. 3, a schematic structural diagram of the fundus image labeling system based on the neural network is shown.
Wherein, as an example, the fundus image labeling system based on the neural network may comprise:
an extraction module 301, configured to obtain an initial fundus image of a diabetic patient, and extract fundus image features from the initial fundus image by using a preset first neural network;
a searching module 302, configured to search for similar images from a preset online database based on the fundus image features, where the preset online database is composed of a plurality of data-preprocessed fundus images;
and the labeling module 303, configured to perform lesion labeling on the initial fundus image according to the similar image to obtain a labeled image.
Optionally, the labeling module is further configured to:
predicting case information of the diabetic patient corresponding to the initial fundus image by using a preset second neural network, wherein the case information comprises: gender, age, smoking history, and hypertension;
performing result correction on the similar images based on the case information to obtain corrected images;
performing interpretable auxiliary diagnosis on the corrected image using a class activation mapping technique to obtain the lesion region of the image;
and performing activation mapping on the initial fundus image according to the lesion region, and highlighting the lesion region corresponding to the initial fundus image in the form of a heat map to obtain a labeled image.
Optionally, the labeling module is further configured to:
converting the case information into correction coefficients;
calculating the image similarity by adopting the correction coefficient;
and correcting the similar images by utilizing the image similarity to obtain corrected images.
Optionally, the first neural network is an image processing model for image feature extraction;
the second neural network is an information processing model for information classification judgment.
Optionally, the search module is further configured to:
calculating the feature similarity between each fundus image in the preset online database and the fundus image features to obtain a plurality of feature similarities;
and selecting the feature similarity with the largest value from the plurality of feature similarities, and taking the fundus image corresponding to the feature similarity with the largest value as the similar image.
Optionally, the labeled image dataset used for training the preset first neural network includes both auxiliary-annotated and manually annotated image data;
the model training comprises:
sequentially performing data preprocessing, data enhancement, sample resampling, and auxiliary labeling on the image dataset to obtain a training dataset;
and performing model training on the deep neural network by adopting the training data set to obtain a preset first neural network.
Optionally, the system further comprises:
the statistics module is used for storing the labeled images into a preset image database and counting the number of images in the preset image database;
and the retraining module is used for retraining the preset neural network if the number of images is larger than the preset quantity threshold.
It will be clearly understood by those skilled in the art that, for convenience and brevity, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Further, an embodiment of the present application also provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the fundus image labeling method based on the neural network according to the above embodiments when executing the program.
Further, an embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the fundus image labeling method based on the neural network as described in the above embodiment.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the application; such changes and modifications are also intended to fall within the scope of the application.

Claims (7)

1. A fundus image labeling method based on a neural network, the method comprising:
acquiring an initial fundus image of a diabetic patient, and extracting fundus image features from the initial fundus image by using a preset first neural network;
searching for similar images from a preset online database based on the fundus image features, wherein the preset online database is composed of a plurality of data-preprocessed fundus images;
performing lesion labeling on the initial fundus image according to the similar image to obtain a labeled image;
and the performing lesion labeling on the initial fundus image according to the similar image to obtain a labeled image comprises the following steps:
predicting case information of the diabetic patient corresponding to the initial fundus image by using a preset second neural network, wherein the case information comprises: gender, age, smoking history, and hypertension;
performing result correction on the similar images based on the case information to obtain corrected images;
performing interpretable auxiliary diagnosis on the corrected image using a class activation mapping technique to obtain the lesion region of the image;
performing activation mapping on the initial fundus image according to the lesion region, and highlighting the lesion region corresponding to the initial fundus image in the form of a heat map to obtain a labeled image;
the step of correcting the similar images based on the case information to obtain corrected images includes:
converting the case information into correction coefficients;
calculating the image similarity by adopting the correction coefficient;
correcting the similar images by utilizing the image similarity to obtain corrected images;
the searching for similar images from a preset online database based on the fundus image features comprises:
calculating the feature similarity between each fundus image in the preset online database and the fundus image features to obtain a plurality of feature similarities;
and selecting the feature similarity with the largest value from the plurality of feature similarities, and taking the fundus image corresponding to the feature similarity with the largest value as the similar image.
2. The fundus image labeling method based on the neural network according to claim 1, wherein the first neural network is an image processing model for image feature extraction;
the second neural network is an information processing model for information classification judgment.
3. The fundus image labeling method based on the neural network according to claim 1, wherein the labeled image dataset used for training the preset first neural network comprises both auxiliary-labeled and manually labeled image data;
the model training comprises:
sequentially performing data preprocessing, data enhancement, sample resampling, and auxiliary labeling on the image dataset to obtain a training dataset;
and performing model training on the deep neural network by adopting the training data set to obtain a preset first neural network.
4. The fundus image labeling method based on the neural network according to any one of claims 1-3, wherein after the step of labeling the initial fundus image according to the similar image, the method further comprises:
storing the labeled images into a preset image database, and counting the number of images in the preset image database;
and if the number of images is larger than a preset quantity threshold, retraining the preset neural network.
5. A neural network-based fundus image labeling system, the system comprising:
the extraction module is used for acquiring an initial fundus image of a diabetic patient and extracting fundus image characteristics from the initial fundus image by utilizing a preset first neural network;
the searching module is used for searching for similar images from a preset online database based on the fundus image features, wherein the preset online database is composed of a plurality of data-preprocessed fundus images;
the labeling module is used for performing lesion labeling on the initial fundus image according to the similar images to obtain a labeled image;
the labeling module is further configured to:
predicting case information of the diabetic patient corresponding to the initial fundus image by using a preset second neural network, wherein the case information comprises: gender, age, smoking history, and hypertension;
performing result correction on the similar images based on the case information to obtain corrected images;
performing interpretable auxiliary diagnosis on the corrected image using a class activation mapping technique to obtain the lesion region of the image;
performing activation mapping on the initial fundus image according to the lesion region, and highlighting the lesion region corresponding to the initial fundus image in the form of a heat map to obtain a labeled image;
the step of correcting the similar images based on the case information to obtain corrected images includes:
converting the case information into correction coefficients;
calculating the image similarity by adopting the correction coefficient;
correcting the similar images by utilizing the image similarity to obtain corrected images;
the searching for similar images from a preset online database based on the fundus image features comprises:
calculating the feature similarity between each fundus image in the preset online database and the fundus image features to obtain a plurality of feature similarities;
and selecting the feature similarity with the largest value from the plurality of feature similarities, and taking the fundus image corresponding to the feature similarity with the largest value as the similar image.
6. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the neural network-based fundus image labeling method of any of claims 1-4 when the program is executed.
7. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the neural network-based fundus image labeling method according to any one of claims 1-4.
CN202211524350.9A 2022-12-01 2022-12-01 Fundus image labeling method and fundus image labeling system based on neural network Active CN115985472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211524350.9A CN115985472B (en) 2022-12-01 2022-12-01 Fundus image labeling method and fundus image labeling system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211524350.9A CN115985472B (en) 2022-12-01 2022-12-01 Fundus image labeling method and fundus image labeling system based on neural network

Publications (2)

Publication Number Publication Date
CN115985472A CN115985472A (en) 2023-04-18
CN115985472B true CN115985472B (en) 2023-09-22

Family

ID=85965581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211524350.9A Active CN115985472B (en) 2022-12-01 2022-12-01 Fundus image labeling method and fundus image labeling system based on neural network

Country Status (1)

Country Link
CN (1) CN115985472B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506770A (en) * 2017-08-17 2017-12-22 湖州师范学院 Diabetic retinopathy eye-ground photography standard picture generation method
CN108470359A (en) * 2018-02-11 2018-08-31 艾视医疗科技成都有限公司 A kind of diabetic retinal eye fundus image lesion detection method
CN110084252A (en) * 2019-04-29 2019-08-02 南京星程智能科技有限公司 Diabetic retinopathy image labeling method based on deep learning
CN110490236A (en) * 2019-07-29 2019-11-22 武汉工程大学 Automatic image marking method, system, device and medium neural network based
CN110706233A (en) * 2019-09-30 2020-01-17 北京科技大学 Retina fundus image segmentation method and device
CN111291765A (en) * 2018-12-07 2020-06-16 北京京东尚科信息技术有限公司 Method and device for determining similar pictures
CN111753861A (en) * 2019-03-28 2020-10-09 香港纺织及成衣研发中心有限公司 Automatic image annotation system and method for active learning
CN113793301A (en) * 2021-08-19 2021-12-14 首都医科大学附属北京同仁医院 Training method of fundus image analysis model based on dense convolution network model

Also Published As

Publication number Publication date
CN115985472A (en) 2023-04-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant