CN114334098A - Target recognition system and method for fundus multicolor imaging - Google Patents

Target recognition system and method for fundus multicolor imaging

Info

Publication number
CN114334098A
CN114334098A (application CN202111484955.5A)
Authority
CN
China
Prior art keywords
model
training
module
data
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111484955.5A
Other languages
Chinese (zh)
Inventor
郑健
徐立璋
靳雪
刘国
尹荣荣
张倩
洪姣
邓科
章书波
胡汉平
毛昱升
姜兴民
朱松林
刘芷萱
赵先洪
李银谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Aiyanbang Technology Co ltd
Original Assignee
Wuhan Aiyanbang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Aiyanbang Technology Co ltd filed Critical Wuhan Aiyanbang Technology Co ltd
Priority to CN202111484955.5A
Publication of CN114334098A
Legal status: Pending

Abstract

The invention discloses a target recognition system for fundus multicolor imaging, comprising a cloud server and a mobile client, PowerRealInfo. A fundus multicolor imaging device collects the images, which serve as the model image input. The cloud server comprises a model pre-training module and a model prediction module; both modules are connected to the background control end through the network, and the connection between them is controlled by the background. The mobile client comprises a result display processing module and is connected to the model prediction module in the cloud server through the network; the model prediction module is connected to the result display module. The system realizes high-performance fundus-image target recognition with high accuracy and helps medical staff quickly review the state of each lesion.

Description

Target recognition system and method for fundus multicolor imaging
Technical Field
The invention relates to a fundus-image lesion recognition system, in particular to a target recognition system for fundus multicolor imaging. The invention further relates to a method for realizing that target recognition system.
Background
The invention rests on two observations. First, diabetic retinopathy (DR) is currently the fastest-growing cause of blindness worldwide; it is a complication of diabetes, and every diabetic patient risks losing sight to it. Second, deep learning has advanced rapidly in the field of image processing, and medical-image annotation is one of its most widely applied and technically mature areas.
On one hand, diabetic retinopathy often shows no clinical symptoms before irreversible visual impairment develops. The main means of preventing blindness today is regular examination: fundus pictures of patients are collected with dedicated equipment, and the collected images are diagnosed by a professional ophthalmologist. In many places, however, there are not enough doctors for this work, and nearly half of patients lose vision before they can be diagnosed. With the rising incidence of diabetes, low-cost automatic diagnosis of diabetic retinopathy is therefore important. On the other hand, image annotation is increasingly applied in fields such as communications, traffic, and military command, where it plays an important role in information sharing, decision support, and situation display. As the technology improves, the functions of image annotation systems keep growing, but the most basic ones remain displaying an image and identifying pre-defined targets so that the desired image characteristics are presented to the user. Users can upload images from a mobile terminal and receive the results on the same terminal, which is convenient for most users.
Some AI systems based on fundus photography already exist, but the standards for medical AI are still immature, there is no reference experience, and no actual product has appeared. Under these conditions, how to realize high-performance fundus-image target recognition and help medical staff quickly review the state of each lesion is the main difficulty at present.
Compared with the similar patent 2017106632189, "Lesion identification system based on fundus images", the present method compares multiple deep-learning models from several angles and selects the most reasonable one with a self-built result-judgment function; the parameters and the model are adjusted adaptively and automatically, without manual tuning for each situation. Patent 201711439401 uses a fixed decision device whose modules are connected through hardware (or the whole system is installed on one device). That approach achieves fast and stable transmission, but the many hardware connections make it depend heavily on system stability, and maintenance is costlier and more cumbersome than deployment on a cloud server. Because the whole system is carried on hardware, its mobility and extensibility are also constrained, and the equipment load and equipment requirements are high. In contrast, the present invention places all services and systems on a cloud server, so that every operation can be completed from a mobile phone, which is more convenient.
Disclosure of Invention
The invention aims to provide a target recognition system for fundus multicolor imaging that realizes high-performance fundus-image target recognition and helps medical staff quickly review the state of each lesion.
First, existing studies only introduce a few simple deep-learning methods for recognizing objects with relatively obvious features, such as cars, signs, and trees on a road, using networks such as U-Net, Mask R-CNN, and ResNet. Which network model to select, and by what method to evaluate it, is therefore the first problem we face.
Second, once the Mask R-CNN model is chosen, its numerous parameters must be configured; slight parameter differences strongly affect training, and no previous work offers a referenceable parameter configuration or initial training samples. Parameter configuration and initial model samples are therefore the second problem we face.
The invention further provides a method for realizing the target recognition.
To achieve this purpose, the invention adopts the following technical scheme:
a target recognition system for eyeground dazzle color imaging comprises a cloud server, a mobile client PowerRealInfo (installed at the mobile end);
wherein:
a fundus multicolor imaging device is used to collect images, which serve as the model image input; a typical fundus multicolor imaging device is a Heidelberg imaging instrument;
the cloud server comprises a model pre-training module (controlled by the background and installed on the cloud server) and a model prediction module (installed on the cloud server);
the model pre-training module and the model prediction module in the server are each connected to the background control end through the network, and the connection between the two modules is controlled by the background;
the mobile client comprises a result display processing module (installed on the mobile client);
the mobile client is connected with a model prediction module in the cloud server through a network;
the mobile client further comprises a doctor module for doctors' data-processing operations, including doctor account login, creating patient files, storing doctor-patient communication records, uploading patients' fundus multicolor pictures, and processing medical-record files;
the doctor's operating steps on the mobile client include:
the doctor logs in to an account, checks the information of current patients, and communicates with patients through the client;
for a new patient, the doctor first establishes a patient file and enters and stores the patient's identity information; fundus multicolor photography is performed on the patient to obtain a fundus multicolor picture, which is downloaded manually and transmitted into the system; after photography, the doctor and the patient can view it through the mobile client;
the doctor connects to the cloud server through the mobile client, processes the captured or uploaded fundus multicolor picture with the deep-learning button, and receives the deep-learning prediction result;
the model prediction module is connected with the result display module;
A target recognition method for fundus multicolor imaging comprises the following steps:
1. The background controls the model pre-training module in the cloud server to run a pre-training model test.
1a. A processed pre-training sample is uploaded from the background, and training is run with the configured Mask R-CNN parameter configuration; the trained model is then tested with the test module in the background.
The samples are images collected by the fundus multicolor imaging device and serve as model image input; the input image size is 768 × 868, with three channels.
1b. Each fundus multicolor picture is assigned to the dataset class corresponding to its lesion type, one class per picture.
The dataset is divided into nine major categories of lesions: 0: normal; 1: pathological myopia; 2: dry AMD; 3: wet AMD; 4: retinal artery occlusion; 5: branch retinal vein occlusion; 6: epiretinal membrane; 7: central retinal vein occlusion; 8: diabetic retinopathy.
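As a minimal illustration of this class scheme, the mapping below encodes the nine categories; the identifier names are our own and not prescribed by the patent.

```python
# Hypothetical encoding of the nine lesion classes listed above.
# The class names and dict layout are illustrative, not from the patent.
LESION_CLASSES = {
    0: "normal",
    1: "pathological_myopia",
    2: "dry_amd",
    3: "wet_amd",
    4: "retinal_artery_occlusion",
    5: "branch_retinal_vein_occlusion",
    6: "epiretinal_membrane",
    7: "central_retinal_vein_occlusion",
    8: "diabetic_retinopathy",
}

NUM_LESION_CLASSES = len(LESION_CLASSES)  # 9
```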
1c. Pre-training: the dataset is enlarged by mirror-flipping the images and adding random noise, increasing the training set.
When the collected fundus multicolor images are added to the dataset, lesion image patches from under-represented lesion types are cropped out separately and placed into the data for that lesion class; the patches are cut to 96 × 96. Before finally entering the neural-network training input, all images are resized to a fixed 224 × 224.
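The sketch below illustrates these augmentation steps under stated assumptions: the noise scale and interpolation mode are our choices, and the lesion-centre coordinates (cx, cy) are presumed to come from the annotations.

```python
import numpy as np
from PIL import Image

def mirror_and_noise(img: Image.Image, sigma: float = 8.0) -> list:
    """Mirror-flip and add Gaussian noise (sigma is an assumed value)."""
    flipped = img.transpose(Image.FLIP_LEFT_RIGHT)
    arr = np.asarray(img, dtype=np.float32)
    noisy = np.clip(arr + np.random.normal(0.0, sigma, arr.shape), 0, 255)
    return [flipped, Image.fromarray(noisy.astype(np.uint8))]

def crop_lesion_patch(img: Image.Image, cx: int, cy: int) -> Image.Image:
    """Cut a 96 x 96 patch centred on a lesion; (cx, cy) comes from the annotation."""
    return img.crop((cx - 48, cy - 48, cx + 48, cy + 48))

def to_network_input(img: Image.Image) -> Image.Image:
    """Resize every image to the fixed 224 x 224 network input size."""
    return img.resize((224, 224), Image.BILINEAR)
```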
The dataset images comprise original images captured by multicolor imaging and lesion-area images cropped from them; the two kinds are mixed together and placed into the training dataset.
The dataset includes initial data, namely existing pictures provided by the hospital;
the initial data of the current dataset were provided by the People's Hospital of Wuhan University and number about two thousand images.
Because conventional fundus multicolor images have many pixels, the original images and the lesion-area patches are mixed in the training set to achieve a better training effect. This increases the number of training images; more importantly, since usually only a small part of an image is truly lesioned while the rest is normal, increasing the number of lesion-area images improves the model's robustness to lesion areas and strengthens the performance of the neural-network model.
2. After the pre-trained model passes the test, it is loaded into the model prediction module under background control.
The pre-trained model is a network generated by training a complete network model on the ImageNet or COCO dataset. Its advantage is that it yields well-fitted initial network parameters, shortens the later training on fundus multicolor pictures, and greatly improves the accuracy of that training. After the pre-trained model is loaded, training continues on the fundus multicolor pictures fed into the neural network, producing the final classification and prediction network model.
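The patent does not name a framework; as one possible realization, the sketch below loads COCO-pretrained Mask R-CNN weights from torchvision and swaps the prediction heads for the nine lesion classes before continued training.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 1 + 9  # background + the nine lesion categories above

# Start from COCO-pretrained Mask R-CNN weights, as the text suggests.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head to match our class count.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)

# Replace the mask head likewise (256 hidden channels is the torchvision default).
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, NUM_CLASSES)

# model is now ready for continued training on the fundus multicolor pictures.
```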
3. After the model is loaded, the user connects to the cloud server with the app on the mobile terminal, uploads the captured fundus picture to the cloud server, and clicks the AI-judgment button to have the model prediction module make the judgment.
4. The server receives the Get command sent by the client, sends the current client's data to a buffer, and stores the obtained data in the server model storage area; the model then judges the uploaded data, marks leakage points, and gives the possible lesion degree and probability; the result is finally sent to the mobile terminal. When viewing the AI diagnosis, the lesion area is marked with a box on the display screen, together with the possible lesion type and the confidence given by the neural network.
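One way to package the output of step 4 for display is sketched below; the field names, the score threshold, and the trained model object are assumptions for illustration.

```python
import torch

def predict_lesions(model, image_tensor, score_threshold=0.5):
    """Run the prediction module on one uploaded image and keep confident
    detections; returns box, lesion type, and confidence for the client."""
    model.eval()
    with torch.no_grad():
        output = model([image_tensor])[0]  # torchvision detection output dict
    results = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if score >= score_threshold:
            results.append({
                "box": [round(v, 1) for v in box.tolist()],  # lesion area to draw
                "lesion_type": int(label),                   # index into the class list
                "confidence": float(score),                  # network confidence
            })
    return results
```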
The detailed control process of the pre-training model and the model prediction module is as follows:
1. The background connects to the server through a VPN, then uploads the processed pre-training data with Jupyter into the server pre-training module;
2. The pre-training waits for an operation; one of the following three operations is selected:
(1) pre-training: load the pre-training data and training parameters, run model training, and store the model in the model storage area once training finishes;
(2) training-model test: load a trained pre-trained model, select the test mode in the pre-training module with the preset test data, and send the test result to the background after the test finishes;
(3) if the test result is normal, load the model into the model prediction module; if it is incorrect, go back to step (1), adjust the training parameters, and train again (a sketch of this train-test-deploy loop follows the list).
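Under stated assumptions (every helper function here is a hypothetical placeholder, and min_accuracy is an assumed parameter), the three operations reduce to the following control loop:

```python
def pretraining_cycle(params: dict, max_rounds: int = 5):
    """Operations (1)-(3): train, test against the preset data, deploy on
    success or retune and retry. All helpers are hypothetical placeholders."""
    for _ in range(max_rounds):
        model = train_model(params)             # (1) train, save to model store
        report = run_preset_tests(model)        # (2) test with the preset data
        if report["accuracy"] >= params["min_accuracy"]:
            load_into_prediction_module(model)  # (3) deploy to prediction module
            return model
        params = adjust_training_params(params, report)  # retune and retry
    raise RuntimeError("model did not pass the test within max_rounds")
```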
3. The mobile terminal connects to the server with the app, selects the captured fundus picture, chooses AI judgment, and the app uploads the picture to the server model prediction module;
4. Operation of the model prediction module: load the trained pre-trained model, take the picture uploaded by the app, judge it with the model, and send the judgment result back to the mobile terminal through the app.
Compared with the prior art, the invention has the following beneficial effects:
The computation of the system relies on the cloud server. First, this greatly shortens the time needed for model training. Second, storing the model on the server reduces the load on the mobile terminal and makes it easy for the background to adjust the model parameters. Furthermore, the invention adopts an adaptive model-adjustment function, so no manual, guess-based parameter tuning for a particular training set is required. Meanwhile, a doctor-patient management platform built as a WeChat mini-program exploits WeChat's large user base: patients obtain a relatively accurate disease judgment and referral suggestion in the shortest time, and doctors receive auxiliary diagnosis for locating and treating the lesions appearing in the slit-lamp picture more accurately. Based on a WeChat mini-program, the system is convenient and fast, and brings diagnosis and treatment to patients in remote areas, community hospitals, and other places without being limited by network and hardware conditions.
The deep-learning model of the invention provides diagnosis and treatment for multiple lesions at multiple grades, with high accuracy, leading most comparable platforms at present.
Drawings
FIG. 1 is a block diagram of the target recognition system for fundus multicolor imaging;
FIG. 2 is a flow chart of the target recognition method for fundus multicolor imaging;
FIG. 3 is a flow chart of the control process of the pre-training model and the model prediction module.
Detailed Description
The following describes embodiments of the invention in detail with reference to the accompanying drawings.
As shown in FIG. 1, the fundus multicolor image recognition system of the invention includes a network service cloud, a mobile client, a mobile network interface, a background control module, a data transmission module, a model pre-training module, and a model prediction module. The mobile network interface connects the network service cloud, the mobile client, and the background control module; the background control module is connected to the model pre-training module; and the mobile client is connected to the data transmission module and switches the model prediction module on and off.
The model pre-training module and the model prediction module are the main modules and the core of the invention; they reside in the network service cloud.
As shown in FIG. 2, the main functions of the model pre-training module are model pre-training, model testing, model use, and adaptive model adjustment. The model pre-training module and the model prediction module are installed in the network service cloud and controlled jointly with the mobile client. The model is pre-trained with annotated images: under an instruction from the background controller, the data transmission module transfers the annotated data/images from the mobile background controller to the to-be-tested data storage area of the model pre-training module and sets them as model pre-training data (the annotated data are randomly split into a training set, a test set, and a validation set, and the storage address is set in the data-reading function of the model pre-training module). After the training parameters are set, the model is pre-trained. When pre-training finishes, every aspect of the pre-trained model's performance is tested: accuracy (false positives and true negatives), complexity, reliability, and speed. The performance-test data are judged either manually or by a performance-judgment function; if the performance is unqualified, the training parameters are adjusted adaptively according to the test result. Finally, the best pre-trained model is selected from all pre-trained models and stored in the model storage area of the network-service-cloud model prediction module, where it is set as the model used for judgment (the model storage address is set in the model-reading function of the model prediction module).
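A minimal sketch of the random three-way split described above; the 70/15/15 ratio and the fixed seed are assumed values, since the patent does not fix a ratio.

```python
import random

def split_annotated_data(items, train_frac=0.7, test_frac=0.15, seed=0):
    """Randomly divide annotated images into training / test / validation sets.
    The 70/15/15 ratio and fixed seed are assumptions made for reproducibility."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    return (shuffled[:n_train],                  # training set
            shuffled[n_train:n_train + n_test],  # test set
            shuffled[n_train + n_test:])         # validation set
```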
As shown in FIGS. 2 and 3, after receiving an instruction from the mobile client, the model prediction module starts the data transmission module to fetch the data/picture to be judged from the mobile client and caches it in the data storage area of the model prediction module, setting it as the data to be judged (the storage address is set in the data-reading function of the model prediction module). The model judgment function then performs target recognition and labeling on the data/image; once the judgment is complete, the result is sent back to the mobile client through the data transmission module.
The mobile client is an external interface program responsible for providing the data/picture transmission interface and the model judgment switch.
The background controller is responsible for allocating each interface, module and data storage in the cloud server, and can modify the pre-training parameters to control the execution of the model pre-training module,
As shown in FIGS. 1, 2 and 3, the specific control process between the client and the server is as follows:
1. Start the server application (a generic server application) and connect the system's real-time database; start the client application (a generic client application) and try to connect to the server host;
2. The server creates a thread that listens on a port and waits for a client's network connection, while the server's main thread reads the real-time database at the delay interval set by the user;
3. If the server connects to the client successfully, the server creates a thread, waits to receive commands, and sends a Go command to the client; if the client fails to connect to the server, it waits for the user-set delay;
4. The server receives the Go command sent by the client and uses the model prediction module to import the pre-trained model stored in model/patch_x;
5. The server sends the current client's data to the buffer, stores the obtained data in the dataset directory under the names expire/1 and expire/year-month-day/hour-minute, and executes the model prediction module;
6. After the judgment finishes, the server sends the judgment result to the data end, then sends an Exit command and disconnects, waiting for the user-set delay; the client receives the Exit command, exits the connection thread, and closes the connection (a sketch of this exchange follows the list).
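Under stated assumptions, steps 2-6 reduce to the threaded server sketched below. Only the Go and Exit command names and the model/patch_x path come from the text; the port number, the message framing, and the helpers import_pretrained, receive_upload, store_in_dataset_dir, and run_prediction are hypothetical placeholders.

```python
import socket
import threading

def handle_client(conn: socket.socket) -> None:
    """One thread per accepted connection, following steps 3-6 above."""
    conn.sendall(b"Go")                                # step 3: greet the client
    if conn.recv(16).strip() == b"Go":                 # step 4: client confirms
        model = import_pretrained("model/patch_x")     # step 4: hypothetical loader
        payload = receive_upload(conn)                 # step 5: buffer client data
        store_in_dataset_dir(payload)                  # step 5: save as expire/...
        result = run_prediction(model, payload)        # step 5: model judgment
        conn.sendall(result)                           # step 6: return the result
    conn.sendall(b"Exit")                              # step 6: signal disconnect
    conn.close()

def serve(port: int = 9000) -> None:
    """Listening loop from step 2; the port number is an assumed value."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```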
The detailed control process between the background and the server is as follows:
1. Start the server application (a generic server application) and connect the system's real-time database; start the client application (a generic client application) and try to connect to the server host;
2. If the server connects to the client successfully, the server creates a thread, waits to receive commands, and sends a Go command to the client; if the client fails to connect to the server, it waits for the user-set delay;
3. Wait for the mode selection; one of the following two modes is chosen:
(1) Model pre-training mode:
The background sends the data used for pre-training to the server buffer; the server stores the obtained data in the model_train directory under the names train/1, test/1, train/year-month-day/hour-minute, and test/year-month-day/hour-minute. The background then sends the pre-training parameters to the server buffer; the server stores them in the config directory as config/1 and config/year-month-day/hour-minute, and executes the model pre-training module. The training result is stored as pre_model/year-month-day/hour-minute.
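The timestamped storage names above can be generated as in the sketch below; the strftime layout is our reading of "year-month-day/hour-minute", and the example output is illustrative.

```python
from datetime import datetime
from pathlib import Path

def storage_names(kind: str, root: str = "model_train") -> list:
    """Return the fixed slot and the timestamped slot for one upload,
    e.g. model_train/train/1 and model_train/train/2021-12-07/15-30."""
    now = datetime.now()
    base = Path(root) / kind
    return [base / "1",
            base / now.strftime("%Y-%m-%d") / now.strftime("%H-%M")]
```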
(2) Model test mode:
The background selects an existing model (in the pre_model folder) or uploads one to pre_model itself, then selects the model test mode; the server sends the test result to the background, and the background adaptively adjusts the model pre-training parameters according to the test result before pre-training again.
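The patent leaves the adaptive adjustment policy unspecified; the sketch below shows one illustrative rule, in which the accuracy target and the multipliers are assumptions.

```python
def adapt_pretraining_params(params: dict, test_accuracy: float,
                             target: float = 0.90) -> dict:
    """Illustrative adaptive rule: if the test accuracy misses the target,
    halve the learning rate and train 50% longer before retrying.
    The target and both multipliers are assumptions, not from the patent."""
    if test_accuracy < target:
        tuned = dict(params)
        tuned["learning_rate"] = params["learning_rate"] * 0.5
        tuned["epochs"] = int(params["epochs"] * 1.5)
        return tuned
    return params
```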

Claims (10)

1. A target recognition system for fundus multicolor imaging, comprising a cloud server and a mobile client, PowerRealInfo;
wherein:
a fundus multicolor imaging device is used to collect images, which serve as the model image input;
the cloud server comprises a model pre-training module and a model prediction module;
the model pre-training module and the model prediction module in the server are each connected to the background control end through the network, and the connection between the two modules is controlled by the background;
the mobile client comprises a result display processing module;
the mobile client is connected with a model prediction module in the cloud server through a network;
the model prediction module is connected with the result display module.
2. The target recognition system for fundus multicolor imaging according to claim 1, wherein:
the model pre-training module is controlled by the background and installed on the cloud server;
the model prediction module is installed on the cloud server;
the result display processing module is installed on the mobile client.
3. The target recognition system for fundus multicolor imaging according to claim 1, wherein:
the mobile client comprises a doctor module for doctors' data-processing operations, including doctor account login, creating patient files, storing doctor-patient communication records, uploading patients' fundus multicolor pictures, and processing medical-record files.
4. The target recognition system for fundus multicolor imaging according to claim 3, wherein:
the doctor's operating steps on the mobile client include:
the doctor logs in to an account, checks the information of current patients, and communicates with patients through the client;
for a new patient, the doctor first establishes a patient file and enters and stores the patient's identity information; fundus multicolor photography is performed on the patient to obtain a fundus multicolor picture, which is downloaded manually and transmitted into the system; after photography, the doctor and the patient can view it through the mobile client;
the doctor connects to the cloud server through the mobile client, processes the captured or uploaded fundus multicolor pictures with the deep-learning button, and receives the deep-learning prediction result.
5. A target recognition method for fundus multicolor imaging, comprising the following steps:
S1, the background controls the model pre-training module in the cloud server to run a pre-training model test; wherein:
S1a, a processed pre-training sample is uploaded from the background, training is run with the configured Mask R-CNN parameter configuration, and the trained model is then tested with the test module in the background;
the samples are images collected by the fundus multicolor imaging device and serve as model image input; the input image size is 768 × 868, with three channels;
S1b, each fundus multicolor picture is assigned to the dataset class corresponding to its lesion type, one class per picture;
S1c, pre-training: the dataset is enlarged by mirror-flipping the images and adding random noise, increasing the training set;
S2, after the pre-trained model passes the test, it is loaded into the model prediction module under background control;
S3, after the model is loaded, the user connects to the cloud server with the app on the mobile terminal, uploads the captured fundus picture to the cloud server, and clicks the AI-judgment button to have the model prediction module make the judgment;
S4, the server receives the Get command sent by the client and sends the current client's data to a buffer; the server stores the obtained data in the server model storage area, then judges the uploaded data with the model prediction module, marks leakage points, and gives the possible lesion degree and probability; the result is finally sent to the mobile terminal; when viewing the AI diagnosis, the lesion area is marked with a box on the display screen, together with the possible lesion type and the confidence given by the neural network.
6. The method according to claim 5, wherein:
the dataset is divided into nine major categories of lesions: 0: normal; 1: pathological myopia; 2: dry AMD; 3: wet AMD; 4: retinal artery occlusion; 5: branch retinal vein occlusion; 6: epiretinal membrane; 7: central retinal vein occlusion; 8: diabetic retinopathy.
7. The method according to claim 5, wherein:
enlarging the dataset comprises: when the collected fundus multicolor images are added to the dataset, lesion image patches from under-represented lesion types are cropped out separately and placed into the data for that lesion class; the patches are cut to 96 × 96; before finally entering the neural-network training input, all images are resized to a fixed 224 × 224.
8. The method according to claim 7, wherein:
the dataset images comprise original images captured by multicolor imaging and lesion-area images cropped from them, which are mixed together and placed into the training dataset;
the dataset used includes initial data, namely existing pictures provided by the hospital.
9. The method according to claim 5, wherein:
the pre-trained model is a network generated by training a complete network model on the ImageNet or COCO dataset.
10. The method according to claim 5, wherein:
the specific control process of the pre-training model and the model prediction module comprises the following steps:
S1, the background connects to the server through a VPN, then uploads the processed pre-training data with Jupyter into the server pre-training module;
S2, the pre-training waits for an operation; one of the following three operations is selected:
S2a, pre-training: load the pre-training data and training parameters, run model training, and store the model in the model storage area once training finishes;
S2b, training-model test: load a trained pre-trained model, select the test mode in the pre-training module with the preset test data, and send the test result to the background after the test finishes;
S2c, if the test result is normal, load the model into the model prediction module; if it is incorrect, return to the first step, adjust the training parameters, and train again;
S3, the mobile terminal connects to the server with the app, selects the captured fundus picture, chooses AI judgment, and the app uploads the picture to the server model prediction module;
S4, model prediction module: load the trained pre-trained model, take the picture uploaded by the app, judge it with the model, and send the judgment result back to the mobile terminal through the app.
CN202111484955.5A 2021-12-07 2021-12-07 Target recognition system and method for fundus multicolor imaging Pending CN114334098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111484955.5A CN114334098A (en) 2021-12-07 2021-12-07 Target recognition system and method for fundus multicolor imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111484955.5A CN114334098A (en) 2021-12-07 2021-12-07 Target recognition system and method for fundus multicolor imaging

Publications (1)

Publication Number Publication Date
CN114334098A 2022-04-12

Family

ID=81049423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111484955.5A Pending CN114334098A (en) 2021-12-07 2021-12-07 Target recognition system and method for fundus multicolor imaging

Country Status (1)

Country Link
CN (1) CN114334098A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination