CN114841952A - Cloud-edge cooperative detection system and detection method for retinopathy of prematurity - Google Patents

Cloud-edge cooperative detection system and detection method for retinopathy of prematurity

Info

Publication number
CN114841952A
Authority
CN
China
Prior art keywords
detection
cloud
model
edge
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210460497.XA
Other languages
Chinese (zh)
Other versions
CN114841952B (en)
Inventor
万加富
侯宁
聂川
汪翠翠
丁晓璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210460497.XA priority Critical patent/CN114841952B/en
Publication of CN114841952A publication Critical patent/CN114841952A/en
Application granted granted Critical
Publication of CN114841952B publication Critical patent/CN114841952B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cloud-edge cooperative retinopathy of prematurity detection system, comprising a premature infant retina image acquisition device, a detection application end and a cloud server. The premature infant retina image acquisition device acquires retina images of premature infants from multiple visual angles and sends them to the detection application end; the retina images constitute a data set. The detection application end is installed on an edge device and performs preprocessing operations on the premature infant retina images through the edge device, the preprocessing operations comprising geometric transformation and/or image enhancement; the detection application end comprises an image detection module, a medical science popularization module and an information communication module. The cloud server comprises a user information database and a lesion detection model, and content in the user information database can be transmitted to the detection application end through data transmission and rendered on the detection application end page. Corresponding methods, electronic devices and computer-readable storage media are also disclosed.

Description

Cloud-edge cooperative detection system and detection method for retinopathy of prematurity
Technical Field
The invention belongs to the technical fields of computers, intelligent medical treatment and image processing, and particularly relates to a cloud-edge cooperative retinopathy of prematurity detection system and detection method.
Background
Retinopathy of prematurity (ROP) is one of the most important causes of blindness and impaired vision in children; timely screening, early identification and early intervention can effectively prevent blindness caused by ROP. The revised guidelines on oxygen therapy for premature infants and the prevention and treatment of retinopathy of prematurity issued in China in 2016 require that premature infants with a gestational age at birth of 34 weeks or less, or a birth weight below 2000 g, must be screened for ROP. At present, ROP screening is mainly performed by experienced ophthalmologists using binocular indirect ophthalmoscopy, so both screening equipment and experienced ROP screening ophthalmologists are indispensable. However, screening resources are unevenly distributed, so premature infants at the grassroots level or in remote areas cannot be screened in time, which can lead to aggravation of the disease and even blindness. For example, research on the prevention and treatment system for retinopathy of prematurity in China has found that ROP screening rates differ greatly between regions, and that personnel, equipment and technology constrain the development of ROP screening.
Artificial intelligence has begun to be applied in the medical field, but its application to retinopathy of prematurity screening still has shortcomings, such as single data sources, insufficient self-adaptation and self-optimization capability, screening accuracy limited by equipment, and insufficient protection of patient privacy.
Disclosure of Invention
The invention aims to provide a cloud-edge cooperative retinopathy of prematurity detection system and detection method, which construct an edge-cloud cooperative architecture for an ROP screening system for premature infants and establish a self-adaptive, self-optimizing management mechanism for the system, so as to improve the universality of the intelligent ROP detection model and give it self-adaptation and self-optimization capabilities.
In a first aspect, the invention provides a cloud-edge cooperative retinopathy of prematurity detection system, comprising a premature infant retina image acquisition device, a detection application end and a cloud server, wherein:
the premature infant retina image acquisition device is used for acquiring retina images of premature infants from multiple visual angles and sending them to the detection application end; the retina images of premature infants acquired from multiple visual angles over a period of time form a data set;
the detection application end is installed on an edge device and performs preprocessing operations on the premature infant retina images through the edge device, the preprocessing operations comprising geometric transformation and/or image enhancement; the detection application end comprises an image detection module, a medical science popularization module and an information communication module;
the cloud server comprises a user information database and a lesion detection model, that is, the cloud server deploys the user information database and the trained lesion detection model, and content in the user information database can be transmitted to the detection application end through data transmission and rendered on the detection application end page.
Preferably, the image detection module includes:
the edge-side lesion detection model is used for locally executing the detection task on the retina image;
a sending unit, configured to transmit data to the cloud server, where the data includes data of the retina image and an intermediate result in a detection process;
and the receiving unit is used for receiving the detection result of the retina image transmitted back from the cloud.
Preferably, the image detection module provides three main detection modes, namely an edge detection mode, a cooperative detection mode and a cloud detection mode, together with an auxiliary retrieval mode, wherein one of the three main detection modes is selected according to the requirements of the user and the retina image is input into a trained neural network model for detection;
in the edge detection mode, the retina image is processed only on the edge device and is not uploaded to the cloud server;
in the cooperative detection mode, the trained neural network model is divided into a first part and a second part according to the current network condition and the current task amount; the first part of the model processes the retina image on the edge device to obtain an intermediate result, the intermediate result is uploaded to the cloud server to complete the detection, and the detection result is returned; the split position of the lesion detection model is determined by calculating the task execution delay;
in the cloud detection mode, the retina image is uploaded to the cloud server for processing; user information is removed from the local data, which is then encrypted and uploaded to the cloud, the lesion detection model in the cloud detects the image, and the cloud model is updated in time;
in the auxiliary retrieval mode, for retina images that are difficult to detect, an expert logs in to a special account of the detection application end, manually examines the retina image, and sends the detection result to the user corresponding to the retina image.
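By way of illustration only, the following Python sketch shows one way an application could dispatch among the three main detection modes according to user requirements; the mode names, thresholds and the shape of the requirement inputs are assumptions of this sketch and are not specified in the disclosure.

```python
# Illustrative sketch only: the mode names, thresholds and requirement inputs
# below are assumptions of this example and are not specified in the disclosure.
from enum import Enum, auto

class DetectionMode(Enum):
    EDGE = auto()         # process the image only on the edge device
    COOPERATIVE = auto()  # split the model between edge device and cloud
    CLOUD = auto()        # upload anonymised data to the cloud server

def choose_mode(privacy_required: bool, latency_budget_s: float,
                edge_load: float) -> DetectionMode:
    """Pick one of the three main detection modes from user requirements."""
    if privacy_required:
        return DetectionMode.EDGE         # images never leave the device
    if edge_load > 0.8:
        return DetectionMode.CLOUD        # edge device is saturated
    if latency_budget_s < 1.0:
        return DetectionMode.COOPERATIVE  # share the work to meet the deadline
    return DetectionMode.EDGE

print(choose_mode(privacy_required=False, latency_budget_s=0.5, edge_load=0.3))
```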
Preferably, the system further comprises a case report output module, which forms an auxiliary detection result according to the retinopathy analysis result of the retinopathy of prematurity analysis module and forms a detection report after the doctor confirms it, modifies it and/or enters a medical order.
Preferably, the information communication module is connected with the cloud server; the user publishes content through the information communication module, and the content is stored in the user information database on the cloud server, realizing information exchange and sharing among different users.
Preferably, the lesion detection model is obtained by processing and analyzing a data set, and the establishment of the lesion detection model comprises:
dividing the data set into a training set, a verification set and a test set;
inputting the images in the training set into a neural network model, and adjusting a first parameter of the neural network model;
inputting the verification set into a neural network model, and adjusting second parameters of the neural network model;
inputting the test set into a neural network model, and evaluating the neural network model to finally obtain the lesion detection model aiming at the retina of the premature infant.
The performance of the neural network model is evaluated with four indexes, namely accuracy (Accuracy), precision (Precision), recall (Recall) and the comprehensive index F1-Measure, specifically defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-Measure = 2 × Precision × Recall / (Precision + Recall)

where TP (True Positive) is the number of positive samples predicted as positive; TN (True Negative) is the number of negative samples predicted as negative; FP (False Positive) is the number of negative samples predicted as positive (false alarms); and FN (False Negative) is the number of positive samples predicted as negative (missed detections).

A neural network model with good performance evaluation indexes is used as the retinopathy of prematurity detection model.
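For reference, a minimal Python sketch of the four evaluation indexes computed from the confusion-matrix counts defined above; the guard clauses for empty denominators are an implementation choice of the sketch, not part of the definitions.

```python
def evaluate(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the four indexes defined above from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

print(evaluate(tp=90, tn=80, fp=10, fn=20))
```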
The second aspect of the invention provides a cloud-edge cooperative retinopathy of prematurity detection method, which comprises the following steps:
constructing a model: taking the retina images of premature infants acquired by the image acquisition device as a data set; preprocessing the data in the data set; performing data amplification on the preprocessed data to alleviate data imbalance; inputting the amplified data into a neural network model for training to obtain a lesion detection model for the retina of premature infants, and deploying the lesion detection model to the edge device and the cloud server;
lesion detection: taking detection delay, energy consumption and the user's privacy requirements into consideration, a suitable detection mode is selected according to the user's requirements, and for edge-cloud cooperative detection a suitable model split point is selected; the retina image of the premature infant to be detected is input into the trained lesion detection model for detection, and cases that are difficult to detect are handled manually by an expert at the detection application end; the method comprises the following steps:
selecting the optimal model split point so as to maximize the advantage of cooperative detection, wherein the selection of the model split point in this detection mode is based on the transmission delay and the energy consumption of the whole system; when the task is executed, the processing delay is

U·p/f_c

where U denotes the amount of task data, p denotes the number of CPU cycles required to process each bit of the task, and f_c denotes the computing power of the edge device;
the data transmission rate r_LU at which the edge device sends the task to the cloud and the data transmission rate r_LD at which the cloud returns the result to the edge device take the form

B·log₂(1 + d^(-r)/σ²)

where B denotes the bandwidth between the edge device and the cloud server, d^(-r) denotes the channel coefficient between the edge device and the cloud server, d denotes the distance between the edge device and the cloud server, r denotes the fading factor of the channel, and σ² denotes the noise power of the channel;
the delay t_U for the edge device to upload the task to the cloud is the uploaded data volume divided by r_LU, the delay t_D for the cloud to return the calculation result to the edge is the returned data volume divided by r_LD, and the processing time t_E of the task is the required CPU cycles divided by the available computing power;
the total time for task completion is

t = t_U + t_D + t_E;

the total energy consumption generated by the whole system for processing the user task is

E = E_L + E_U

where E_L is the energy consumption generated by the CPU of the edge device and E_U is the energy consumption generated when the edge device offloads the task to the cloud;
the optimization objective is set to minimize the weighted sum of the task completion time and the energy consumption, giving the following optimization problem, and a suitable model split node is selected according to its solution:

min ω·t + (1 - ω)·E

subject to t ≤ t_max, E ≤ E_max, 0 ≤ ω ≤ 1.
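The weighted objective can be evaluated directly once the delay and energy components of a candidate split point are known. The Python sketch below is an illustration of the stated objective and constraints, not the patented procedure itself; the transmit-power term in the rate expression and all numbers are assumptions of the sketch.

```python
import math

def shannon_rate(bandwidth_hz, distance_m, fading_r, noise_power, tx_power=1.0):
    # Link rate of the form B*log2(1 + P*d^(-r)/sigma^2); the transmit power P
    # is an assumed symbol that is not listed in the text above.
    return bandwidth_hz * math.log2(1.0 + tx_power * distance_m ** (-fading_r) / noise_power)

def weighted_objective(t_u, t_d, t_e, e_l, e_u, omega, t_max, e_max):
    # omega*t + (1-omega)*E with t = t_U + t_D + t_E and E = E_L + E_U;
    # returns infinity when a constraint (t <= t_max, E <= E_max) is violated.
    t, e = t_u + t_d + t_e, e_l + e_u
    if not (0.0 <= omega <= 1.0) or t > t_max or e > e_max:
        return float("inf")
    return omega * t + (1.0 - omega) * e

# Placeholder numbers for one candidate split point.
r_lu = shannon_rate(bandwidth_hz=10e6, distance_m=100.0, fading_r=2.0, noise_power=1e-9)
t_u = 6e6 / r_lu  # upload delay = uploaded data volume / uplink rate
print(weighted_objective(t_u, t_d=0.05, t_e=0.30, e_l=1.2, e_u=0.6,
                         omega=0.6, t_max=2.0, e_max=5.0))
```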
As a preferred embodiment, the method further comprises:
model optimization: the cloud server receives local data, with the user privacy information removed, from multiple edge devices and continuously optimizes the lesion detection model based on these data; the cloud analyses and learns all characteristics of the retinopathy data and constructs a lesion detection model for the populations of all regions, after which the edge devices perform self-adaptive, self-optimizing adjustment based on this general model.
A third aspect of the invention provides an electronic device comprising a processor and a memory, the memory storing a plurality of instructions, the processor being configured to read the instructions and perform the method according to the second aspect.
A fourth aspect of the invention provides a computer-readable storage medium storing a plurality of instructions, which can be read by a processor to perform the method according to the second aspect.
The system, method, electronic device and computer-readable storage medium provided by the invention have the following beneficial technical effects:
the cloud-edge cooperative retinopathy of prematurity detection system and method construct an edge-cloud cooperative architecture for the diagnosis and treatment of retinopathy of prematurity and improve the self-adaptive capability of the system; the user can communicate with the screening personnel through the lesion detection application end; and personal user information is removed from the data uploaded to the cloud, protecting user privacy to the greatest extent.
Drawings
FIG. 1 is a schematic block diagram of a cloud-edge cooperative retinopathy of prematurity detection system according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart of an implementation of the cloud-edge cooperative retinopathy of prematurity detection system according to a preferred embodiment of the present invention;
FIG. 3 is a block diagram of the image detection module of the cloud-edge cooperative retinopathy of prematurity detection system according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of cooperative detection in the cloud-edge cooperative retinopathy of prematurity detection system according to a preferred embodiment of the present invention;
FIG. 5 is a TensorFlow model conversion flow diagram of the cloud-edge cooperative retinopathy of prematurity detection system according to a preferred embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of an electronic device provided by the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example one
A cloud-edge cooperative retinopathy of prematurity detection system comprises a premature infant retina image acquisition device, a detection application end, a cloud server and a case report output module, wherein:
the premature infant retina image acquisition device is used for acquiring retina images of premature infants from multiple visual angles and sending them to the detection application end; the retina images of premature infants acquired from multiple visual angles over a period of time form a data set;
the detection application end is installed on an edge device and performs preprocessing operations on the premature infant retina images through the edge device, the preprocessing operations comprising geometric transformation and/or image enhancement; the detection application end comprises an image detection module, a medical science popularization module and an information communication module;
the cloud server comprises a user information database and a lesion detection model, that is, the cloud server deploys the user information database and the trained lesion detection model, and content in the user information database can be transmitted to the detection application end through data transmission and rendered on the detection application end page.
As a preferred embodiment, the image detection module comprises:
the edge-side lesion detection model is used for locally executing the detection task on the retina image;
a sending unit, configured to transmit data to the cloud server, where the data includes data of the retina image and an intermediate result in a detection process;
and the receiving unit is used for receiving the detection result of the retina image transmitted back from the cloud.
As a preferred embodiment, the image detection module provides three main detection modes, namely an edge detection mode, a cooperative detection mode and a cloud detection mode, together with an auxiliary retrieval mode, wherein one of the three main detection modes is selected according to the requirements of the user and the retina image is input into a trained neural network model for detection;
in the edge detection mode, the retina image is processed only on the edge device and is not uploaded to the cloud server; this mode satisfies the user's privacy requirements, and the edge periodically obtains the required data and models from cloud resources to update and optimize the local model;
in the cooperative detection mode, the trained neural network model is divided into a first part and a second part according to the current network condition and the current task amount; the first part of the model processes the retina image on the edge device to obtain an intermediate result, the intermediate result is uploaded to the cloud server to complete the detection, and the detection result is returned; this mode targets minimum delay and energy consumption, meets the user's requirement on detection speed and reduces the energy consumption of the system; the split position of the lesion detection model is determined by calculating the task execution delay, thereby realizing edge-cloud cooperation;
in the cloud detection mode, the retina image is uploaded to the cloud server for processing, which compensates for the limited computing capacity of the edge device and reduces its energy consumption; user information is removed from the local data, which is then encrypted and uploaded (a minimal sketch of this step is given after this list), the lesion detection model in the cloud detects the image, and the cloud model is updated in time, so that the data are processed effectively and the privacy of the local data is protected;
in the auxiliary retrieval mode, for retina images that are difficult to detect, an expert logs in to a special account of the detection application end, manually examines the retina image, and sends the detection result to the user corresponding to the retina image.
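A minimal sketch of the anonymisation-and-encryption step referenced above, assuming JSON-serialisable records, hypothetical field names and a symmetric Fernet cipher; the disclosure does not specify a record format or an encryption algorithm.

```python
# Hypothetical field names and cipher choice; the disclosure does not specify
# a record format or an encryption algorithm.
import json
from cryptography.fernet import Fernet

SENSITIVE_FIELDS = {"patient_name", "guardian_name", "id_number", "phone"}

def anonymise(record: dict) -> dict:
    """Drop user-identifying fields, keep the image reference and clinical data."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

def encrypt_for_upload(record: dict, key: bytes) -> bytes:
    payload = json.dumps(anonymise(record)).encode("utf-8")
    return Fernet(key).encrypt(payload)

key = Fernet.generate_key()  # in practice, provisioned per edge device
token = encrypt_for_upload({"patient_name": "x", "image_path": "retina_01.png",
                            "gestational_age_weeks": 30}, key)
print(len(token), "bytes ready for upload")
```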
The system also comprises a case report output module, which forms an auxiliary detection result according to the retinopathy analysis result of the retinopathy of prematurity analysis module and forms a detection report after the doctor confirms it, modifies it and/or enters a medical order.
As a preferred embodiment, the information communication module is connected to the cloud server; the user publishes content through the information communication module, and the content is stored in the user information database on the cloud server, realizing information exchange and sharing among different users.
In a preferred embodiment, the lesion detection model is obtained by processing and analysing a data set; in this embodiment, the data set is formed from retinopathy of prematurity images of the last five years collected from several hospital databases, and the establishment of the lesion detection model comprises:
dividing the data set into a training set, a verification set and a test set;
inputting the images in the training set into a neural network model, and adjusting a first parameter of the neural network model;
inputting the verification set into a neural network model, and adjusting second parameters of the neural network model;
inputting the test set into a neural network model, and evaluating the neural network model to finally obtain the lesion detection model aiming at the retina of the premature infant.
As a preferred embodiment, the lesion detection model is converted and deployed using TensorFlow Lite, the TensorFlow framework for mobile devices. TensorFlow Lite provides the tools required to convert a TensorFlow model and to run it on edge (mobile), embedded and Internet of Things (IoT) devices, allowing the model to run on a variety of devices. TensorFlow Lite uses a special format for executing models efficiently on these devices, and a TensorFlow model must be converted into this format before it can be used by TensorFlow Lite. After the format of the lesion detection model is converted, the model is loaded onto the edge device, where it can run locally to execute the image processing task.
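As an illustration of running the converted model locally on the edge device, the following sketch uses the TensorFlow Lite interpreter; the file name, input resolution and interpretation of the output scores are assumptions.

```python
# File name, input resolution and output interpretation are assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="rop_lesion_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocessed retinal image, e.g. 1 x 224 x 224 x 3 float32 in [0, 1].
image = np.random.rand(1, 224, 224, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("predicted class:", int(np.argmax(scores)))
```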
Example two
As shown in fig. 2, a cloud-edge cooperative retinopathy of prematurity detection method includes:
constructing a model: the retina images of premature infants acquired by the image acquisition device are taken as a data set; the data in the data set are preprocessed; data amplification is performed on the preprocessed data to alleviate data imbalance; the amplified data are input into a neural network model for training to obtain a lesion detection model for the retina of premature infants, and the lesion detection model is deployed to the edge device and the cloud server. To prevent overfitting and enhance the generalization capability of the model, the data are amplified offline before training to increase their diversity; the data amplification methods include random inversion of all pixels, random up-down/left-right flipping, random Gaussian blur, random translation, random rotation, random contrast enhancement and the mixup data enhancement algorithm.
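A minimal offline augmentation sketch in TensorFlow covering several of the listed operations (flips, 90-degree rotation, contrast adjustment, mixup); the parameter ranges are assumptions, and Gaussian blur and translation are omitted for brevity.

```python
# Offline, eager-mode augmentation sketch; parameter ranges are assumptions,
# and Gaussian blur and translation are omitted for brevity.
import tensorflow as tf

def augment(image: tf.Tensor) -> tf.Tensor:
    """image: float32 tensor in [0, 1], shape (H, W, 3)."""
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    k = int(tf.random.uniform([], 0, 4, dtype=tf.int32))  # random 90-degree rotation
    image = tf.image.rot90(image, k=k)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    return tf.clip_by_value(image, 0.0, 1.0)

def mixup(x1, y1, x2, y2, alpha: float = 0.2):
    """mixup: convex combination of two samples, lambda ~ Beta(alpha, alpha)."""
    g1 = tf.random.gamma([], alpha)
    g2 = tf.random.gamma([], alpha)
    lam = g1 / (g1 + g2)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```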
lesion detection: taking detection delay, energy consumption and the user's privacy requirements into consideration, a suitable detection mode is selected according to the user's requirements, and for edge-cloud cooperative detection a suitable model split point is selected; the retina image of the premature infant to be detected is input into the trained lesion detection model for detection, and cases that are difficult to detect are handled manually by an expert at the detection application end.
As a preferred embodiment, the method further comprises:
model optimization: the cloud server receives local data with the user privacy information removed and continuously optimizes the lesion detection model based on these data; after obtaining the lesion detection model from the cloud service, the edge device performs self-adaptive, self-optimizing adjustment based on the general model.
As a preferred implementation of this embodiment, the lesion detection model is obtained by processing and analysing a data set. In this embodiment, retinopathy images of recent years are obtained from a hospital database and divided into a training set, a verification set and a test set; the images in the training set are input into the neural network model to adjust the model parameters, the verification set is input into the neural network model to adjust the hyper-parameters, and the test set is input into the neural network model to evaluate the model. To quantitatively evaluate the performance of the neural network model, four evaluation indexes, namely accuracy (Accuracy), precision (Precision), recall (Recall) and the comprehensive index F1-Measure, are used, specifically defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-Measure = 2 × Precision × Recall / (Precision + Recall)

where TP (True Positive) is the number of positive samples predicted as positive; TN (True Negative) is the number of negative samples predicted as negative; FP (False Positive) is the number of negative samples predicted as positive (false alarms); and FN (False Negative) is the number of positive samples predicted as negative (missed detections).
As a preferred embodiment, a model with good performance index is used as the retinopathy of prematurity detection model.
As a preferred implementation of this embodiment, the image acquisition device acquires retina images of premature infants from multiple visual angles, and the data set of the detection application end is obtained from this device; the detection application end installed on the edge device comprises an image detection module, a medical science popularization module and an information communication module; the cloud server deploys the user information database and the trained lesion detection model.
As a preferred implementation of this embodiment, as shown in FIG. 3, the image detection module comprises: an edge-side lesion detection model for locally executing the image detection task; a sending unit for transmitting data, including image data and intermediate results, to the cloud; and a receiving unit for receiving the detection result returned from the cloud.
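For illustration, a sketch of how the sending and receiving units could exchange data with the cloud over HTTP; the endpoint URL, payload fields and response format are hypothetical and not specified by the disclosure.

```python
# The endpoint URL, payload fields and response format are hypothetical.
import base64
import requests

CLOUD_ENDPOINT = "https://cloud.example.org/api/rop/detect"  # placeholder URL

def send_to_cloud(blob: bytes, task_id: str) -> dict:
    """Upload an image or an intermediate feature map and wait for the result."""
    payload = {"task_id": task_id,
               "data": base64.b64encode(blob).decode("ascii")}
    resp = requests.post(CLOUD_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"task_id": ..., "stage": ..., "score": ...}
```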
In the cooperative detection mode, as shown in FIG. 4, different model split points produce different delays and energy consumption, so the optimal model split point must be selected to maximize the advantage of cooperative detection.
As a preferred implementation of this embodiment, the selection of the model split point in the cooperative detection mode considers the transmission delay and the energy consumption of the entire system; when the task is executed, the processing delay is

U·p/f_c

where U denotes the amount of task data, p denotes the number of CPU cycles required to process each bit of the task, and f_c denotes the computing power of the edge device.
The data transmission rate r_LU at which the edge device sends the task to the cloud and the data transmission rate r_LD at which the cloud returns the result to the edge device take the form

B·log₂(1 + d^(-r)/σ²)

where B denotes the bandwidth between the edge device and the cloud server, d^(-r) denotes the channel coefficient between the edge device and the cloud server, d denotes the distance between the edge device and the cloud server, r denotes the fading factor of the channel, and σ² denotes the noise power of the channel.
Therefore, the delay t_U for the edge device to upload the task to the cloud is the uploaded data volume divided by r_LU, the delay t_D for the cloud to return the calculation result to the edge is the returned data volume divided by r_LD, and the processing time t_E of the task is the required CPU cycles divided by the available computing power.
The total time for task completion is

t = t_U + t_D + t_E.

The total energy consumption generated by the whole system for processing the user task is

E = E_L + E_U

where E_L is the energy consumption generated by the CPU of the edge device and E_U is the energy consumption generated when the edge device offloads the task to the cloud.
As a preferred embodiment, the optimization objective may be set to minimize the weighted sum of the task completion time and the energy consumption, giving the following optimization problem, with a suitable model split node selected according to its solution:

min ω·t + (1 - ω)·E

subject to t ≤ t_max, E ≤ E_max, 0 ≤ ω ≤ 1.
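The split-point selection can be sketched as a search over candidate layer boundaries that minimises the weighted delay/energy cost. In the Python sketch below, the per-layer cycle counts and output sizes are hypothetical profiling numbers, and the CPU energy model (k·f²·cycles) and the transmit-power term are assumptions going beyond the E = E_L + E_U decomposition stated above.

```python
def choose_split(input_bits, layers, f_edge, f_cloud, r_lu, r_ld,
                 result_bits, k_cpu, p_tx, omega):
    """layers: list of (cpu_cycles, output_bits) per layer, in execution order.
    Returns s meaning 'run layers[:s] on the edge, layers[s:] in the cloud'."""
    best_s, best_cost = 0, float("inf")
    for s in range(len(layers) + 1):
        edge_cycles = sum(c for c, _ in layers[:s])
        cloud_cycles = sum(c for c, _ in layers[s:])
        if s == len(layers):          # everything runs on the edge device
            up_bits, down_bits = 0.0, 0.0
        else:
            up_bits = layers[s - 1][1] if s > 0 else input_bits
            down_bits = result_bits
        t = (edge_cycles / f_edge + up_bits / r_lu +
             cloud_cycles / f_cloud + down_bits / r_ld)
        e = k_cpu * f_edge ** 2 * edge_cycles + p_tx * up_bits / r_lu
        cost = omega * t + (1.0 - omega) * e
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s, best_cost

# Hypothetical three-layer profile: (CPU cycles, output bits) per layer.
profile = [(2e8, 6e6), (5e8, 1.5e6), (3e8, 8e4)]
print(choose_split(input_bits=2.4e6, layers=profile, f_edge=1.5e9, f_cloud=2.4e10,
                   r_lu=8e6, r_ld=2e7, result_bits=2e3, k_cpu=1e-27,
                   p_tx=0.5, omega=0.6))
```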
In a preferred embodiment, the lesion detection model used in the edge detection mode is obtained by converting a trained TensorFlow model with the TensorFlow Lite converter. TensorFlow Lite is the TensorFlow framework for mobile devices; it provides the tools needed to convert a TensorFlow model and to run it on edge (mobile), embedded and Internet of Things (IoT) devices, allowing the model to run on a variety of devices. TensorFlow Lite uses a special format for executing models efficiently on these devices, and a TensorFlow model must be converted into this format before it can be used by TensorFlow Lite.
As shown in FIG. 5, TensorFlow Lite provides a C++ application interface on Android, iOS and Linux systems, a Java application interface on Android and Linux systems, and a Python application interface on Linux. In the model conversion process of the edge-side lesion detection model, the trained TensorFlow model is first converted into the .tflite format usable by TensorFlow Lite, and the TensorFlow Lite file is then deployed to the edge device through the application programming interface.
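A minimal sketch of this conversion step, producing a .tflite file from a trained SavedModel; the directory and file names are placeholders.

```python
# Directory and file names are placeholders.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/rop_detector")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimisation
tflite_model = converter.convert()

with open("rop_lesion_detector.tflite", "wb") as f:
    f.write(tflite_model)
```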
As a preferred embodiment, the edge device performs self-adaptive, self-optimizing adjustment of cloud resources and of the detection model to better improve their utilization and universality. To further improve the sensitivity and specificity of intelligent detection on the edge device, the local data of multiple edge devices are uploaded to the cloud, the cloud analyses and learns all characteristics of the retinopathy data, a retinopathy of prematurity detection model for the populations of all regions is constructed, and the system performance is thereby optimized and improved.
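As an illustration of the edge-side self-adaptive adjustment, the sketch below fine-tunes a general model pulled from the cloud on a small batch of local, anonymised samples; the model file name, the number of output classes and the choice to freeze all but the last layers are assumptions of this sketch.

```python
# Model file name, number of classes and the frozen-layer choice are assumptions.
import numpy as np
import tensorflow as tf

general_model = tf.keras.models.load_model("general_rop_model.keras")

# Freeze the feature extractor and adapt only the last layers to local data.
for layer in general_model.layers[:-2]:
    layer.trainable = False

general_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="categorical_crossentropy", metrics=["accuracy"])

# Placeholder local samples (anonymised retinal images and 3-class labels).
x_local = np.random.rand(32, 224, 224, 3).astype("float32")
y_local = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=32), 3)

local_ds = tf.data.Dataset.from_tensor_slices((x_local, y_local)).batch(16)
general_model.fit(local_ds, epochs=3)
general_model.save("adapted_rop_model.keras")
```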
The invention also provides a memory storing a plurality of instructions for implementing the method according to embodiment two.
As shown in fig. 6, the present invention further provides an electronic device, which includes a processor 301 and a memory 302 connected to the processor 301, where the memory 302 stores a plurality of instructions, and the instructions can be loaded and executed by the processor, so that the processor can execute the method according to the second embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A cloud-edge coordinated retinopathy of prematurity detection system, comprising: the system comprises a premature infant retina image acquisition device, a detection application end and a cloud server; wherein:
the premature infant retina image acquisition equipment is used for acquiring retina images of premature infants from multiple visual angles and sending the retina images to a detection application end; the retinal images of the premature infant from a plurality of perspectives constituting a dataset;
the detection application end is installed on an edge device and performs preprocessing operations on the premature infant retina images through the edge device, the preprocessing operations comprising geometric transformation and/or image enhancement; the detection application end comprises an image detection module, a medical science popularization module and an information communication module;
the cloud server comprises a user information database and a lesion detection model, and the content in the user information database can be sent to the detection application terminal through data transmission and rendered on the detection application terminal page.
2. The cloud-edge cooperative retinopathy of prematurity detection system of claim 1, wherein the image detection module comprises:
the edge-side lesion detection model is used for locally executing the detection task on the retina image;
the sending unit is used for transmitting data to the cloud server, and the data comprises the data of the retina image and an intermediate result in the detection process;
and the receiving unit is used for receiving the detection result of the retina image transmitted back from the cloud.
3. The cloud-edge cooperative retinopathy of prematurity detection system of claim 2, wherein the image detection module provides three main detection modes, namely an edge detection mode, a cooperative detection mode and a cloud detection mode, together with an auxiliary retrieval mode, wherein one of the three main detection modes is selected according to the requirements of the user and the retina image is input into a trained neural network model for detection;
in the edge detection mode, the retina image is processed only on the edge device and is not uploaded to the cloud server;
in the cooperative detection mode, the trained neural network model is divided into a first part and a second part according to the current network condition and the current task amount; the first part of the model processes the retina image on the edge device to obtain an intermediate result, the intermediate result is uploaded to the cloud server to complete the detection, and the detection result is returned; the split position of the lesion detection model is determined by calculating the task execution delay;
in the cloud detection mode, the retina image is uploaded to the cloud server for processing; user information is removed from the local data, which is then encrypted and uploaded to the cloud, the lesion detection model in the cloud detects the image, and the cloud model is updated in time;
in the auxiliary retrieval mode, for retina images that are difficult to detect, an expert logs in to a special account of the detection application end, manually examines the retina image, and sends the detection result to the user corresponding to the retina image.
4. The cloud-edge cooperative retinopathy of prematurity detection system of claim 1, further comprising a case report output module for forming an auxiliary detection result according to the retinopathy analysis result of the retinopathy of prematurity analysis module and forming a detection report after the doctor confirms it, modifies it and/or enters a medical order.
5. The cloud-edge cooperative retinopathy of prematurity detection system of claim 1, wherein the information communication module is connected to the cloud server; the user publishes content through the information communication module, and the content is stored in the user information database on the cloud server, realizing information exchange and sharing among different users.
6. The cloud-edge cooperative retinopathy of prematurity detection system of claim 1, wherein the lesion detection model is obtained by processing and analyzing a data set, and the establishment of the lesion detection model comprises:
dividing the data set into a training set, a verification set and a test set;
inputting the images in the training set into a neural network model, and adjusting a first parameter of the neural network model;
inputting the verification set into a neural network model, and adjusting second parameters of the neural network model;
inputting the test set into a neural network model, and evaluating the neural network model to finally obtain the lesion detection model aiming at the retina of the premature infant.
The performance of the neural network model is evaluated with four indexes, namely accuracy (Accuracy), precision (Precision), recall (Recall) and the comprehensive index F1-Measure, specifically defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1-Measure = 2 × Precision × Recall / (Precision + Recall)

where TP (True Positive) is the number of positive samples predicted as positive; TN (True Negative) is the number of negative samples predicted as negative; FP (False Positive) is the number of negative samples predicted as positive (false alarms); and FN (False Negative) is the number of positive samples predicted as negative (missed detections);
a neural network model with good performance evaluation indexes is used as the retinopathy of prematurity detection model.
7. A cloud-edge cooperative retinopathy of prematurity detection method implemented on the basis of the system of any one of claims 1 to 6, comprising:
constructing a model: taking the retina images of premature infants acquired by the image acquisition device as a data set; preprocessing the data in the data set; performing data amplification on the preprocessed data to alleviate data imbalance; inputting the amplified data into a neural network model for training to obtain a lesion detection model for the retina of premature infants, and deploying the lesion detection model to the edge device and the cloud server;
lesion detection: taking detection delay, energy consumption and the user's privacy requirements into consideration, selecting a suitable detection mode according to the user's requirements, selecting a suitable model split point for edge-cloud cooperative detection, and inputting the retina image of the premature infant to be detected into the trained lesion detection model for detection, cases that are difficult to detect being handled manually by an expert at the detection application end; the method comprises the following steps:
selecting the optimal model split point so as to maximize the advantage of cooperative detection, wherein the selection of the model split point in this detection mode is based on the transmission delay and the energy consumption of the whole system; when the task is executed, the processing delay is

U·p/f_c

where U denotes the amount of task data, p denotes the number of CPU cycles required to process each bit of the task, and f_c denotes the computing power of the edge device;
the data transmission rate r_LU at which the edge device sends the task to the cloud and the data transmission rate r_LD at which the cloud returns the result to the edge device take the form

B·log₂(1 + d^(-r)/σ²)

where B denotes the bandwidth between the edge device and the cloud server, d^(-r) denotes the channel coefficient between the edge device and the cloud server, d denotes the distance between the edge device and the cloud server, r denotes the fading factor of the channel, and σ² denotes the noise power of the channel;
the delay t_U for the edge device to upload the task to the cloud is the uploaded data volume divided by r_LU, the delay t_D for the cloud to return the calculation result to the edge is the returned data volume divided by r_LD, and the processing time t_E of the task is the required CPU cycles divided by the available computing power;
the total time for task completion is

t = t_U + t_D + t_E;

the total energy consumption generated by the whole system for processing the user task is

E = E_L + E_U

where E_L is the energy consumption generated by the CPU of the edge device and E_U is the energy consumption generated when the edge device offloads the task to the cloud;
the optimization objective is set to minimize the weighted sum of the task completion time and the energy consumption, giving the following optimization problem, and a suitable model split node is selected according to its solution:

min ω·t + (1 - ω)·E

subject to t ≤ t_max, E ≤ E_max, 0 ≤ ω ≤ 1.
8. The detection method according to claim 7, further comprising:
model optimization: the cloud server receives local data, with the user privacy information removed, from multiple edge devices and continuously optimizes the lesion detection model based on these data; the cloud analyses and learns all characteristics of the retinopathy data and constructs a lesion detection model for the populations of all regions, after which self-adaptive, self-optimizing adjustment is performed based on this general model.
9. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions, the processor being configured to read the instructions and perform the method of any of claims 7-8.
10. A computer-readable storage medium storing a plurality of instructions readable by a processor and performing the method of any one of claims 7-8.
CN202210460497.XA 2022-04-28 2022-04-28 Cloud-edge cooperative retinopathy of prematurity detection system and detection method Active CN114841952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210460497.XA CN114841952B (en) 2022-04-28 2022-04-28 Cloud-edge cooperative retinopathy of prematurity detection system and detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210460497.XA CN114841952B (en) 2022-04-28 2022-04-28 Cloud-edge cooperative retinopathy of prematurity detection system and detection method

Publications (2)

Publication Number Publication Date
CN114841952A true CN114841952A (en) 2022-08-02
CN114841952B CN114841952B (en) 2024-05-03

Family

ID=82567391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210460497.XA Active CN114841952B (en) 2022-04-28 2022-04-28 Cloud-edge cooperative retinopathy of prematurity detection system and detection method

Country Status (1)

Country Link
CN (1) CN114841952B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110010219A (en) * 2019-03-13 2019-07-12 杭州电子科技大学 Optical coherence tomography image retinopathy intelligent checking system and detection method
CN111127425A (en) * 2019-12-23 2020-05-08 北京至真互联网技术有限公司 Target detection positioning method and device based on retina fundus image
CN111585916A (en) * 2019-12-26 2020-08-25 国网辽宁省电力有限公司电力科学研究院 LTE electric power wireless private network task unloading and resource allocation method based on cloud edge cooperation
CN112465789A (en) * 2020-12-02 2021-03-09 智程工场(佛山)科技有限公司 Detection method and system for retinopathy plus disease of premature infant
CN112801959A (en) * 2021-01-18 2021-05-14 华南理工大学 Auxiliary assembly system based on visual feature recognition
CN113067873A (en) * 2021-03-19 2021-07-02 北京邮电大学 Edge cloud collaborative optimization method based on deep reinforcement learning
CN113282348A (en) * 2021-05-26 2021-08-20 浙江理工大学 Edge calculation task unloading system and method based on block chain
CN113419867A (en) * 2021-08-23 2021-09-21 浙大城市学院 Energy-saving service supply method in edge-oriented cloud collaborative computing environment
CN114022723A (en) * 2021-09-18 2022-02-08 华南理工大学 Data set generation method and device for neural network training
CN114093505A (en) * 2021-11-17 2022-02-25 山东省计算中心(国家超级计算济南中心) Cloud-edge-end-architecture-based pathological detection system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
THIPPA REDDY GADEKALLU et al.: "Deep neural networks to predict diabetic retinopathy", Springer, 24 April 2020, pages 5407 ff.
ZHENTAO GAO et al.: "Diagnosis of Diabetic Retinopathy Using Deep Neural Networks", IEEE, 11 January 2019, pages 3360 ff.

Also Published As

Publication number Publication date
CN114841952B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
Yu et al. Deep-learning-empowered breast cancer auxiliary diagnosis for 5GB remote E-health
EP3933551A1 (en) Motor imagery electroencephalogram signal processing method, device, and storage medium
CN106709254B (en) A kind of medical diagnosis robot system
KR20200005986A (en) System and method for diagnosing cognitive impairment using face recognization
WO2021180244A1 (en) Disease risk prediction system, method and apparatus, device and medium
WO2020148992A1 (en) Model generation device, model generation method, model generation program, model generation system, inspection system, and monitoring system
CN116631564B (en) Emergency electronic medical record management system and management method
CN110085288A (en) A kind of liver and gall surgical department Internet-based treatment information sharing system and sharing method
CN116664930A (en) Personalized federal learning image classification method and system based on self-supervision contrast learning
CN115168669A (en) Infectious disease screening method and device, terminal equipment and medium
Chen et al. A new optimal diagnosis system for coronavirus (COVID-19) diagnosis based on Archimedes optimization algorithm on chest X-ray images
JP6468576B1 (en) Image diagnostic system for fertilized egg, image diagnostic program for fertilized egg, and method for creating classifier for image diagnosis of fertilized egg.
CN114841952A (en) Cloud-edge cooperative detection system and detection method for retinopathy of prematurity
WO2024028196A1 (en) Methods for training models in a federated system
Gronowski et al. Rényi fair information bottleneck for image classification
Sandhya et al. An optimized elman neural network for contactless palm-vein recognition framework
CN115171896A (en) System and method for predicting long-term death risk of critically ill patient
Bhattarai et al. An integrated secure efficient computing architecture for embedded and remote ECG diagnosis
JP2021068336A (en) Image diagnostic system for embryo, image diagnosis program for embryo, and method of creating diagnostic imaging classifier for embryo
Karling et al. Prediction of Breast Cancer Using Machine Learning Techniques for Health Data
Vavekanand SUBMIP: Smart Human Body Health Prediction Application System Based on Medical Image Processing
CN117617921B (en) Intelligent blood pressure monitoring system and method based on Internet of things
Li et al. Privacy Preserved Federated Learning for Skin Cancer Diagnosis
Shah et al. Prognosis of Supervised Machine Learning Algorithms in Healthcare Sector
US20230409422A1 (en) Systems and Methods for Anomaly Detection in Multi-Modal Data Streams

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant