CN116738352B - Method and device for classifying abnormal rod cells of retinal vascular occlusion disease - Google Patents

Method and device for classifying abnormal rod cells of retinal vascular occlusion disease

Info

Publication number
CN116738352B
CN116738352B (application CN202311019231.2A)
Authority
CN
China
Prior art keywords
data
information
fusion
rod cells
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311019231.2A
Other languages
Chinese (zh)
Other versions
CN116738352A (en)
Inventor
肖璇
陈婷
李莹
高翔
王发席
谢浩
李硕
刘航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Renmin Hospital of Wuhan University
Original Assignee
Renmin Hospital of Wuhan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renmin Hospital of Wuhan University
Priority to CN202311019231.2A
Publication of CN116738352A
Application granted
Publication of CN116738352B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 18/2433: Pattern recognition; classification techniques relating to the number of classes; single-class perspective, e.g. one-against-all classification; novelty detection; outlier detection
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/2323: Pattern recognition; non-hierarchical clustering techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts
    • G06F 18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/253: Pattern recognition; fusion techniques of extracted features
    • G16H 10/60: Healthcare informatics; ICT specially adapted for patient-specific data, e.g. for electronic patient records
    • G16H 50/70: Healthcare informatics; ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a method and a device for classifying abnormal rod cells of retinal vascular occlusion disease, relating to the technical field of artificial intelligence and comprising the following steps: acquiring first information and second information, wherein the first information comprises ophthalmic clinical examination data of a patient to be detected and the second information comprises ophthalmic clinical examination data of patients with historical retinal vascular occlusion disease; performing feature fusion processing according to the optical perception signal record and the dynamic response data to obtain a fusion data set; constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model; and classifying the first information according to the rod cell abnormality classification model to obtain a classification result. By combining the spectral features and the time variation features of the rod cells, the invention takes into account how the rod cells change as the disease develops, and by classifying the fusion data set with a machine learning model it can judge the abnormality type and degree of the rod cells more accurately, improving classification accuracy.

Description

Method and device for classifying abnormal rod cells of retinal vascular occlusion disease
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and a device for classifying abnormal rod cells of retinal vascular occlusion disease.
Background
Currently, retinal vascular occlusion disease is a common ophthalmic disease that can lead to abnormalities in rod cells and impaired vision. Under the current state of the art, diagnosis and classification of retinal vascular occlusion disease depend largely on the analysis and interpretation of ophthalmic clinical examination data. However, existing methods have several problems in classifying rod cell abnormalities. First, they tend to rely on subjective, qualitative judgments of the type and extent of rod cell abnormalities, which introduces subjectivity and observer bias. Second, they generally rely on a single item of ophthalmic clinical examination data and do not fully account for the characteristics of rod cells and how they change as the disease progresses. Thus, the prior art does not provide an accurate and comprehensive method for classifying rod cell abnormalities.
Based on the shortcomings of the prior art, there is a need for a method for classifying abnormal rod cells of retinal vascular occlusion diseases.
Disclosure of Invention
The invention aims to provide a method and a device for classifying abnormal rod cells of retinal vascular occlusion disease, so as to solve the above problems. To achieve this purpose, the technical scheme adopted by the invention is as follows:
In a first aspect, the present application provides a method for classifying a rod cell abnormality of a retinal vascular occlusion disease, comprising:
acquiring first information and second information, wherein the first information comprises ophthalmic clinical examination data of a patient to be detected, and the second information comprises ophthalmic clinical examination data of a patient with historical retinal vascular occlusion disease;
performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set, wherein the fusion data set comprises spectral features of the rod cells and time variation features in the disease development process;
constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model;
and classifying the first information according to the rod cell abnormality classification model to obtain a classification result, wherein the classification result comprises the abnormality type of the rod cells and the corresponding degree of abnormality.
In a second aspect, the present application also provides a rod cell abnormality classification device for retinal vascular occlusion diseases, comprising:
an acquisition module for acquiring first information and second information, the first information comprising ophthalmic clinical examination data of a patient to be detected and the second information comprising ophthalmic clinical examination data of patients with historical retinal vascular occlusion disease;
a fusion module for performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set, wherein the fusion data set comprises spectral features of the rod cells and time variation features in the disease development process;
a construction module for constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model;
a classification module for classifying the first information according to the rod cell abnormality classification model to obtain a classification result, wherein the classification result comprises the abnormality type of the rod cells and the corresponding degree of abnormality.
The beneficial effects of the invention are as follows:
By combining the spectral features and the time variation features of the rod cells, the invention fully considers how the rod cells change during the development of the disease, and by classifying the fusion data set with a machine learning model it can judge the abnormality type and degree of the rod cells more accurately, improving classification accuracy. The ophthalmic clinical examination data and the historical patient data are used together, and the feature fusion processing yields discriminative features that jointly consider the brightness distribution, morphological and response amplitude features of the rod cells, so that the classification result is more comprehensive and better describes the abnormal state of the rod cells.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for classifying abnormalities of rod cells of retinal vascular occlusion disease according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a device for classifying abnormal rod cells of retinal vascular occlusion disease according to an embodiment of the present invention;
FIG. 3 is a graph showing the change in fluorescence intensity of an abnormality of a rod cell according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1:
the embodiment provides a method for classifying abnormal rod cells of retinal vascular occlusion diseases.
Referring to fig. 1, the method is shown to include step S100, step S200, step S300, and step S400.
Step S100, acquiring first information and second information, wherein the first information comprises ophthalmic clinical examination data of a patient to be detected, and the second information comprises ophthalmic clinical examination data of a patient with historical retinal vascular occlusion disease.
It can be understood that ophthalmic clinical examination data of the patient to be detected, including fundus images, visual field examination results, intraocular pressure and other relevant indexes, need to be collected in this step. These data describe the actual condition of the patient's eyes and are used for the subsequent rod cell abnormality classification. Meanwhile, ophthalmic clinical examination data of patients with historical retinal vascular occlusion disease are collected as the second information; these data are used to construct a benchmark based on historical data, against which the data of the patient to be detected can be compared and analysed, so that the abnormal situation of the patient's rod cells can be better understood. Acquiring these two parts of information provides the necessary data support and reference for the subsequent rod cell abnormality classification method, so that the ophthalmic clinical examination data of the patient to be detected can be classified and evaluated accurately.
Step S200, performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set, wherein the fusion data set comprises spectral features of the rod cells and time variation features in the disease development process.
This step obtains the spectral features of the rod cells from the optical perception signal record in the second information by quantifying their light reflection characteristics; the spectral features are described by analysing the brightness distribution features and the morphological features of the rod cells. Extracting and analysing these features captures the reflection characteristics of the rod cells in different frequency ranges and therefore reflects their abnormal conditions. Extracting the time variation features of the rod cells from the dynamic response data in the second information involves performing a time-series analysis on the dynamic response data and obtaining the response amplitude features and time variation patterns of the rod cells through feature extraction and pattern recognition. As shown in fig. 3, fig. 3 is a fluorescence intensity change image of a rod cell abnormality. The image reflects how the fluorescence intensity of abnormal rod cell areas changes over time, revealing a specific change pattern during the development of retinal vascular occlusion disease; this is important for understanding the correlation between rod cell abnormalities and vascular occlusion pathology and their role in disease development.
Furthermore, analysing the response amplitude and the temporal change rule of the rod cells reveals their dynamic changes during disease development. The resulting fusion data set jointly considers the spectral features of the rod cells and the time variation features in the disease development process, describes rod cell abnormalities more comprehensively and accurately, helps to judge the type and degree of abnormality of the rod cells of the patient to be detected, and provides a reference basis for further disease diagnosis and treatment.
Further, step S200 includes step S210, step S220, and step S230.
Step S210, fundus image analysis is carried out according to the optical perception signal record in the second information, and the first characteristic data is obtained by quantifying the optical reflection characteristics of the rod cells, wherein the first characteristic data comprises brightness distribution characteristics and morphological characteristics of the rod cells.
The fundus image is acquired by a fundus imaging apparatus and provides image information of the retina and the rod cells. The analysis of the fundus image mainly focuses on the light reflection characteristics of the rod cells; the brightness change of the rod cells can be measured by quantifying their brightness distribution features. Preferably, this can be achieved by calculating statistics such as the average, maximum and minimum of the pixel values. The brightness distribution features reveal the position of the rod cells in the image and their brightness variation range, providing spatial information about them. Further, morphological features of the rod cells, such as shape, size and contour, also need to be analysed. This can be achieved by image processing and segmentation algorithms, describing the morphological features of the rod cells by detecting edges, extracting feature points or applying morphological operations.
Morphological features can provide information about the structure of the rod cell, such as its shape change, eccentricity, etc. The first characteristic data of the rod cells can reflect brightness change and morphological characteristics of the rod cells in the fundus image, and a basis is provided for subsequent abnormal classification of the rod cells.
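As an illustration of how the brightness statistics and shape descriptors described above might be computed, the following Python sketch uses NumPy and scikit-image (libraries the patent does not prescribe); the mask-based region handling and the particular descriptors are assumptions made for the example.

```python
import numpy as np
from skimage import measure

def extract_first_feature_data(fundus_gray, rod_mask):
    """Sketch of step S210: quantify light-reflection features of rod-cell regions.

    fundus_gray : 2-D uint8 grayscale fundus image
    rod_mask    : binary mask of candidate rod-cell regions (assumed given)
    """
    pixels = fundus_gray[rod_mask > 0]
    # Brightness-distribution features: simple statistics of the pixel values.
    brightness = {
        "mean": float(pixels.mean()),
        "max": float(pixels.max()),
        "min": float(pixels.min()),
        "std": float(pixels.std()),
    }
    # Morphological features: shape descriptors of each connected region.
    labels = measure.label(rod_mask > 0)
    morphology = [
        {"area": p.area, "perimeter": p.perimeter, "eccentricity": p.eccentricity}
        for p in measure.regionprops(labels)
    ]
    return brightness, morphology
```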
Step S220, performing time-series analysis according to the dynamic response data in the second information, and obtaining second feature data through feature extraction and pattern recognition processing, wherein the second feature data comprises response amplitude features and time variation patterns of the rod cells.
Dynamic response data in the step are obtained by stimulating or triggering the rod cells, and the response of the rod cells is recorded. Such data may include electrical activity, light response intensity, or other relevant response indicators of rod cells. Feature extraction is a method for extracting useful information from original data, and features of the data are revealed through methods such as statistic calculation, spectrum analysis, wavelet transformation and the like.
Preferably, suitable feature extraction methods, such as the mean, standard deviation, peak value and waveform parameters, may be employed in this step to obtain the rod cell response amplitude features. The time variation pattern refers to the trend and pattern with which the rod cell response changes over time, and is obtained by analysing the waveform, periodicity, trend and other characteristics of the time-series data. Pattern recognition is a method for classifying and recognising data; techniques such as machine learning, pattern matching and clustering can be used to analyse and recognise the time variation patterns of the rod cells. The response amplitude features and time variation patterns obtained in this step reflect how strongly the rod cells respond to the stimulus and how that response changes over time.
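A minimal sketch of this kind of response-signal feature extraction is shown below; the sampling frequency, the use of an FFT for a dominant-frequency descriptor and a linear fit for the trend are illustrative assumptions, not requirements of the patent.

```python
import numpy as np

def extract_second_feature_data(response, fs=100.0):
    """Sketch of step S220 for one rod-cell response signal.

    response : 1-D array of the recorded dynamic response
    fs       : assumed sampling frequency in Hz (not specified in the patent)
    """
    amplitude_features = {
        "mean": float(np.mean(response)),
        "std": float(np.std(response)),
        "peak": float(np.max(np.abs(response))),
        "peak_to_peak": float(np.ptp(response)),
    }
    # Coarse time-variation descriptors: dominant frequency (periodicity)
    # and linear trend slope, standing in for the pattern analysis in the text.
    spectrum = np.abs(np.fft.rfft(response - np.mean(response)))
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    time_pattern = {
        "dominant_freq": float(freqs[np.argmax(spectrum)]),
        "trend_slope": float(np.polyfit(np.arange(len(response)), response, 1)[0]),
    }
    return amplitude_features, time_pattern
```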
Step S230, performing fusion analysis on the first feature data, the second feature data and a preset ophthalmic data processing mathematical model to obtain a fusion data set.
The mathematical model of the ophthalmic data process in this step is used to fuse the first feature data with the second feature data and provide a comprehensive representation of the features. The purpose of this model is to better characterize rod cells, providing more accurate, comprehensive and useful information for subsequent ophthalmic analysis and applications.
The step S230 includes a step S231, a step S232, a step S233, a step S234, and a step S235.
Step S231, performing super-pixel clustering according to the brightness distribution features and the morphological features in the first feature data, and obtaining a first clustering result by locating and grouping the rod cell regions.
Super-pixel clustering is an image segmentation technique that divides an image into contiguous regions with similar features, so that objects of interest can be accurately located and grouped. Analysing retinal vascular occlusion disease typically involves a large amount of fundus image data containing rich information, and rod cells are one of the critical structures within it. Conventional pixel-level processing methods have difficulty locating and analysing rod cells accurately because they fail to take the overall characteristics and contextual information of the rod cells into account.
Further, in this embodiment super-pixel clustering is adopted to identify, locate and group the rod cell regions effectively. The super-pixel clustering according to the brightness distribution features and the morphological features in the first feature data comprises the following steps (a code sketch follows these steps):
step S2311, extracting luminance distribution characteristics and morphological characteristics of the rod cells from the first feature data.
Step S2312, the fundus image is divided into a set of super-pixel regions having similar characteristics using a super-pixel algorithm.
Step S2313, performing cluster analysis on the generated super-pixel areas, and classifying similar super-pixels into the same category.
Step S2314, determining the super-pixel areas containing rod cells according to the clustering result, and locating and grouping these areas.
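The sketch below illustrates steps S2311 to S2314 under the assumption that SLIC superpixels and k-means grouping are used; the patent does not name a particular superpixel or clustering algorithm, so these choices, and the per-pixel feature array, are placeholders.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_cluster(fundus_rgb, per_pixel_features, n_segments=400, n_groups=5):
    """Sketch of steps S2311-S2314 (SLIC + k-means are illustrative choices).

    fundus_rgb         : H x W x 3 fundus image
    per_pixel_features : H x W x d array of brightness/morphology descriptors
    """
    # S2312: split the fundus image into superpixels with similar appearance.
    segments = slic(fundus_rgb, n_segments=n_segments, compactness=10, start_label=0)
    # S2311/S2312: aggregate the per-pixel features over each superpixel.
    n_sp = segments.max() + 1
    sp_features = np.array(
        [per_pixel_features[segments == s].mean(axis=0) for s in range(n_sp)]
    )
    # S2313: group similar superpixels into the same category.
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(sp_features)
    # S2314: for each cluster, return the superpixel ids it contains, which
    # localizes and groups candidate rod-cell regions.
    groups = {c: np.where(labels == c)[0] for c in range(n_groups)}
    return segments, groups
```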
Step S232, performing time-series clustering analysis according to the response amplitude features and the time variation patterns in the second feature data, and obtaining a second clustering result by calculating the similarity between them and screening out similar time-series sub-groups.
In this step, first, the response amplitude characteristic and the time variation pattern of the rod cell are extracted from the second characteristic data, and the response intensity and the time variation trend of the rod cell at different time points are described respectively. And then performing time sequence cluster analysis on the extracted features by using a clustering algorithm. The clustering algorithm can classify rod cells with similar response characteristics and time varying patterns into the same class.
Preferably, a similarity calculation method is used in the clustering process to evaluate the degree of similarity between two time series, such as euclidean distance or correlation coefficient. Time-series sub-populations with similar response patterns are selected based on the results of the similarity calculation, and these sub-populations represent the aggregation of rod cells in similar response patterns. Through this step, the particular response pattern of the rod cells can be identified and understood and provide a basis for further analysis and diagnosis.
Meanwhile, the time-series cluster analysis can find rod cell response patterns hidden in a large amount of data, providing more accurate information, which is important for in-depth research on retinal vascular occlusion disease. Specifically, the time-series cluster analysis includes the following steps:
First, for the response amplitude features, the Pearson correlation coefficient is used to measure the strength of the linear relationship between two time series. The Pearson correlation coefficient is calculated as:

$$\rho(x,y)=\frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\;\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}$$

where $i$ is an index into the time series; $x$ and $y$ are the response amplitude feature vectors of the two time series; $n$ is the dimension of the feature vectors; and $\bar{x}$ and $\bar{y}$ are the means of $x$ and $y$, respectively.
Then, for the time variation pattern, a dynamic time warping algorithm is used to obtain the time offset and the rate of change between sequences. The dynamic time warping algorithm considers the temporal flexibility between sequences by calculating the optimal time alignment path between two sequences.
The Pearson correlation coefficient and the dynamic time warping algorithm are combined so that the response amplitude features and the time variation pattern are considered together, giving the following similarity calculation formula:

$$D(x,y)=\alpha\,\rho(x,y)+\beta\,\mathrm{DTW}(x,y)$$

where $D(x,y)$ is the distance or similarity measure between the two time series $x$ and $y$; $\rho(x,y)$ is the Pearson correlation coefficient of the response amplitude features; $\mathrm{DTW}(x,y)$ is the dynamic time warping distance of the time variation patterns; and $\alpha$ and $\beta$ are weight coefficients used to balance the importance of the two features in the similarity calculation.
By using this improved similarity calculation formula, the degree of similarity of the rod cell response amplitude characteristics and the time-varying pattern can be more comprehensively evaluated. The comprehensive consideration method can more accurately capture the specific mode and the change trend in the retinal vascular occlusion disease, provide more accurate clustering results and provide more reliable basis for diagnosis and treatment of the disease.
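A small Python sketch of this similarity computation is given below; the plain dynamic-programming DTW and the example weights alpha and beta are illustrative, and the weighted sum mirrors the formula D(x, y) above.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation of two equal-length response-amplitude vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / (np.sqrt((xc ** 2).sum()) * np.sqrt((yc ** 2).sum())))

def dtw_distance(x, y):
    """Plain O(n*m) dynamic time warping distance between two sequences."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def combined_similarity(x, y, alpha=0.5, beta=0.5):
    """Weighted combination of the two measures; the weights are examples only."""
    return alpha * pearson(x, y) + beta * dtw_distance(x, y)
```

In practice the correlation term is bounded while the DTW distance is not, so the weights (or a normalization of the DTW term) would need to be chosen to keep the two contributions comparable.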
And step S233, performing graph construction processing according to the first clustering result and the second clustering result to obtain a node graph model, wherein the node graph model comprises nodes and edges, the nodes represent clustered areas or time sequences, and the edges represent the relevance between the clustered areas or time sequences.
It can be appreciated that constructing a node graph model in this step forms a correlation network between the regions and the time series, so that the correlations between different nodes can be represented and analysed. This helps to further understand the development of retinal vascular occlusion disease and to identify relevant feature patterns. Step S233 includes step S2331, step S2332 and step S2333.
Step S2331, based on the first clustering result, the area obtained by each clustering is used as a node to obtain a node set, and each node in the node set represents a specific area with abnormal rod cells.
Thus, according to the first clustering result, the area obtained by each clustering is added into a node set as an independent node, and the nodes correspond to specific areas of abnormal rod cells identified in the clustering process. By taking the areas obtained by each cluster as nodes, the independence and the specificity among the areas can be reserved. Each node in the set of nodes represents a particular region, and its characteristics and properties are related to rod cell abnormalities in that region.
Step S2332, based on the second clustering result, calculating the similarity between different time series, and selecting and connecting time-series pairs whose similarity exceeds a preset similarity to obtain an edge set, wherein each edge in the edge set represents the correlation between different time series.
First, for each time series in the second clustering result, the similarity between it and the other time series is calculated. Preferably, a similarity measure such as the Euclidean distance or the correlation coefficient is used, and the results are represented by a similarity matrix. A preset similarity threshold is then set for screening time-series pairs whose similarity is greater than the threshold; two time series are connected only when their similarity exceeds this threshold. The edge set is obtained by screening out the time-series pairs whose similarity is greater than the preset similarity threshold. Each edge connects two time series with high similarity, indicating that a certain correlation exists between them.
Step S2333, combining the node set and the edge set to obtain a node graph model, wherein the node graph model is represented by a graph-based data structure.
A node graph model is a graph data structure consisting of nodes and edges. Each node represents a particular region of rod cell abnormality, while the edges represent the associations between the different regions. Through the connections of its nodes and edges, the node graph model reveals the topological structure of, and interrelationships between, the abnormal rod cell areas. Constructing the node graph model helps analyse the abnormal condition of the rod cells in retinal vascular occlusion disease more comprehensively and accurately, provides an intuitive way to visualise the associations between abnormal rod cell areas, and lays a basis for subsequent graph analysis, pattern discovery and data mining. The role of the rod cells and their pattern of change during disease progression can be better understood through the node graph model.
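One possible realization of steps S2331 to S2333 with the networkx library (an assumption, not named in the patent) is sketched below; the node naming scheme and the similarity threshold are illustrative.

```python
import networkx as nx

def build_node_graph(region_clusters, time_series, similarity_fn, threshold=0.8):
    """Sketch of steps S2331-S2333: region nodes plus time-series nodes,
    with edges between time-series pairs whose similarity passes a threshold."""
    g = nx.Graph()
    # S2331: one node per clustered rod-cell region.
    for region_id, region in region_clusters.items():
        g.add_node(("region", region_id), kind="region", data=region)
    # Time series are also represented as nodes so edges can reference them.
    for ts_id, _ in enumerate(time_series):
        g.add_node(("ts", ts_id), kind="time_series")
    # S2332: connect time-series pairs whose similarity exceeds the threshold.
    for i in range(len(time_series)):
        for j in range(i + 1, len(time_series)):
            s = similarity_fn(time_series[i], time_series[j])
            if s > threshold:
                g.add_edge(("ts", i), ("ts", j), weight=float(s))
    # S2333: the populated graph object is the node graph model.
    return g
```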
Step S234, according to a preset random walk graph embedding mathematical model, embedding the nodes and edges of the node graph model, and obtaining a low-dimensional vector representation of the nodes and edges by minimizing the difference between the similarity matrix in the original space and the Euclidean distance matrix in the low-dimensional space.
The random walk-embedding mathematical model captures structural information in the node graph model by taking into account similarities and correlations between nodes. This model may embed nodes and edges from the original high-dimensional space into the low-dimensional space such that the relative positions of the nodes and edges in the low-dimensional space can reflect their relationships in the original space.
Specifically, similarity between nodes is measured using a similarity matrix, where elements of the similarity matrix represent the similarity between nodes. And simultaneously, calculating Euclidean distance matrix in the low-dimensional space, and representing the distance relation between the nodes and the edges in the low-dimensional space. The low-dimensional vector representation of the nodes and edges is then determined by minimizing the difference between the similarity matrix in the original space and the euclidean distance matrix in the low-dimensional space.
Preferably, this process can be implemented by an optimization algorithm, such as gradient descent. The complex structure information in the node diagram model is converted into coordinate points in a low-dimensional space through low-dimensional vector representation of the nodes and the edges, so that subsequent data analysis and visualization are convenient. The low-dimensional vector representation preserves the relative relationships between nodes and edges to better understand structural features in node graph models and further performs tasks such as data mining, graph analysis, pattern discovery, and the like.
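The following sketch stands in for the random walk graph embedding: it directly minimises the gap between the low-dimensional Euclidean distances and a dissimilarity target derived from the similarity matrix, using plain gradient descent; the learning rate, dimensionality and the "1 minus similarity" target are assumptions made for the example.

```python
import numpy as np

def embed_graph(similarity, dim=16, lr=0.01, epochs=500, seed=0):
    """Sketch of step S234: learn node vectors whose pairwise Euclidean
    distances track the dissimilarities implied by the similarity matrix."""
    rng = np.random.default_rng(seed)
    n = similarity.shape[0]
    target = 1.0 - similarity                     # assumed dissimilarity target
    z = rng.normal(scale=0.1, size=(n, dim))      # random initial embedding
    for _ in range(epochs):
        diff = z[:, None, :] - z[None, :, :]          # pairwise differences
        dist = np.linalg.norm(diff, axis=-1) + 1e-9   # pairwise distances
        err = dist - target                            # stress-style residual
        # (Scaled) gradient of 0.5 * sum(err^2) with respect to z.
        grad = ((err / dist)[:, :, None] * diff).sum(axis=1)
        z -= lr * grad
    return z    # one low-dimensional vector per node
```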
Step S235, obtaining a fusion data set from the nodes and edges of the low-dimensional vector representation through weighted fusion based on a graph attention mechanism, wherein the fusion data set comprises discriminative features that jointly consider the brightness distribution features, morphological features and response amplitude features of the rod cells.
In this step, a graph attention mechanism is used to perform weighted fusion of the brightness distribution features and morphological features of the nodes. The attention weight between each node and its adjacent nodes is calculated, and the node features are then weighted and summed to obtain a comprehensive node feature representation; similarly, the edge features are weighted and summed by calculating the attention weight between each edge and its adjacent edges, yielding a comprehensive edge feature representation. Each sample in the fusion data set contains important multi-aspect information about the rod cells, providing more comprehensive and accurate features for the subsequent analysis and discrimination tasks.
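A minimal, untrained attention-style aggregation pass is sketched below to make the weighting idea concrete; a real graph attention layer would learn its scoring parameters, whereas this sketch simply uses dot-product scores and a softmax over each node's neighbours.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_fuse(node_vecs, adjacency):
    """Sketch of step S235: one attention-weighted aggregation over neighbours.

    node_vecs : (n, d) low-dimensional node vectors from step S234
    adjacency : (n, n) adjacency matrix of the node graph model
    """
    n = node_vecs.shape[0]
    fused = np.zeros_like(node_vecs)
    for i in range(n):
        neigh = np.where(adjacency[i] > 0)[0]
        if neigh.size == 0:
            fused[i] = node_vecs[i]               # isolated node: keep as-is
            continue
        scores = node_vecs[neigh] @ node_vecs[i]  # attention logits (dot products)
        weights = softmax(scores)                 # attention weights over neighbours
        fused[i] = (weights[:, None] * node_vecs[neigh]).sum(axis=0)
    return fused    # rows form the samples of the fusion data set
```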
Step S300, constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model.
In this step, the classification model is constructed by combining the processed data sets and a suitable machine learning algorithm. The constructed rod cell abnormality classification model can be applied to the actual retinal vascular occlusion disease analysis and discrimination task. The step S300 includes a step S310, a step S320, and a step S330.
Step S310, performing cleaning, normalization and enhancement processing on the fusion data set to obtain a preprocessed data set.
The cleaning step comprises removing noise, abnormal values and missing values in the data to ensure the quality and accuracy of the data. The data is standardized to unify the numerical ranges among different features, and deviation caused by scale difference is eliminated. The standardization can lead the influence of different characteristics on the model to be more balanced, and improves the training effect of the model.
Data enhancement increases the diversity and richness of the data by applying operations such as rotation, translation and scaling. It helps the model learn and generalise better and improves its adaptability to different samples. These processing steps yield a cleaned, standardized and enhanced preprocessed data set, providing high-quality input data for the subsequent construction of the classification model.
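A compact sketch of step S310's cleaning, standardization and enhancement is shown below; the 3-sigma clipping and Gaussian jitter augmentation are example choices, not steps mandated by the patent.

```python
import numpy as np

def preprocess(samples):
    """Sketch of step S310: clean, standardize and augment the fusion data set."""
    x = np.asarray(samples, dtype=float)
    # Cleaning: remove samples containing missing values.
    x = x[~np.isnan(x).any(axis=1)]
    # Cleaning: clip gross outliers to +/- 3 standard deviations per feature.
    mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-9
    x = np.clip(x, mu - 3 * sigma, mu + 3 * sigma)
    # Standardization: zero mean, unit variance per feature.
    x = (x - mu) / sigma
    # Enhancement: jitter each sample with small Gaussian noise (illustrative).
    augmented = x + np.random.normal(scale=0.01, size=x.shape)
    return np.vstack([x, augmented])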
Step S320, carrying out shape analysis processing on the preprocessed data set according to a preset image processing mathematical model and a preset detection rule, extracting features corresponding to the development process of the retinal vascular occlusion disease, and obtaining a representative feature set through feature selection and feature dimension reduction processing on the extracted features.
In this step, features corresponding to the development of retinal vascular occlusion disease are extracted by image processing techniques. These features include the rod cell density in ischemic areas of the blood vessels, the association of vascular branch abnormalities with rod cell abnormalities, the association of vascular engorgement and oedema with rod cell abnormalities, and the association of vascular leakage with rod cell abnormalities. Specifically, ischemia caused by vessel occlusion changes the rod cell density, which is significantly reduced in the ischemic region compared with normally perfused vessel regions; vascular occlusion may cause abnormal changes in the vascular branches, and rod cell abnormalities occur with high probability in areas where vascular branches are missing or abnormally dilated; in areas of engorgement and oedema, the morphology and arrangement of the rod cells may be distorted and altered; and vascular occlusion can damage the vessel wall and cause blood to leak and form haemorrhages, and these areas of vascular leakage can overlap with abnormal rod cell areas, manifesting as abnormal light reflection or morphological changes.
By performing shape analysis processing on the pre-processed dataset, we can identify features associated with retinal vascular occlusion disease. And then, performing feature selection and feature dimension reduction processing on the extracted features. The purpose of feature selection is to select the most representative and distinguishing features from all extracted features, reducing the effects of redundancy and noise. Feature subsets closely related to the progression of retinal vascular occlusion disease can be screened by feature selection. In the feature dimension reduction process, we perform dimension reduction on selected features to reduce the dimensions of the features and preserve most of the information. Preferably, the dimension reduction method includes principal component analysis, linear discriminant analysis, and the like. Feature representation can be further simplified through feature dimension reduction, calculation complexity is reduced, and efficiency and performance of the classification model are improved.
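The selection and dimension-reduction stage might look like the following scikit-learn sketch; ANOVA F-score selection and PCA are given here only as examples of the feature selection and dimension reduction mentioned above, and the feature/component counts are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

def representative_features(x, y, k_best=32, n_components=10):
    """Sketch of the selection + dimensionality-reduction part of step S320.

    x : (n_samples, n_features) preprocessed feature matrix
    y : (n_samples,) rod-cell abnormality labels
    """
    # Feature selection: keep the k features most associated with the labels.
    selected = SelectKBest(score_func=f_classif,
                           k=min(k_best, x.shape[1])).fit_transform(x, y)
    # Dimension reduction: project onto the leading principal components.
    reduced = PCA(n_components=min(n_components, selected.shape[1])).fit_transform(selected)
    return reduced
```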
Step S330, performing model construction and training on a preset convolutional neural network model according to the representative feature set, and obtaining the rod cell abnormality classification model through back-propagation and gradient descent optimization.
A convolutional neural network is a powerful deep learning model well suited to image data processing. By adding components such as convolutional layers, pooling layers and fully connected layers to the network, a convolutional neural network model suitable for the rod cell abnormality classification task can be constructed. The model is then trained with the representative feature set: the representative feature set is used as input data and matched with the corresponding rod cell abnormality labels, and the weights and biases of the model are adjusted with back-propagation and a gradient descent optimization algorithm, so that the model learns to predict rod cell abnormalities accurately.
Specifically, by inputting a representative feature set into a convolutional neural network model, the model will gradually extract features and classify through a series of convolutional operations, nonlinear activation functions, pooling operations, and the like.
In the training process, the model calculates a loss function according to the difference between the real label in the training data and the model prediction, and transmits the error back to the network layer by using a back propagation algorithm, so that the parameters of the model are updated. Through repeated iterative training, parameters of the model are continuously optimized, so that the abnormal rod cells can be more accurately classified. The gradient descent algorithm in the optimization process can help the model find a better combination of weights and biases in the parameter space to minimize the loss function. The obtained abnormal classification model of the video rod cells has better performance after model construction and training treatment, and can be used for classifying and diagnosing new retina images.
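For concreteness, a small PyTorch sketch of a 1-D convolutional classifier over the representative feature vectors and its back-propagation/gradient-descent training loop is given below; the architecture, layer sizes and optimizer are illustrative assumptions rather than the patent's prescribed network.

```python
import torch
import torch.nn as nn

class RodCellClassifier(nn.Module):
    """Sketch of step S330: a small 1-D CNN over the representative features."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                 # x: (batch, n_features)
        return self.net(x.unsqueeze(1))   # add a channel dimension

def train(model, loader, epochs=20, lr=1e-3):
    """Back-propagation + gradient-descent training with cross-entropy loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for features, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(features), labels)   # compare prediction to label
            loss.backward()                           # back-propagate the error
            opt.step()                                # gradient-based update
    return model
```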
Step S400, classifying the first information according to the abnormal classification model of the rod cells to obtain a classification result, wherein the classification result comprises the abnormal types of the rod cells and the corresponding abnormal degrees.
The type of abnormality in this step indicates a specific abnormality of the rod cells, such as vascular occlusion or vascular inflammation. The degree of abnormality reflects the severity of the abnormality and may be expressed quantitatively or qualitatively, for example as a mild, moderate or severe grade or as a related numerical indicator. By classifying the first information, abnormal rod cells can be identified quickly and accurately from a large amount of data, providing a reliable reference and decision support for doctors or related professionals. Step S400 includes step S410, step S420, step S430 and step S440.
Step S410, classifying the input first information according to the abnormal classification model of the rod cells, classifying the first information into corresponding abnormal types of the rod cells according to the learned characteristics and modes of the model, and obtaining a preliminary classification result.
In this step, the input first information is initially classified into a specific type related to the abnormal rod cell, and a basis is provided for subsequent further analysis and diagnosis.
And step S420, constructing and obtaining a lesion model according to the ophthalmic clinical examination data of the patient to be detected in the first information and the pathological basis of the retinal vascular occlusion disease.
The ophthalmic clinical examination data in this step may include image data acquired by various examination means such as fundus photography, fundus imaging and OCT (optical coherence tomography). These data provide visual information about rod cell abnormalities, such as areas of vascular occlusion, vasodilation and bleeding. In constructing the pathological basis, the pathological processes of retinal vascular occlusion disease, including the mechanism of vascular occlusion, pathological changes of the vessel wall, thrombosis, and ischemia-reperfusion injury, need to be studied in depth, together with pathological features such as the changes in blood flow, tissue hypoxia and cell injury caused by vascular occlusion.
In particular, rod cell abnormalities may lead to injury and inflammatory response of the vascular intima, which in turn contributes to thrombosis or pathological changes in the vessel wall; reperfusion injury may occur when blood flow is reperfusion into the ischemic area, resulting in additional injury and inflammatory response to rod cells; certain rod cell abnormalities may lead to vasodilation, bending and branching abnormalities, and these vascular changes may be associated with hemodynamic changes and metabolic disturbances caused by rod cell abnormalities; rod cell abnormalities may increase the vulnerability of the vessel wall, leading to vessel rupture and bleeding. Based on ophthalmic clinical examination data and pathological basis, a pathological model is constructed to describe the characteristics of retinal vascular occlusion diseases and simulate the change rule and characteristics of abnormal types of rod cells in the disease development process.
By constructing the lesion model, the pathological changes of the eyes of the patient to be detected can be more accurately understood and described. This provides an important basis for further diagnostic and therapeutic decisions. Meanwhile, the analysis result based on the lesion model can be combined with clinical observation and medical knowledge, so that more comprehensive information is provided for disease evaluation and prognosis of patients.
Step S430, inputting the abnormal type of the rod cell into the lesion model, and verifying the abnormal type result to obtain a verification result, wherein the verification result comprises abnormality degree information corresponding to the abnormal type of the rod cell.
In the step, the lesion model verifies the input preliminary classification result, and compares the matching degree between the input result and the pathological features and modes learned by the model. If the preliminary classification results are consistent with the pathological process simulated in the lesion model, the results may be considered to have higher reliability and accuracy. The verification result will provide information about the correctness and credibility of the anomaly type.
During the verification process, the lesion model also provides abnormality degree information corresponding to the abnormality type. These information reflect the severity or extent of progression of rod cell abnormalities. By simulation and analysis of the lesion model, a quantitative or qualitative indicator of the degree of abnormality can be obtained, thereby evaluating and characterizing the rod cell abnormality in more detail.
Step S440, the preliminary classification result and the verification result are combined to obtain a final classification result.
The final classification result in this step provides a comprehensive assessment and determination as to the type and extent of abnormalities of the rod cells of the patient to be tested. Preferably, feedback iterations are performed based on differences and inconsistencies between the preliminary classification results and the validation results. By analyzing the inconsistent condition, the defects of the model or the problems of verification data can be checked, and corresponding adjustment and correction can be performed. This iterative process may improve the accuracy and reliability of the final classification result.
Example 2:
as shown in fig. 2, the present embodiment provides a rod cell abnormality classification device for retinal vascular occlusion diseases, the device comprising:
an acquisition module 1 for acquiring first information including ophthalmic clinical examination data of a patient to be detected and second information including ophthalmic clinical examination data of patients with historical retinal vascular occlusion disease.
And a fusion module 2 for performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set, wherein the fusion data set comprises spectral features of the rod cells and time variation features in the disease development process.
And the construction module 3 is used for constructing and obtaining a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model.
The classification module 4 is configured to perform classification processing on the first information according to the rod cell abnormality classification model to obtain a classification result, where the classification result includes an abnormality type of the rod cell and a corresponding abnormality degree.
In one embodiment of the present disclosure, the fusion module 2 includes:
the first processing unit 21 is configured to perform fundus image analysis according to the optical sensing signal record in the second information, and obtain first feature data by performing quantification processing on the optical reflection feature of the rod cell, where the first feature data includes a brightness distribution feature and a morphological feature of the rod cell.
The second processing unit 22 is configured to perform time-series analysis according to the dynamic response data in the second information, and obtain second feature data through feature extraction and pattern recognition processing, where the second feature data includes a response amplitude feature and a time variation pattern of the rod cell.
And a third processing unit 23, configured to perform fusion analysis on the first feature data, the second feature data, and a preset ophthalmic data processing mathematical model, to obtain a fusion data set.
In one embodiment of the present disclosure, the third processing unit 23 includes:
the first clustering unit 231 is configured to perform super-pixel clustering according to the brightness distribution feature and the morphological feature in the first feature data, and obtain a first clustering result by locating and grouping the regions of the rod cells.
And the second clustering unit 232 is configured to perform a time sequence clustering analysis according to the response amplitude feature and the time variation pattern in the second feature data, and obtain a second clustering result by calculating the similarity between the two time sequences of the response amplitude feature and the time variation pattern and screening out similar time sequence sub-groups.
The first construction unit 233 is configured to perform a graph construction process according to the first clustering result and the second clustering result to obtain a node graph model, where the node graph model includes nodes and edges, the nodes represent clustered regions or time sequences, and the edges represent correlations between clustered regions or time sequences.
A fourth processing unit 234, configured to embed the nodes and edges of the node graph model according to a preset random walk graph embedding mathematical model, and to obtain a low-dimensional vector representation of the nodes and edges by minimizing the difference between the similarity matrix in the original space and the Euclidean distance matrix in the low-dimensional space.
And a fifth processing unit 235, configured to obtain a fused dataset according to the nodes and edges in the low-dimensional vector representation through weighted fusion processing of the graph attention mechanism, where the fused dataset includes discriminant features that comprehensively consider brightness distribution features, morphological features and response amplitude features of the rod cells.
In one embodiment of the present disclosure, the first construction unit 233 includes:
the sixth processing unit 2331 obtains a node set by using the region obtained by each cluster as a node based on the first clustering result, and each node in the node set represents a specific region of abnormal rod cells.
And a seventh processing unit 2332, configured to calculate the similarity between the different time sequences based on the second classification result, and select and connect the time sequences with the similarity greater than the preset similarity to obtain an edge set, where each edge in the edge set represents the relevance between the different time sequences.
And a second construction unit 2333, configured to combine the node set and the edge set to construct a node graph model, where the node graph model is a graph-based data structure representation.
In one embodiment of the present disclosure, the build module 3 includes:
an eighth processing unit 31 is configured to perform cleaning, normalization and enhancement processing according to the fused data set, so as to obtain a preprocessed data set.
A ninth processing unit 32, configured to perform shape analysis processing on the preprocessed data set according to a preset image processing mathematical model and a preset detection rule, extract features corresponding to the progression of the retinal vascular occlusion disease, and obtain a representative feature set from the extracted features through feature selection and feature dimension reduction processing.
And a third construction unit 33, configured to perform model construction and training processing on a preset convolutional neural network model according to the representative feature set, and obtain a rod cell abnormality classification model through back propagation and gradient descent optimization processing.
In one embodiment of the present disclosure, the classification module 4 includes:
the first classification unit 41 is configured to perform classification processing on the input first information according to the abnormal classification model of the rod cells, and classify the first information into corresponding abnormal types of the rod cells according to the learned features and modes of the model to obtain a preliminary classification result.
A fourth construction unit 42 for constructing a lesion model based on the ophthalmic clinical examination data of the patient to be examined and the pathological basis of the retinal vascular occlusion disease in the first information.
The first verification unit 43 is configured to input the abnormal type of the rod cell into the lesion model, and verify the abnormal type result to obtain a verification result, where the verification result includes abnormality degree information corresponding to the abnormal type of the rod cell.
A tenth processing unit 44, configured to integrate the preliminary classification result and the verification result to obtain a final classification result.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (4)

1. A method for classifying abnormalities in rod cells in retinal vascular occlusion disease comprising:
acquiring first information and second information, wherein the first information comprises ophthalmic clinical examination data of a patient to be detected, and the second information comprises ophthalmic clinical examination data of a patient with historical retinal vascular occlusion disease;
performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set, wherein the fusion data set comprises frequency spectrum features of the rod cells and time variation features in the disease development process;
constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model;
classifying the first information according to the rod cell abnormality classification model to obtain a classification result, wherein the classification result comprises the rod cell abnormality type and the corresponding abnormality degree;
wherein the performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set comprises:
performing fundus image analysis according to the optical perception signal record in the second information, and quantifying the optical reflection characteristics of the rod cells to obtain first characteristic data, wherein the first characteristic data comprises brightness distribution characteristics and morphological characteristics of the rod cells;
performing time series analysis according to the dynamic response data in the second information, and obtaining second characteristic data through feature extraction and pattern recognition processing, wherein the second characteristic data comprises response amplitude characteristics and time variation patterns of the rod cells;
performing fusion analysis on the first characteristic data and the second characteristic data according to a preset ophthalmic data processing mathematical model to obtain a fusion data set;
wherein the performing fusion analysis on the first characteristic data and the second characteristic data according to the preset ophthalmic data processing mathematical model to obtain a fusion data set comprises:
performing super-pixel clustering processing according to the brightness distribution characteristics and the morphological characteristics in the first characteristic data, and obtaining a first clustering result by locating and grouping the regions of the rod cells;
performing time series clustering analysis according to the response amplitude characteristics and the time variation patterns in the second characteristic data, and obtaining a second clustering result by calculating the similarity between time series and screening similar time series sub-groups;
performing graph construction processing according to the first clustering result and the second clustering result to obtain a node graph model, wherein the node graph model comprises nodes and edges, the nodes represent clustered regions or time series, and the edges represent correlations among the clustered regions or time series;
performing embedding processing on the nodes and edges in the node graph model according to a preset random walk graph embedding mathematical model, and obtaining low-dimensional vector representations of the nodes and edges by minimizing the difference between the similarity matrix in the original space and the Euclidean distance matrix in the low-dimensional space;
obtaining a fusion data set from the low-dimensional vector representations of the nodes and edges through weighted fusion processing of a graph attention mechanism, wherein the fusion data set comprises discriminative features that comprehensively consider the brightness distribution characteristics, morphological characteristics and response amplitude characteristics of the rod cells;
wherein the super-pixel clustering processing according to the brightness distribution characteristics and the morphological characteristics in the first characteristic data comprises the following steps:
extracting the brightness distribution characteristics and morphological characteristics of the rod cells from the first characteristic data;
dividing the fundus image into a group of super-pixel regions with similar characteristics by using a super-pixel algorithm;
performing cluster analysis on the generated super-pixel regions, and classifying similar super-pixels into the same category;
determining the super-pixel regions containing the rod cells according to the clustering result, and locating and grouping the regions;
wherein the time series clustering analysis comprises the following steps:
for the response amplitude characteristics, the Pearson correlation coefficient is used to measure the strength of the linear relationship between two time series, and the Pearson correlation coefficient is calculated as:

$$\rho_{xy} = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}$$

wherein i represents the index in the time series; $x_i$ and $y_i$ are the response amplitude feature vectors of the two time series; n represents the dimension of the feature vector; $\bar{x}$ and $\bar{y}$ represent the means of $x$ and $y$ respectively;
for the time variation pattern, a dynamic time warping algorithm is adopted to obtain the time offset and rate of variation between sequences, and the similarity is calculated as:

$$D(x,y) = w_{1}\,\rho_{xy} + w_{2}\,d_{\mathrm{DTW}}(x,y)$$

wherein $D(x,y)$ represents the distance or similarity measure between the two time series x and y; $\rho_{xy}$ is the Pearson correlation coefficient of the response amplitude characteristics; $d_{\mathrm{DTW}}(x,y)$ is the dynamic time warping distance of the time variation pattern; $w_{1}$ and $w_{2}$ are weight coefficients.
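Purely as an illustrative numerical sketch of the two similarity terms recited above (not part of the claim; the weighted combination mirrors the formula as reconstructed here, and the weight values are placeholders), in Python:

```python
import numpy as np

def pearson(x, y) -> float:
    """Pearson correlation between two response-amplitude feature vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

def dtw_distance(x, y) -> float:
    """Dynamic time warping distance between two time-variation sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def combined_measure(x, y, w1: float = 0.5, w2: float = 0.5) -> float:
    """Weighted combination D(x, y) of the Pearson term and the DTW term."""
    return w1 * pearson(x, y) + w2 * dtw_distance(x, y)
```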
2. The method according to claim 1, wherein constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model comprises:
performing cleaning, normalization and enhancement processing on the fusion data set to obtain a preprocessed data set;
performing shape analysis processing on the preprocessed data set according to a preset image processing mathematical model and a preset detection rule, extracting features corresponding to the development process of the retinal vascular occlusion disease, and obtaining a representative feature set from the extracted features through feature selection and feature dimension reduction processing;
and performing model construction and training processing on a preset convolutional neural network model according to the representative feature set, and obtaining a rod cell abnormality classification model through back propagation and gradient descent optimization processing.
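One plausible realization of the preprocessing, feature selection and dimension reduction steps above (the use of scikit-learn, the variance threshold and the number of retained components are assumptions made only for illustration, not specified by the claim):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA

def build_representative_features(fused: np.ndarray, n_components: int = 16) -> np.ndarray:
    """Cleaning and normalization, then feature selection and dimension reduction."""
    fused = fused[~np.isnan(fused).any(axis=1)]                     # cleaning: drop incomplete rows
    scaled = StandardScaler().fit_transform(fused)                  # normalization
    selected = VarianceThreshold(1e-3).fit_transform(scaled)        # drop near-constant features
    # dimension reduction; n_components is assumed to be <= the number of retained features
    return PCA(n_components=n_components).fit_transform(selected)
```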
3. The method according to claim 1, wherein classifying the first information according to the rod cell abnormality classification model to obtain a classification result comprises:
classifying the input first information according to the rod cell abnormality classification model, classifying the first information into the corresponding rod cell abnormality type through the features and patterns learned by the model, to obtain a preliminary classification result;
constructing a lesion model according to the ophthalmic clinical examination data of the patient to be detected in the first information and the pathological basis of the retinal vascular occlusion disease;
inputting the rod cell abnormality type into the lesion model, and verifying it to obtain a verification result, wherein the verification result comprises abnormality degree information corresponding to the rod cell abnormality type;
and integrating the preliminary classification result and the verification result to obtain a final classification result.
4. A rod cell abnormality classification device for retinal vascular occlusion disease, comprising:
an acquisition module for acquiring first information and second information, wherein the first information comprises ophthalmic clinical examination data of a patient to be detected, and the second information comprises ophthalmic clinical examination data of a patient with historical retinal vascular occlusion disease;
a fusion module for performing feature fusion processing according to the optical perception signal record and the dynamic response data in the second information to obtain a fusion data set, wherein the fusion data set comprises frequency spectrum features of the rod cells and time variation features in the disease development process;
a construction module for constructing a rod cell abnormality classification model according to the fusion data set and a preset machine learning mathematical model;
a classification module for classifying the first information according to the rod cell abnormality classification model to obtain a classification result, wherein the classification result comprises the rod cell abnormality type and the corresponding abnormality degree;
wherein, the fusion module includes:
a first processing unit for performing fundus image analysis according to the optical perception signal record in the second information, and obtaining first characteristic data by quantifying the optical reflection characteristics of the rod cells, wherein the first characteristic data comprises brightness distribution characteristics and morphological characteristics of the rod cells;
a second processing unit for performing time series analysis according to the dynamic response data in the second information, and obtaining second characteristic data through feature extraction and pattern recognition processing, wherein the second characteristic data comprises response amplitude characteristics and time variation patterns of the rod cells;
a third processing unit for performing fusion analysis on the first characteristic data and the second characteristic data according to a preset ophthalmic data processing mathematical model to obtain a fusion data set;
wherein the third processing unit includes:
a first clustering unit for performing super-pixel clustering processing according to the brightness distribution characteristics and the morphological characteristics in the first characteristic data, and obtaining a first clustering result by locating and grouping the regions of the rod cells;
a second clustering unit for performing time series clustering analysis according to the response amplitude characteristics and the time variation patterns in the second characteristic data, and obtaining a second clustering result by calculating the similarity between time series and screening similar time series sub-groups;
a first construction unit for performing graph construction processing according to the first clustering result and the second clustering result to obtain a node graph model, wherein the node graph model comprises nodes and edges, the nodes represent clustered regions or time series, and the edges represent correlations among the clustered regions or time series;
a fourth processing unit for performing embedding processing on the nodes and edges in the node graph model according to a preset random walk graph embedding mathematical model, and obtaining low-dimensional vector representations of the nodes and edges by minimizing the difference between the similarity matrix in the original space and the Euclidean distance matrix in the low-dimensional space;
a fifth processing unit for obtaining a fusion data set from the low-dimensional vector representations of the nodes and edges through weighted fusion processing of a graph attention mechanism, wherein the fusion data set comprises discriminative features that comprehensively consider the brightness distribution characteristics, morphological characteristics and response amplitude characteristics of the rod cells;
wherein the first clustering unit includes:
extracting the brightness distribution characteristics and morphological characteristics of the rod cells from the first characteristic data;
dividing the fundus image into a group of super-pixel regions with similar characteristics by using a super-pixel algorithm;
performing cluster analysis on the generated super-pixel regions, and classifying similar super-pixels into the same category;
determining the super-pixel regions containing the rod cells according to the clustering result, and locating and grouping the regions;
wherein the time series clustering analysis comprises the following steps:
for the response amplitude characteristics, the Pearson correlation coefficient is used to measure the strength of the linear relationship between two time series, and the Pearson correlation coefficient is calculated as:

$$\rho_{xy} = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}$$

wherein i represents the index in the time series; $x_i$ and $y_i$ are the response amplitude feature vectors of the two time series; n represents the dimension of the feature vector; $\bar{x}$ and $\bar{y}$ represent the means of $x$ and $y$ respectively;
for the time variation pattern, a dynamic time warping algorithm is adopted to obtain the time offset and rate of variation between sequences, and the similarity is calculated as:

$$D(x,y) = w_{1}\,\rho_{xy} + w_{2}\,d_{\mathrm{DTW}}(x,y)$$

wherein $D(x,y)$ represents the distance or similarity measure between the two time series x and y; $\rho_{xy}$ is the Pearson correlation coefficient of the response amplitude characteristics; $d_{\mathrm{DTW}}(x,y)$ is the dynamic time warping distance of the time variation pattern; $w_{1}$ and $w_{2}$ are weight coefficients.
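Finally, as a compact sketch of the node-graph construction, embedding and attention-weighted fusion recited in claims 1 and 4 (networkx is used for the graph, scikit-learn MDS stands in for the random walk embedding because it likewise fits low-dimensional coordinates whose Euclidean distances approximate the original dissimilarities, and a softmax over weighted node degrees replaces a learned graph attention layer; all three substitutions are assumptions made only for illustration):

```python
import numpy as np
import networkx as nx
from sklearn.manifold import MDS

def fuse_clusters(cluster_features, similarity):
    """Build a node graph over clustered regions/time series, embed it, and fuse with attention-like weights."""
    n = len(cluster_features)
    G = nx.Graph()
    G.add_nodes_from(range(n))  # one node per clustered region or time series
    sim = np.array([[similarity(a, b) for b in cluster_features] for a in cluster_features])
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > 0:  # edge weight = correlation between the two clusters
                G.add_edge(i, j, weight=float(sim[i, j]))

    # Stand-in for the random walk embedding: fit low-dimensional coordinates whose
    # Euclidean distances approximate the dissimilarities derived from the similarity matrix.
    dissimilarity = np.max(sim) - sim
    np.fill_diagonal(dissimilarity, 0.0)
    emb = MDS(n_components=8, dissimilarity="precomputed").fit_transform(dissimilarity)

    # Simplified attention: softmax over weighted node degrees as fusion weights.
    degrees = np.array([G.degree(i, weight="weight") for i in range(n)], dtype=float)
    attn = np.exp(degrees - degrees.max())
    attn /= attn.sum()
    return (attn[:, None] * emb).sum(axis=0)  # fused, fixed-length feature vector
```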
Publications (2)

CN116738352A (publication), 2023-09-12
CN116738352B (granted publication), 2023-12-22

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant