CN113129267A - OCT image detection method and system based on retina hierarchical data - Google Patents
- Publication number
- CN113129267A (application CN202110301302.2A)
- Authority
- CN
- China
- Prior art keywords
- hypergraph
- data
- oct image
- network
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012—Biomedical image inspection
- A61B3/0025—Operational features of eye-testing apparatus characterised by electronic signal processing, e.g. eye models
- A61B3/102—Objective types for optical coherence tomography [OCT]
- A61B3/12—Objective types for looking at the eye fundus, e.g. ophthalmoscopes
- G06F18/23—Clustering techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V10/44—Local feature extraction by analysis of parts of the pattern
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
The invention discloses an OCT image detection method and system based on retinal hierarchical data. First, a convolutional neural network (the OCT image feature extraction network) segments the OCT image and extracts feature parameters; a hypergraph is then constructed from these parameters; finally, a dynamic hypergraph convolutional network diagnoses the patient's fundus disease. By combining image features and physiological data in one hypergraph, the invention helps uncover relationships among multi-modal data, as well as deeper relationships between the features and labels within a single modality. Because the hypergraph matrix is constructed dynamically, the network parameters can be optimized automatically.
Description
Technical Field
The invention belongs to the field of image processing, and in particular relates to a method and system for detecting fundus disease from retinal layer data in optical coherence tomography (OCT) images.
Background
Optical coherence tomography (OCT) is an imaging technique that has developed rapidly over the last decade. Based on the principle of low-coherence interferometry, it detects the back-reflected or back-scattered components of incident low-coherence light at different depths of biological tissue, and scans them into two-dimensional or three-dimensional structural images. OCT permits in vivo viewing, axial sectioning, and measurement of the posterior segment structures of the eye, including the retina, retinal nerve fiber layer, macula, and optic disc, and is particularly useful as a diagnostic device to aid the detection and management of eye diseases, including but not limited to macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.
In clinical practice, when diagnosing fundus disease an ophthalmologist usually first performs a fundus examination to obtain fundus images of the patient and assess pathological changes. Fundus images are an important diagnostic tool: changes in their structural features reflect the patient's condition well. Among these structures, the fundus blood vessels are the most important and stable. Their morphological properties, such as artery-vein ratio, width, length, branching pattern, and tortuosity, can be used to diagnose various ophthalmic and cardiovascular diseases such as hypertension, diabetes, arteriosclerosis, and choroidal neovascularization. At present, ophthalmologists usually diagnose by directly observing the patient's fundus images on the basis of experience. However, fundus images are complex, observation is difficult, key details are easily missed, and diagnosis suffers as a result. At the same time, manual reading demands considerable effort and increases the ophthalmologist's workload.
Dynamic hypergraph construction (the dynamic hypergraph matrix construction network) follows: Jiang, J., Wei, Y., Feng, Y., Cao, J., & Gao, Y. (2019, August). Dynamic hypergraph neural networks. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
Disclosure of Invention
The key technical points of the method are: constructing a hypergraph from optical coherence tomography images and baseline physiological indicators, training a hypergraph neural network, and thereby realizing automatic detection from fundus images.
An OCT image detection method based on retinal hierarchical data comprises the following steps:
S1: data set acquisition and preprocessing.
S2: OCT image feature extraction.
S3: dynamic hypergraph matrix construction.
S4: hypergraph convolutional network construction and training.
S5: automatic detection of multi-modal data.
An OCT image detection system based on retinal hierarchical data comprises a data preprocessing module, a feature extraction module, a hypergraph matrix construction module, and a hypergraph convolution module.
The data preprocessing module normalizes the local contrast and brightness of each OCT image, crops it to a uniform size, and augments the image data through rotation, translation, and horizontal flipping.
The feature extraction module performs retinal layer segmentation on the OCT image with the trained feature extraction network, applies morphological operations to the segmented image, and extracts structural feature parameters.
The hypergraph matrix construction module uses a dynamic hypergraph matrix construction network to run one KNN query centred on each non-binary feature parameter, selecting the K objects closest to the centre so that these K+1 objects lie on the same hyperedge, thereby generating a basic hyperedge; adjacent hyperedges are then expanded by a K-means clustering algorithm. Binary feature parameters form their corresponding hyperedges directly from 0 and 1. The data of all feature parameters are concatenated into a hypergraph matrix whose last column is the input fundus disease label.
The hypergraph convolution module adopts a hypergraph convolutional network comprising vertex convolution and hyperedge convolution, which aggregate features between vertices and hyperedges respectively; the trained network is used to detect new data automatically.
The invention has the following beneficial effects:
The invention first segments the OCT image with a convolutional neural network (the OCT image feature extraction network) and extracts feature parameters, then constructs a hypergraph from those parameters, and finally diagnoses the patient's fundus disease with a dynamic hypergraph convolutional network. Combining image and physiological features in one hypergraph makes it easier to discover relationships among multi-modal data, as well as deeper relationships between the features and labels within a single modality. Because the hypergraph matrix is constructed dynamically, the network parameters can be optimized automatically.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a dynamic hypergraph matrix construction according to an embodiment of the invention;
FIG. 3 is a diagram of an OCT retinal segmentation in accordance with an embodiment of the invention;
FIG. 4 is a schematic diagram of an OCT image feature extraction network according to an embodiment of the invention.
Detailed Description
Aiming at the problems in current fundus disease diagnosis, the invention provides an OCT-image-based fundus disease detection technique with high accuracy and low labor cost, overcoming the difficulty of observation, the high labor cost, and the low accuracy of traditional diagnostic methods.
A hypergraph consists of nodes and hyperedges; each hyperedge represents a relationship, and the hypergraph (incidence) matrix encodes the relationships between nodes. In the hypergraph structure used here, each patient is a node and each feature is a hyperedge: the matrix entry for a patient who has the feature is 1, otherwise 0. Among the features, binary features represent their hyperedges directly with 0 and 1, while the hyperedges of the remaining features are built with a KNN algorithm. After the hypergraph is built from OCT images with fundus disease labels, it is fed into a dynamic hypergraph convolutional network, so that the optimal hypergraph model can be updated in real time whenever a new patient is added.
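As a concrete illustration of this incidence-matrix encoding, the following sketch builds a tiny hypergraph matrix in NumPy and reads off vertex and hyperedge degrees. The patients, hyperedges, and 0/1 entries are invented toy values, not data from the patent:

```python
import numpy as np

# Toy incidence matrix H for 4 patients (nodes) and 3 hyperedges.
# H[v, e] = 1 when patient v lies on hyperedge e, else 0.
# Hyperedge 0 could be a binary feature (e.g. "pigment epithelium elevated");
# hyperedges 1-2 stand in for KNN neighbourhood hyperedges.
H = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [0, 1, 1],
])

d_v = H.sum(axis=1)  # vertex degrees: how many hyperedges each patient lies on
d_e = H.sum(axis=0)  # hyperedge degrees: how many patients each hyperedge holds

print(d_v.tolist())  # [2, 2, 2, 2]
print(d_e.tolist())  # [2, 3, 3]
```

These degree vectors are exactly the quantities used to normalize the hypergraph convolution later in the pipeline.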
As shown in FIG. 1, the technical scheme adopted to achieve the object of the invention comprises the following steps:
s1: data set acquisition and preprocessing.
S1-1: Acquire m OCT images from each fundus disease patient and each normal subject, together with their physiological data, as the data set.
S1-2: Preprocess the OCT images acquired in S1-1 with the data preprocessing module: normalize the local contrast and brightness of each OCT image, then crop it to a uniform size, and augment the image data through rotation, translation, and horizontal flipping. Split the paired image and physiological data into a training set and a test set.
S2: and (4) extracting the characteristics of the OCT image.
S2-1: and constructing an OCT image feature extraction network. The network contains 5 convolutional layers, 3 pooling layers, and 2 local response normalization processes. Sequentially carrying out operations of convolution twice, local response normalization and pooling, and then generating a feature vector through three convolution layers and one pooling layer;
FIG. 4 is a schematic diagram of the OCT image feature extraction network according to an embodiment of the present invention.
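The patent fixes only the layer counts (5 convolutional layers, 3 pooling layers, 2 local response normalizations), not kernel sizes, strides, or input resolution. Under AlexNet-like hyperparameters, which are purely an assumption here, the spatial sizes would propagate as follows:

```python
def conv_out(n, k, s=1, p=0):
    """Spatial size after a square convolution/pooling layer."""
    return (n + 2 * p - k) // s + 1

# Assumed AlexNet-like stack matching the patent's layer counts:
# (conv, LRN, pool) x 2, then conv x 3, pool. LRN preserves size.
n = 227
n = conv_out(n, 11, 4)      # conv1 -> 55
n = conv_out(n, 3, 2)       # pool1 -> 27
n = conv_out(n, 5, 1, 2)    # conv2 -> 27
n = conv_out(n, 3, 2)       # pool2 -> 13
n = conv_out(n, 3, 1, 1)    # conv3 -> 13
n = conv_out(n, 3, 1, 1)    # conv4 -> 13
n = conv_out(n, 3, 1, 1)    # conv5 -> 13
n = conv_out(n, 3, 2)       # pool3 -> 6
print(n)  # 6
```

Whatever the real hyperparameters, the final pooled map is flattened into the feature vector that feeds the segmentation and feature extraction stages.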
S2-2: and inputting the OCT image data of the training set in the S1 into an OCT image feature extraction network of an image feature extraction module for retina hierarchical segmentation.
S2-3: and performing morphological operation on the segmented OCT image, and extracting structural characteristic parameters, wherein the specific characteristic parameters are as follows.
After obtaining the fundus OCT image (as shown in fig. 3), extracting structural information thereon; the method specifically comprises the following steps:
(1) superior thickness S of the optic papillary nerve fiber layer;
(2) inferior thickness I of the optic papillary nerve fiber layer;
(3) nasal thickness N of the optic papillary nerve fiber layer;
(4) temporal thickness T of the optic papillary nerve fiber layer;
(5) 0° retinal nerve fiber layer thickness h0;
(6) 30° retinal nerve fiber layer thickness h1;
(7) 60° retinal nerve fiber layer thickness h2;
(8) 90° retinal nerve fiber layer thickness h3;
(9) 120° retinal nerve fiber layer thickness h4;
(10) 150° retinal nerve fiber layer thickness h5;
(11) foveal retinal thickness h6;
(12) retinal nerve fiber layer reflectivity f0;
(13) retinal pigment epithelium thickness h7;
(14) retinal pigment epithelium reflectivity f1;
(15) retinal surface smoothness d;
(16) detachment angle α;
(17) whether the neurosensory layer is elevated;
(18) whether the pigment epithelium layer is elevated;
(19) liquid cavity location;
(20) whether the underlying tissue shows a shadowing effect;
(21) low-reflectivity cavity location;
(22) whether a columnar connection exists in the cavity;
(23) whether a high-reflectivity signal exists in front of the retina;
(24) whether a high-reflectivity signal exists within the retina;
(25) whether a high-reflectivity signal exists under the retina;
(26) whether a high-reflectivity signal exists under the pigment epithelium;
(27) whether a high-reflectivity signal exists in the deep retina.
in addition to the above-mentioned 27 characteristic parameters, there are 8 physiological data parameters, each being
(1) Age: age (age)
(2) And (3) myopia degree: degree
(3) Astigmatism degree: degree2
(4) Height: height
(5) Weight: weight
(6) Blood pressure: pressure of
(7) Sex: sex
(8) The medical history: history
S3: and constructing a dynamic hypergraph matrix.
The hypergraph matrix is constructed by the dynamic hypergraph matrix construction network in the hypergraph matrix construction module, using the structural feature parameters obtained in S2 together with the physiological data parameters of the corresponding training set. The specific process is as follows:
S3-1: Take n OCT images from each fundus disease patient and each normal subject, and obtain the corresponding structural and physiological feature parameters;
S3-2: Centre one KNN query on each non-binary feature parameter, select the K objects closest to the centre, and place these K+1 objects on the same hyperedge to generate a basic hyperedge; then expand adjacent hyperedges with a K-means clustering algorithm;
S3-3: Binary feature parameters form their corresponding hyperedges directly from 0 and 1;
S3-4: Concatenate the data of all feature parameters into a hypergraph matrix whose last column is the input fundus disease label.
FIG. 2 is a schematic diagram of a dynamic hypergraph matrix construction according to an embodiment of the invention;
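A minimal sketch of the basic-hyperedge step S3-2 follows. It assumes per-feature absolute-difference distances and omits the K-means expansion of adjacent hyperedges, so it illustrates the idea rather than the patent's exact construction:

```python
import numpy as np

def knn_hyperedges(features, k=2):
    """For each non-binary feature column, centre a KNN query on every node:
    the node plus its k nearest neighbours (k+1 nodes in total) form one
    basic hyperedge. Returns the resulting incidence matrix H."""
    n_nodes, n_feats = features.shape
    edges = []
    for f in range(n_feats):
        col = features[:, f]
        for v in range(n_nodes):
            dist = np.abs(col - col[v])
            nearest = np.argsort(dist)[:k + 1]   # includes v itself (distance 0)
            e = np.zeros(n_nodes, dtype=int)
            e[nearest] = 1
            edges.append(e)
    return np.stack(edges, axis=1)

# One non-binary feature (e.g. a layer thickness) for 4 patients; values invented.
feats = np.array([[0.1], [0.2], [0.9], [1.0]])
H = knn_hyperedges(feats, k=1)
print(H.shape)                  # (4, 4): one hyperedge per centre node
print(H.sum(axis=0).tolist())   # [2, 2, 2, 2]: each hyperedge holds k+1 nodes
```

Binary features would be appended to H directly as 0/1 columns, and the label column is concatenated last, giving the matrix of S3-4.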
s4: a hypergraph convolutional network is established and trained.
Construct a hypergraph convolutional network comprising vertex convolution and hyperedge convolution, which serve as the hypergraph convolution module and aggregate features between vertices and hyperedges respectively. Train the hypergraph convolutional network with the hypergraph matrix constructed in S3.
Segment the retina of the test-set data and extract features through the OCT image feature extraction network to obtain the test-set structural feature parameters; construct a hypergraph matrix from these parameters and the corresponding physiological data parameters, test the trained hypergraph convolutional network, and tune it according to the test results to obtain the final hypergraph convolutional network.
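The vertex and hyperedge convolutions aggregate features between vertices and hyperedges. A common closed form for one such layer is the HGNN formulation, used here as an illustrative stand-in (with identity hyperedge weights) for the patent's dynamic vertex/hyperedge aggregation:

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer:
    X' = ReLU( Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta ),
    i.e. vertex features are gathered onto hyperedges (H^T), averaged (De^{-1}),
    and scattered back to vertices (H), with symmetric degree normalisation."""
    Dv = np.diag(1.0 / np.sqrt(H.sum(axis=1)))
    De = np.diag(1.0 / H.sum(axis=0))
    A = Dv @ H @ De @ H.T @ Dv
    return np.maximum(A @ X @ Theta, 0)   # ReLU activation

# Toy inputs: 4 patients on 3 hyperedges, 5-dim features, 2 output classes.
H = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
rng = np.random.default_rng(0)
X = rng.random((4, 5))
Theta = rng.random((5, 2))   # learnable weights (random here, trained in practice)
out = hypergraph_conv(X, H, Theta)
print(out.shape)  # (4, 2): one class-score row per patient
```

Stacking such layers, with the incidence matrix H rebuilt dynamically each time, gives the trainable hypergraph convolutional network described above.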
S5: and automatically detecting multi-modal data.
New data are detected automatically with the trained OCT image feature extraction network and the final hypergraph convolutional network, as follows:
S5-1: Acquire a new OCT image and the corresponding physiological data parameters;
S5-2: Preprocess the new OCT image in the same way as S1-2, then segment the retina and extract features through the feature extraction module to obtain the structural feature parameters;
S5-3: Splice the new data (the structural feature parameters of the new OCT image and the corresponding physiological data parameters) into the original hypergraph matrix as a new node, reconstruct the hypergraph matrix through the dynamic hypergraph matrix construction network in the hypergraph matrix construction module, and input the reconstructed matrix into the trained final hypergraph convolutional network to obtain the detection result for the new data.
An OCT image detection system based on retinal hierarchical data comprises a data preprocessing module, a feature extraction module, a hypergraph matrix construction module, and a hypergraph convolution module.
The data preprocessing module normalizes the local contrast and brightness of each OCT image, crops it to a uniform size, and augments the image data through rotation, translation, and horizontal flipping.
The feature extraction module performs retinal layer segmentation on the OCT image with the trained feature extraction network, applies morphological operations to the segmented image, and extracts structural feature parameters.
The hypergraph matrix construction module runs one KNN query centred on each non-binary feature parameter, selecting the K objects closest to the centre so that these K+1 objects lie on the same hyperedge, thereby generating a basic hyperedge; adjacent hyperedges are then expanded by a K-means clustering algorithm. Binary feature parameters form their corresponding hyperedges directly from 0 and 1. The data of all feature parameters are concatenated into a hypergraph matrix whose last column is the input fundus disease label.
The hypergraph convolution module adopts a hypergraph convolutional network comprising vertex convolution and hyperedge convolution, which aggregate features between vertices and hyperedges respectively; the trained network is used to detect new data automatically.
Claims (7)
1. An OCT image detection method based on retinal hierarchical data, characterised by comprising the following steps:
S1: data set acquisition and preprocessing;
S2: OCT image feature extraction;
S3: dynamic hypergraph matrix construction;
S4: hypergraph convolutional network construction and training;
S5: automatic detection of multi-modal test data.
2. The OCT image detection method based on retinal hierarchical data of claim 1, wherein S1 specifically operates as follows:
S1-1: acquiring m OCT images from each fundus disease patient and each normal subject, together with their physiological data, as the data set;
S1-2: preprocessing the OCT images acquired in S1-1 with the data preprocessing module, specifically normalizing the local contrast and brightness of each OCT image and then cropping it to a uniform size; augmenting the image data through rotation, translation, and horizontal flipping; and splitting the paired image and physiological data into a training set and a test set.
3. The OCT image detection method based on retinal hierarchical data of claim 2, wherein S2 specifically operates as follows:
S2-1: constructing the OCT image feature extraction network; the network comprises 5 convolutional layers, 3 pooling layers, and 2 local response normalization steps; two rounds of convolution, local response normalization, and pooling are applied in sequence, after which three convolutional layers and one pooling layer produce the feature vector;
S2-2: inputting the training-set OCT images from S1 into the OCT image feature extraction network of the feature extraction module for retinal layer segmentation;
S2-3: applying morphological operations to the segmented OCT image and extracting the structural feature parameters listed below;
after the fundus OCT image is obtained, its structural information is extracted, specifically:
(1) superior thickness S of the optic papillary nerve fiber layer;
(2) inferior thickness I of the optic papillary nerve fiber layer;
(3) nasal thickness N of the optic papillary nerve fiber layer;
(4) temporal thickness T of the optic papillary nerve fiber layer;
(5) 0° retinal nerve fiber layer thickness h0;
(6) 30° retinal nerve fiber layer thickness h1;
(7) 60° retinal nerve fiber layer thickness h2;
(8) 90° retinal nerve fiber layer thickness h3;
(9) 120° retinal nerve fiber layer thickness h4;
(10) 150° retinal nerve fiber layer thickness h5;
(11) foveal retinal thickness h6;
(12) retinal nerve fiber layer reflectivity f0;
(13) retinal pigment epithelium thickness h7;
(14) retinal pigment epithelium reflectivity f1;
(15) retinal surface smoothness d;
(16) detachment angle α;
(17) whether the neurosensory layer is elevated;
(18) whether the pigment epithelium layer is elevated;
(19) liquid cavity location;
(20) whether the underlying tissue shows a shadowing effect;
(21) low-reflectivity cavity location;
(22) whether a columnar connection exists in the cavity;
(23) whether a high-reflectivity signal exists in front of the retina;
(24) whether a high-reflectivity signal exists within the retina;
(25) whether a high-reflectivity signal exists under the retina;
(26) whether a high-reflectivity signal exists under the pigment epithelium;
(27) whether a high-reflectivity signal exists in the deep retina;
in addition to the above 27 feature parameters, there are 8 physiological data parameters, namely (1) age: age;
(2) myopia degree: degree;
(3) astigmatism degree: degree2;
(4) height: height;
(5) weight: weight;
(6) blood pressure: pressure;
(7) sex: sex;
(8) medical history: history.
4. The OCT image detection method based on retinal hierarchical data of claim 3, wherein S3 specifically operates as follows:
the hypergraph matrix is constructed by the hypergraph matrix construction module using the structural feature parameters obtained in S2 together with the physiological data parameters of the corresponding training set, the specific process being:
S3-1: taking n OCT images from each fundus disease patient and each normal subject and obtaining the corresponding structural and physiological feature parameters;
S3-2: centring one KNN query on each non-binary feature parameter, selecting the K objects closest to the centre, and placing these K+1 objects on the same hyperedge to generate a basic hyperedge; expanding adjacent hyperedges with a K-means clustering algorithm;
S3-3: binary feature parameters forming their corresponding hyperedges directly from 0 and 1;
S3-4: concatenating the data of all feature parameters into a hypergraph matrix whose last column is the input fundus disease label.
5. The OCT image detection method based on retinal layering data of claim 4, wherein S4 specifically operates as follows:
a hypergraph convolution network containing vertex convolution and hyperedge convolution is constructed as the hypergraph convolution module, the two convolutions aggregating features among the vertices and among the hyperedges respectively; the hypergraph convolution network is trained with the hypergraph matrix constructed from the training set data in S3;
the test set data are subjected to retinal segmentation and feature extraction through the OCT image feature extraction network to obtain the test set structural characteristic parameters; a hypergraph matrix is constructed from the structural characteristic parameters of the test set OCT images and the corresponding physiological data parameters, the trained hypergraph convolution network is tested with it, and the network is tuned according to the test results to obtain the final hypergraph convolution network.
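The two-stage aggregation described above (vertex features onto hyperedges, then hyperedge features back onto vertices) can be sketched as a single forward step. This is an illustrative sketch only, assuming mean aggregation, one learned weight matrix and a ReLU; the patent's trained network may differ in all of these choices.

```python
import numpy as np

def hypergraph_conv(X, H, W):
    """One hypergraph convolution step.
    X: vertex features (n_vertices, d)
    H: incidence matrix (n_vertices, n_edges)
    W: weight matrix (d, d_out)"""
    De = H.sum(axis=0)                 # hyperedge degrees
    Dv = H.sum(axis=1)                 # vertex degrees
    # vertex convolution: each hyperedge averages its member vertices
    edge_feats = (H / np.maximum(De, 1)).T @ X
    # hyperedge convolution: each vertex averages its incident hyperedges
    vertex_feats = (H / np.maximum(Dv, 1)[:, None]) @ edge_feats
    return np.maximum(vertex_feats @ W, 0)  # linear map + ReLU
```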
6. The OCT image detection method based on retinal layering data of claim 5, wherein S5 specifically operates as follows:
new data are automatically detected with the trained OCT image feature extraction network and the final hypergraph convolution network, with the specific operations as follows:
S5-1: a new OCT image and the corresponding physiological data parameters are acquired;
S5-2: the new OCT image is preprocessed in the same way as in S1-2, and the retina is segmented and its features extracted by the feature extraction module to obtain the structural characteristic parameters;
S5-3: the new data are spliced into the original hypergraph matrix as a new node, a new hypergraph matrix is reconstructed by the dynamic hypergraph matrix construction network in the hypergraph matrix construction module, and the reconstructed hypergraph matrix is input into the trained final hypergraph convolution network to obtain the detection result for the new data.
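Step S5-3 can be illustrated by appending the new sample as an extra vertex and rebuilding the incidence matrix over the enlarged vertex set. This static sketch merely stands in for the patent's dynamic hypergraph construction network: it assumes Euclidean KNN hyperedges, and both helper names are hypothetical; the trained classifier would then read off the new vertex's row.

```python
import numpy as np

def rebuild_incidence(feats, k):
    """KNN hyperedges over the full vertex set: one edge per centre."""
    n = len(feats)
    H = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(feats - feats[i], axis=1)
        H[np.argsort(d)[: k + 1], i] = 1.0
    return H

def splice_new_sample(old_feats, new_feat, k):
    """Splice a new sample in as an extra vertex (S5-3) and rebuild
    the hypergraph matrix; returns the matrix and the new vertex index."""
    feats = np.vstack([np.atleast_2d(old_feats), np.atleast_2d(new_feat)])
    H = rebuild_incidence(feats, k)
    return H, H.shape[0] - 1
```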
7. An OCT image detection system based on retinal layering data, characterized by comprising a data preprocessing module, a feature extraction module, a hypergraph matrix construction module and a hypergraph convolution module;
the data preprocessing module normalizes the local contrast and brightness of the OCT image and then crops it to a uniform size; the image data are augmented by rotation, translation and horizontal flipping;
the feature extraction module performs retinal layer segmentation on the OCT image through a trained feature extraction network, applies morphological operations to the segmented OCT image and extracts the structural characteristic parameters;
the hypergraph matrix construction module performs a KNN operation once with each non-binary characteristic parameter as the center, selecting the K objects closest to the center so that the K+1 objects lie on the same hyperedge, thereby generating a basic hyperedge; adjacent hyperedges are expanded by a K-means clustering algorithm; binary characteristic parameters directly use 0 and 1 to form the corresponding hyperedges; the data of all characteristic parameters are concatenated to form the hypergraph matrix, in which the last column is the input fundus disease label;
the hypergraph convolution module adopts a hypergraph convolution network comprising vertex convolution and hyperedge convolution, which aggregate features among the vertices and among the hyperedges respectively, and the trained hypergraph convolution network automatically detects new data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110301302.2A CN113129267A (en) | 2021-03-22 | 2021-03-22 | OCT image detection method and system based on retina hierarchical data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113129267A true CN113129267A (en) | 2021-07-16 |
Family
ID=76773686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110301302.2A Pending CN113129267A (en) | 2021-03-22 | 2021-03-22 | OCT image detection method and system based on retina hierarchical data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113129267A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113724857A (en) * | 2021-08-27 | 2021-11-30 | 清华大学深圳国际研究生院 | Automatic diagnosis device for eye ground disease based on eye ground image retina blood vessel |
CN116958412A (en) * | 2023-06-16 | 2023-10-27 | 北京至真互联网技术有限公司 | OCT image-based three-dimensional eye reconstruction method and system |
CN116958412B (en) * | 2023-06-16 | 2024-05-14 | 北京至真互联网技术有限公司 | OCT image-based three-dimensional eye reconstruction method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150031064A1 (en) * | 2009-04-01 | 2015-01-29 | Ridge Diagnostics, Inc. | Multiple biomarker panels to stratify disease severity and monitor treatment of depression |
CN104899921A (en) * | 2015-06-04 | 2015-09-09 | 杭州电子科技大学 | Single-view video human body posture recovery method based on multi-mode self-coding model |
CN106776554A (en) * | 2016-12-09 | 2017-05-31 | 厦门大学 | A kind of microblog emotional Forecasting Methodology based on the study of multi-modal hypergraph |
CN111062928A (en) * | 2019-12-19 | 2020-04-24 | 安徽威奥曼机器人有限公司 | Method for identifying lesion in medical CT image |
CN111695011A (en) * | 2020-06-16 | 2020-09-22 | 清华大学 | Tensor expression-based dynamic hypergraph structure learning classification method and system |
CN112101152A (en) * | 2020-09-01 | 2020-12-18 | 西安电子科技大学 | Electroencephalogram emotion recognition method and system, computer equipment and wearable equipment |
Non-Patent Citations (3)
Title |
---|
JIANWEN JIANG 等,: "Dynamic Hypergraph Neural Networks", 《PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI-19)》 * |
SARAH PARISOT 等,: "Disease prediction using graph convolutional networks: Application to Autism Spectrum Disorder and Alzheimer’s disease", 《MEDICAL IMAGE ANALYSIS》 * |
LIANG Lina et al., "Optical coherence tomography observation of the retina in patients with retinitis pigmentosa", China Journal of Chinese Ophthalmology *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gholami et al. | OCTID: Optical coherence tomography image database | |
Shoji et al. | Progressive macula vessel density loss in primary open-angle glaucoma: a longitudinal study | |
Zhang et al. | A survey on computer aided diagnosis for ocular diseases | |
US20170156582A1 (en) | Automated clinical evaluation of the eye | |
CN104398234A (en) | Comprehensive ocular surface analyzer based on expert system | |
DK2482711T3 (en) | DIAGNOSIS PROCEDURE AND APPARATUS FOR PREDICTING POSSIBLY CONSERVED VISIBILITY | |
CN106028921A (en) | Optical coherence tomography system for health characterization of an eye | |
KR102071774B1 (en) | Method for predicting cardio-cerebrovascular disease using eye image | |
Shanthi et al. | Artificial intelligence applications in different imaging modalities for corneal topography | |
Zhang et al. | In vivo measurements of prelamina and lamina cribrosa biomechanical properties in humans | |
Zheng et al. | Research on an intelligent lightweight-assisted pterygium diagnosis model based on anterior segment images | |
Ajaz et al. | A review of methods for automatic detection of macular edema | |
CN113129267A (en) | OCT image detection method and system based on retina hierarchical data | |
Giancardo | Automated fundus images analysis techniques to screen retinal diseases in diabetic patients | |
Ghazal et al. | Early detection of diabetics using retinal OCT images | |
KR20200011530A (en) | Method for predicting cardio-cerebrovascular disease using eye image | |
Eladawi et al. | Diabetic retinopathy early detection based on OCT and OCTA feature fusion | |
CN113781381B (en) | System for discernment chronic kidney disease image | |
Liu et al. | A curriculum learning-based fully automated system for quantification of the choroidal structure in highly myopic patients | |
Palaniappan et al. | Image analysis for ophthalmology: Segmentation and quantification of retinal vascular systems | |
CN115836838A (en) | Diopter accurate evaluation method and application | |
Wang et al. | Primary acute angle-closure glaucoma: three-dimensional reconstruction imaging of optic nerve heard structure in based on optical coherence tomography (OCT) | |
CN113781380B (en) | System for distinguishing neuromyelitis optica and primary open angle glaucoma | |
Tian et al. | Auto-Grading OCT Images Diagnostic Tool for Retinal Diseases | |
dos Santos Samagaio | Automatic Macular Edema Identification and Characterization Using OCT Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210716 |