CN112634221A - Image and depth-based cornea level identification and lesion positioning method and system - Google Patents
- Publication number
- CN112634221A (application number CN202011498043.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- corneal
- cornea
- layer
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012: Biomedical image inspection (under G06T7/00, Image analysis; G06T7/0002, Inspection of images, e.g. flaw detection)
- G06V10/751: Comparing pixel values or feature values having positional relevance, e.g. template matching (under G06V10/70, pattern recognition or machine learning)
- G06T2207/10028: Range image; depth image; 3D point clouds
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30041: Eye; retina; ophthalmic (under G06T2207/30004, Biomedical image processing)
- G06T2207/30096: Tumor; lesion
- G06V2201/03: Recognition of patterns in medical or anatomical images
Abstract
The invention relates to the field of medical artificial intelligence image recognition, and in particular discloses an image and depth-based corneal layer identification and lesion localization method and system.
Description
Technical Field
The invention relates to the field of medical artificial intelligence image recognition, in particular to a cornea level recognition and lesion positioning method and system based on images and depth.
Background
Corneal disease can seriously threaten vision and is the second leading cause of blindness and low vision in China. Confocal microscopy can scan the living cornea, detect corneal ultrastructure, and display cell-level morphological changes under normal and pathological states, providing important information for the diagnosis of corneal disease. By identifying the lesion and analyzing the affected anatomical layers, the severity of a corneal disease can be assessed and an appropriate treatment plan selected. According to the anatomical structure and the confocal microscope image morphology, the cornea can be divided into five layers: the epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer, and endothelial cell layer. Nerve fibers are distributed beneath the epithelium and in the anterior elastic layer. However, manual interpretation of confocal corneal images requires specialized training of the interpreter; the reading work is time-consuming and labor-intensive, and clinics do not have enough ophthalmologists to perform it. Moreover, manual image reading depends on the doctor's personal experience and is affected by subjectivity. Applying artificial intelligence can greatly improve the efficiency and accuracy of confocal microscope image interpretation and greatly increase the clinical value of this examination.
At present, automatic analysis of corneal confocal images by artificial intelligence cannot accurately identify the multiple layers of the cornea (identification accuracy is particularly insufficient in the presence of lesions or abnormalities) and cannot localize the extent of a lesion.
Disclosure of Invention
In order to overcome the insufficient accuracy of existing corneal layer identification and the inability to intelligently localize lesions, the invention provides an image and depth-based corneal layer identification and lesion localization method and system.
In order to solve the technical problems, the invention provides the following technical scheme: an image and depth-based corneal layer identification and lesion localization method, in which the corneal layers comprise an epithelial cell layer, a sub-epithelial nerve fiber plexus, an anterior elastic layer, a corneal stroma layer, a posterior elastic layer and an endothelial cell layer. The method comprises the steps of step S1: acquiring patient information and a plurality of corresponding first cornea images; step S2: performing definition detection on the first cornea images, and selecting a plurality of second cornea images whose definition meets the requirement; step S3: judging whether the corneal layer of the current second corneal image is identifiable based on image features; if so, entering step S4, otherwise entering step S6; step S4: identifying the corneal layer of the current second corneal image; step S5: identifying whether the current second cornea image has a lesion; step S6: acquiring the depth value of the current second cornea image, and judging the corneal layer of the current second cornea image; and step S7: identifying whether an abnormality exists in the current second corneal image.
Preferably, the step S6 specifically includes the following steps: step S61: acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm; and step S62: and predicting the corneal layer of the current second corneal image based on the depth value.
Preferably, the step S6 further includes the following steps: step S63: sorting the second corneal image and the current second corneal image, for which the corneal layer has been identified in step S4, based on the corresponding depth values; step S64: calculating the confidence degree of the current second cornea image corresponding to the cornea level based on the depth values of the plurality of sequenced second cornea images; and step S65: and judging the corneal layer of the current second corneal image based on the confidence coefficient.
Preferably, the corneal layers comprise an anterior elastic layer and a posterior elastic layer, and step S7 specifically includes the following steps: step S71: judging whether the current second corneal image is located in the anterior or posterior elastic layer; if so, entering step S72, otherwise entering step S73; step S72: identifying the abnormal state of the corneal layers adjacent to the corneal layer of the current second corneal image, so as to determine the abnormal state of the current second corneal image; and step S73: continuing to scan and counting the images with abnormality; if the number of abnormal images exceeds 3, judging a lesion, and if less than 3, prompting an abnormality.
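As a minimal illustration of the counting rule in step S73: the text leaves the boundary case of exactly 3 abnormal images unspecified, so this sketch treats it as a prompt rather than a lesion (an assumption, not the patent's rule).

```python
def classify_scan_findings(frame_abnormal):
    """Step S73 sketch: frame_abnormal is a per-frame bool list from continuous
    scanning. More than 3 abnormal frames -> report a lesion; otherwise prompt
    an abnormality (exactly 3 is unspecified in the text; treated as a prompt)."""
    n = sum(frame_abnormal)
    if n == 0:
        return "normal"
    return "lesion" if n > 3 else "abnormality prompt"
```

The rule exists to keep isolated, non-lesion abnormalities from being escalated to a lesion diagnosis.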
Preferably, the method further comprises the following steps: step S8: and performing three-dimensional imaging display based on the plurality of identified cornea layers and lesions, and displaying the abnormality and the lesion in the corresponding area.
Preferably, step S8 specifically includes the following steps: step S81: mapping the plurality of second corneal images onto a schematic corneal sagittal section, and displaying the three-dimensional depth and layer localization of the second corneal images in real time while scrolling; and step S82: constructing a three-dimensional heat map from the depth coordinates of abnormal corneal layers, and displaying the distribution probability of lesions at each layer in color.
Preferably, the method further comprises step S9: exporting a tree-style organization chart of the corneal layers based on the plurality of identified corneal layers, and exporting the original pictures in batches by category.
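A stdlib sketch of the batch export in step S9; the record format (layer name, source path pairs) and directory layout are assumptions, not specified by the patent:

```python
import shutil
from pathlib import Path
from collections import defaultdict

def export_tree(records, out_dir):
    """Step S9 sketch: records are (layer_name, source_path) pairs. Copies each
    original image into a per-layer folder, producing the tree-style grouping,
    and returns the per-layer image counts."""
    groups = defaultdict(list)
    for layer, src in records:
        groups[layer].append(Path(src))
    for layer, paths in groups.items():
        dest = Path(out_dir) / layer
        dest.mkdir(parents=True, exist_ok=True)
        for p in paths:
            shutil.copy(p, dest / p.name)
    return {layer: len(paths) for layer, paths in groups.items()}
```

The returned counts can back a classification preview, while the folder tree is what gets exported in batches.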
The invention also provides an image and depth-based corneal layer identification and lesion localization system, comprising: an information and image acquisition unit for acquiring patient information and a plurality of corresponding first cornea images; an image definition detection unit for performing definition detection on the first cornea images and selecting a plurality of second cornea images whose definition meets the requirement; a layer feature primary screening unit for judging, based on image features, whether the corneal layer of the current second corneal image can be identified; a layer feature identification unit for identifying the corneal layer of the second corneal images judged identifiable by the layer feature primary screening unit; a lesion identification unit for identifying whether the second cornea images in the layer feature identification unit have a lesion; a depth identification unit for acquiring the depth values of the second cornea images judged unidentifiable by the layer feature primary screening unit and judging the corneal layer of the current second corneal image; an abnormality determination unit for identifying whether the second cornea images in the depth identification unit have an abnormality; a visual reconstruction unit for performing three-dimensional imaging display based on the plurality of identified corneal layers and lesions, and displaying the abnormalities and lesions in the corresponding regions; and a preview and export unit for exporting a tree-style organization chart of the corneal layers based on the plurality of identified corneal layers and exporting the original pictures in batches by category.
Preferably, the depth recognition unit further includes: the depth information extraction unit is used for acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm; a layer prediction unit for predicting a corneal layer of the current second corneal image based on the depth value; or the depth recognition unit may further include: an image sorting unit for sorting the second cornea image in which the corneal layer has been identified in step S4 and the current second cornea image based on the corresponding depth value; the confidence coefficient calculation unit is used for calculating the confidence coefficient of the corneal layer corresponding to the current second corneal image based on the depth values of the plurality of sequenced second corneal images; and the layer correction unit is used for judging the corneal layer of the current second corneal image based on the confidence coefficient.
Compared with the prior art, the cornea level identification and lesion positioning method and system based on the image and the depth, provided by the invention, have the following advantages:
1. after the cornea level image is obtained, a depth learning algorithm is used for image recognition, depth numerical analysis is carried out by combining a machine learning algorithm to automatically detect the anatomical level of the cornea in vivo scanning, the level of a pathological change or abnormal area can be accurately recognized, visual reconstruction is carried out, full-automatic level marking is realized, manual intervention is not needed, the labor cost is reduced, meanwhile, the cornea level is recognized by using an analysis method integrated by various machine learning algorithms, the accuracy rate is high, and the effect is stable.
2. By using a template matching algorithm to acquire a corresponding depth value aiming at a specific area, the invalid calculation amount of the whole image for acquiring the depth value is reduced, and the efficiency of corneal level identification is improved.
3. After the corneal layer is predicted from the depth value, the prediction is refined by confidence correction to obtain an accurate layer-positioning result. In particular, the anterior and posterior elastic layers, which ordinary image recognition cannot identify, are accurately located among the sorted second corneal images, improving corneal layer identification accuracy.
4. When abnormality and lesion recognition is performed for images located in an elastic layer, the judgment is made jointly with the adjacent corneal layers, which improves lesion-recognition accuracy for the elastic layers and avoids the heavy computation and training that direct recognition would require. The abnormal or diseased state of the current corneal layer is judged comprehensively across multiple continuously scanned second corneal images, improving the accuracy of lesion judgment, preventing discrete non-lesion abnormalities from being misjudged as lesions, and further improving corneal layer identification accuracy.
5. The plurality of second corneal images are sorted and displayed three-dimensionally, with abnormalities and lesions shown in the corresponding regions, so that the corresponding diagnostic image is output and the overall depth range of the lesion is obtained and visually reconstructed. This lets the user conveniently review the output, realizing automated, intelligent input, calculation, identification and output that facilitate diagnosis.
6. Different abnormalities and lesions are displayed in a three-dimensional, colorized form, visually presenting the lesion extent for convenient review by the user.
7. After the plurality of identified second corneal images are sorted, the classified layers and their sequence relations are displayed; the pictures in each category can be previewed by class, and the originals exported in batches by category. This tree-style classified preview of all second corneal images, with batch export by class, helps apply the technology to more scenarios such as medical teaching.
Drawings
Fig. 1 is a flowchart of the image and depth-based corneal layer identification and lesion localization method according to the first embodiment of the present invention.
Fig. 2 is a detailed flowchart of step S6 in the method according to the first embodiment of the present invention.
Fig. 3 is a flowchart of further details of step S6 in the method according to the first embodiment of the present invention.
Fig. 4 is a detailed flowchart of step S7 in the method according to the first embodiment of the present invention.
Fig. 5 is a flowchart of steps S8 and S9 in the method according to the first embodiment of the present invention.
Fig. 6 is a detailed flowchart of step S8 in the method according to the first embodiment of the present invention.
Fig. 7 is a block diagram of a system for image and depth-based corneal layer identification and lesion localization according to a second embodiment of the present invention.
Fig. 8 is a block diagram of a depth recognition unit in the image and depth-based corneal layer recognition and lesion localization system according to the second embodiment of the present invention.
Fig. 9 is a block diagram of an apparatus according to a third embodiment of the present invention.
Description of reference numerals:
1-information and image acquisition unit, 2-image definition detection unit, 3-layer feature primary screening unit, 4-layer feature identification unit, 5-lesion identification unit, 6-depth identification unit, 7-abnormality determination unit, 8-visual reconstruction unit, 9-preview and export unit,
61-depth information extraction unit, 62-level prediction unit, 63-image sorting unit, 64-confidence calculation unit, 65-level correction unit,
10-memory, 20-processor.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a first embodiment of the present invention provides a method for image and depth-based corneal layer identification and lesion localization, which includes the following steps:
step S1: acquiring patient information and a plurality of corresponding first cornea images;
step S2: performing definition detection on the first cornea image, and selecting a plurality of second cornea images with the definition meeting the requirement;
step S3: judging whether the corneal layer of the current second corneal image is identifiable or not based on the image characteristics, if so, entering step S4, otherwise, entering step S6;
step S4: identifying a corneal layer of a current second corneal image;
step S5: identifying whether the current second cornea image has a lesion;
step S6: acquiring a depth value of the current second cornea image, and judging the cornea level of the current second cornea image; and
step S7: identifying whether an anomaly exists in the current second corneal image.
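The control flow of steps S1 to S7 can be sketched as follows; the recognizer functions are stubbed placeholders (all names hypothetical), standing in for the trained models the patent describes:

```python
from typing import Callable, List, Optional

def process_scan(images: List[dict],
                 is_sharp: Callable[[dict], bool],
                 layer_from_image: Callable[[dict], Optional[str]],
                 has_lesion: Callable[[dict], bool],
                 layer_from_depth: Callable[[dict], str],
                 has_abnormality: Callable[[dict], bool]) -> List[dict]:
    """Sketch of steps S1-S7: each image dict carries pixel data and 'depth'."""
    results = []
    for img in images:                      # S1: acquired first cornea images
        if not is_sharp(img):               # S2: definition (sharpness) screening
            continue
        layer = layer_from_image(img)       # S3/S4: image-feature recognition
        if layer is not None:
            finding = "lesion" if has_lesion(img) else "normal"         # S5
        else:
            layer = layer_from_depth(img)   # S6: fall back to scan depth
            finding = "abnormal" if has_abnormality(img) else "normal"  # S7
        results.append({"layer": layer, "finding": finding, "depth": img["depth"]})
    return results
```

The branch at S3 is the key design point: images with recognizable cellular features go down the image path, while acellular or disrupted images are resolved by depth.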
It is understood that in step S1, the basic information of the patient and the plurality of first cornea images are acquired as input values of the system. The patient information comprises corresponding content retrieved from a numerical database associated with the examination instrument and stored to the input end of the system or input in the front-end interface of the system. The record content includes basic information (name, ID, age, sex), course information (course of disease, previous corneal history), physical examination sign, treatment condition, and examination information (examination number, examination date, and eye type). The first corneal images are confocal corneal microscope images including a plurality of corneal layer-by-layer scanning images and a scanning depth corresponding to each image.
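The record described above can be modelled with simple data classes; every field name here is illustrative, chosen to mirror the listed contents rather than taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScanImage:
    pixels: bytes         # one confocal layer-by-layer frame
    scan_depth_um: float  # scanner-reported depth for this frame

@dataclass
class CorneaExam:
    """Step S1 input record; field names are illustrative assumptions."""
    name: str
    patient_id: str
    age: int
    sex: str
    disease_course: str   # course of disease, previous corneal history
    exam_number: str
    exam_date: str
    eye: str              # examined eye type, e.g. "OD" / "OS"
    images: List[ScanImage] = field(default_factory=list)
```

Keeping the per-frame scan depth alongside the pixels is what later makes the depth-based fallback of step S6 possible.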
It is understood that the cornea can be divided into five layers in sequence: epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer, and endothelial cell layer. The nerve fibers are distributed in the epithelial cell layer and the front elastic layer, and the front elastic layer and the rear elastic layer (both elastic layers) are of acellular structures.
It is understood that in step S2, quality judgment and preliminary screening are performed on the input plurality of first cornea images, and blurred captures are removed from subsequent analysis. The screening can be automated by computerized image-blur detection, including but not limited to Support Vector Machine (SVM) algorithms, to perform blur detection on the plurality of first cornea images. Alternatively, manual screening can be used to obtain the plurality of second cornea images meeting the definition requirement.
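The patent names SVM-based blur detection as one option; a common, dependency-light stand-in (not the patent's method) is the variance-of-Laplacian focus measure, sketched below. The threshold is an assumption that would need tuning on real confocal frames:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Variance of the Laplacian response; low values indicate a blurred frame."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def screen_sharp(frames, threshold=10.0):
    """Step S2 sketch: keep only frames whose focus measure clears the threshold."""
    return [f for f in frames if laplacian_variance(f) >= threshold]
```

A blurred frame has weak high-frequency content, so its Laplacian response, and hence the variance, collapses toward zero.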
It can be understood that, in step S3, the plurality of second corneal images are preliminarily recognized by image recognition, with corneal layers identified by a pre-trained neural network. At this stage some images remain unrecognizable: image regions where an abnormality or lesion has occurred (pathological forms such as necrosis in a diseased cornea, or structural-disorder blur such as severe edema, cannot be accurately recognized), or images located in the anterior or posterior elastic layer.
It can be understood that due to the cell-free structure of the front elastic layer or the rear elastic layer, the identification cannot be performed through a simple image recognition network, that is, the front elastic layer or the rear elastic layer cannot be recognized through the image recognition method of step S3, and whether an abnormality or a lesion occurs in the front elastic layer or the rear elastic layer cannot be recognized.
It is understood that in step S4, for a corneal layer where no abnormality or lesion has occurred, the layer can be identified directly, for example as a normal epithelial cell layer, corneal stroma layer or endothelial cell layer. In a diseased or abnormal corneal layer, pathological forms such as necrosis, or structural-disorder blur such as severe edema, usually appear as irregular images that image recognition cannot distinguish, so the corneal layer of the current second corneal image cannot be accurately located.
It is understood that in step S5, each picture is judged for the presence of a lesion. Specifically, a two-class judgment is made for each picture: normal, or abnormal. The abnormal group includes but is not limited to blurred layer features (including necrosis, severe edema, etc.), fungal hyphae, amoeba cysts, intracorneal neovascularization, inflammatory cells, activated dendritic cells, and other recognizable pathological morphological features. A binary classification model is established with a convolutional neural network algorithm, including but not limited to deep learning architectures such as VGG, ResNet, Inception, Xception and Inception-ResNet, thereby judging whether the scanned layer has an abnormal morphology.
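The patent builds this binary normal/abnormal model with convolutional networks; as a self-contained stand-in, the sketch below makes the same two-class decision with a logistic unit over two hand-crafted features. It illustrates only the decision shape, not the patent's CNN; the features and weights are assumptions:

```python
import numpy as np

def features(img: np.ndarray) -> np.ndarray:
    """Two crude morphology features: mean intensity and edge energy."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.array([img.mean() / 255.0,
                     (np.abs(gx).mean() + np.abs(gy).mean()) / 255.0])

def predict_abnormal(img: np.ndarray, w: np.ndarray, b: float) -> bool:
    """Step S5 sketch: sigmoid(w.x + b) > 0.5 flags an abnormal frame.
    In the patent this decision comes from a trained CNN, not this stand-in."""
    z = float(w @ features(img)) + b
    return 1.0 / (1.0 + np.exp(-z)) > 0.5
```

A trained CNN replaces the hand-crafted features with learned ones, but the output is the same binary normal/abnormal call per frame.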
It is understood that, in step S6, corneal layers that could not be identified from image features are further identified by combining the depth values, so as to determine the accurate corneal layer, and in step S7 abnormality and lesion screening is performed based on the currently identified corneal layer.
The method has the advantage that, after corneal layer images are acquired, a deep learning algorithm performs image recognition while machine-learning analysis of the depth values automatically detects the anatomical layer during in-vivo corneal scanning; the layer of a diseased or abnormal region can be accurately identified and visually reconstructed, realizing fully automatic layer labelling without manual intervention and reducing labor cost, while the analysis method integrating multiple machine-learning algorithms identifies the corneal layer with high accuracy and stable effect.
Referring to fig. 2, step S6: and acquiring the depth value of the current second cornea image, and judging the cornea level of the current second cornea image. The step S6 specifically includes steps S61 to S62:
step S61: acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm; and
step S62: and predicting the corneal layer of the current second corneal image based on the depth value.
It is understood that in step S61, since corneal thickness increases in pathological states such as corneal edema, the absolute value of each scanning depth increases; therefore each absolute scanning depth is converted into a relative depth, i.e. the relative depth of each scanned image layer within the current cornea. Machine-learning algorithms, including but not limited to K-nearest-neighbor classification models, LightGBM (Light Gradient Boosting Machine), decision trees and GBDT, take the relative depth values and the image layering results (epithelial cell layer, anterior elastic layer, corneal stroma layer, posterior elastic layer, endothelial cell layer) as input; through pre-training, the probability of each relative depth value being judged as each layer is obtained, and the relative depth range of each layer, including the anterior and posterior elastic layers, is calculated.
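A minimal stand-in for the depth-to-layer model of step S62: 1-nearest-neighbour over relative depth, with made-up training pairs standing in for what LightGBM or KNN would learn from real labelled scans. The depth values below are assumptions for illustration, not the patent's trained boundaries:

```python
# Relative depth in [0, 1]: 0 = anterior surface, 1 = endothelium.
LAYERS = ["epithelium", "anterior elastic layer", "stroma",
          "posterior elastic layer", "endothelium"]

# Hypothetical labelled examples (relative_depth, layer index); a real model
# would learn these associations from labelled confocal scans.
TRAIN = [(0.03, 0), (0.08, 0), (0.11, 1), (0.13, 1),
         (0.30, 2), (0.60, 2), (0.88, 2), (0.93, 3), (0.97, 4)]

def predict_layer(relative_depth: float) -> str:
    """Step S62 sketch: 1-nearest-neighbour on relative scan depth."""
    _, idx = min((abs(relative_depth - d), k) for d, k in TRAIN)
    return LAYERS[idx]
```

Working in relative depth is what makes this robust to the thickened cornea of an edematous eye: the layer boundaries shift in absolute microns but stay roughly fixed as fractions of total thickness.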
It can be understood that, by using the template matching algorithm to acquire the depth value only within a specific region rather than over the whole image, invalid computation is avoided and the efficiency of corneal layer identification is improved.
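A minimal sketch of region-restricted template matching (the grid, template, and region bounds are hypothetical; a real system would match against the scanner's on-image depth marking):

```python
def match_in_region(image, template, row_range, col_range):
    """Slide `template` only over the given sub-region of `image`
    (both as lists of lists of ints) and return the offset with the
    minimal sum of absolute differences, avoiding a whole-image search."""
    th, tw = len(template), len(template[0])
    best_pos, best_sad = None, float("inf")
    for r in range(row_range[0], row_range[1] - th + 1):
        for c in range(col_range[0], col_range[1] - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if sad < best_sad:
                best_pos, best_sad = (r, c), sad
    return best_pos

img = [[0] * 6 for _ in range(6)]
img[3][2], img[3][3] = 9, 9          # bright marker encoding the depth value
pos = match_in_region(img, [[9, 9]], (2, 6), (0, 6))
```

Restricting the search to `row_range`/`col_range` is what saves the "invalid calculation amount" mentioned above: the cost scales with the region size, not the full image.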
It is understood that steps S61 to S62 are merely embodiments of this example, and embodiments thereof are not limited to steps S61 to S62.
Referring to fig. 3, the step S6 further includes steps S63 to S66:
step S63: sorting the second corneal image and the current second corneal image, for which the corneal layer has been identified in step S4, based on the corresponding depth values;
step S64: calculating the confidence degree of the current second cornea image corresponding to the cornea level based on the depth values of the plurality of sequenced second cornea images; and
step S65: and judging the corneal layer of the current second corneal image based on the confidence coefficient.
It is to be understood that in step S63, the second corneal image and the current second corneal image, for which the corneal layer has been identified in step S4, are sorted based on the corresponding depth values.
It is understood that, in step S64, based on the depth values of the sorted second corneal images, the correction confidence of the target image classified into each layer is calculated according to the following confidence correction function:
F_k = P_k × α_k
and judging the layer with the maximum correction confidence coefficient as the layer of the target picture, namely judging the layer sequence number k of the target picture as:
k = argmax(F_k)
where k denotes the layer sequence number; k = 1, 2, 3, 4, 5 respectively represents classification into the epithelial cell layer, front elastic layer, corneal stroma layer, rear elastic layer, and endothelial cell layer. F_k denotes the correction confidence that the target picture is classified into layer k. P_k denotes the confidence that the target picture is classified into layer k according to the scan depth value in step S62. α_k denotes the correction coefficient for classifying the target picture into layer k. The calculation process is as follows: the second corneal images whose corneal layer has been identified in step S4 and the current second corneal image are sorted in increasing order of their depth values, yielding a sequence L = (l_1, l_2, ..., l_n), where i denotes the position of a second corneal picture in the sorted order and l_i denotes the layer of the i-th picture. For each value of k, α_k is calculated as follows:
where e is the base of the natural logarithm, n is the length of the sequence L, t is the position of the target picture in the sequence, and k denotes the layer to which the target picture belongs.
Therefore, after the correction confidences are calculated by the above formula, the corneal layer corresponding to the highest correction confidence is selected.
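The explicit α_k expression is not reproduced in this extract, so the sketch below substitutes a hypothetical correction coefficient that rewards agreement with depth-sorted neighbors, using an exponential decay in sequence distance built from e, n, and t as in the description; only F_k = P_k × α_k and the argmax selection come from the text:

```python
import math

def corrected_layer(p, sequence, t, decay=1.0):
    """p: dict layer_k -> confidence P_k from the depth prediction (step S62);
    only the candidate layers need be present.
    sequence: depth-sorted layer labels l_1..l_n of already-identified images,
    with the target picture at 1-based position t (its label may be None).
    Returns the layer k maximizing F_k = P_k * alpha_k."""
    n = len(sequence)

    def alpha(k):
        # ASSUMED coefficient: neighbors near position t already labeled k
        # raise alpha_k, weighted by e^(-decay * distance / n).
        w = sum(math.exp(-decay * abs(i + 1 - t) / n)
                for i, l in enumerate(sequence) if l == k)
        return 1.0 + w

    f = {k: pk * alpha(k) for k, pk in p.items()}
    return max(f, key=f.get)

seq = [1, 1, 2, None, 3, 3, 3]          # target picture at position 4
layer = corrected_layer({2: 0.45, 3: 0.40}, seq, t=4)
```

Here the raw depth confidence slightly favors layer 2, but the three depth-sorted neighbors already labeled 3 pull the corrected decision to layer 3, which is the behavior the confidence correction is designed to produce.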
It can be understood that, after the corneal layer is predicted from the depth value, the prediction is corrected by confidence, which makes the layer localization more accurate; in particular, the front elastic layer and the rear elastic layer, which a common image recognition method cannot identify, are accurately positioned within the sorted second corneal images through this confidence correction, improving the accuracy of corneal layer identification.
It is understood that steps S63 to S65 are merely embodiments of this example, and embodiments thereof are not limited to steps S63 to S65.
Referring to fig. 4, step S7: identifying whether an anomaly exists in the current second corneal image. The step S7 specifically includes steps S71 to S74:
step S71: and judging whether the current second cornea image is positioned on the front elastic layer or the rear elastic layer, if so, entering the step S72, and if not, entering the step S73.
Step S72: an abnormal state of a corneal layer adjacent to the corneal layer of the current second corneal image is identified to determine an abnormal state of the current second corneal image. And
step S73: continue scanning and count the number of images in which an abnormality is found; if more than 3 images are abnormal, a lesion is determined, and if fewer than 3 are abnormal, only an abnormality is prompted.
It is understood that, in step S71, based on the corneal layer identified in step S65, it is determined whether the current second corneal image lies on an elastic layer (the front elastic layer or the rear elastic layer), so that the subsequent steps can determine whether the image contains an abnormality. For example, in step S72, anomaly detection is performed on the corneal layers adjacent to the elastic layer: for the front elastic layer these are the corneal epithelial layer and the corneal stroma layer, and if a lesion is detected in the epithelial layer or in a shallow stromal image, a lesion is determined to occur in the front elastic layer region. For another example, in step S73, for a non-elastic layer, other second corneal images are obtained by continuous scanning (that is, scanning continues in sequence), and the results of the continuous scans are combined to determine whether the current corneal layer is diseased or abnormal.
It can be understood that, in step S72, a second corneal image located on an elastic layer cannot undergo abnormal lesion recognition directly; whether the elastic layer contains a lesion must be judged comprehensively from the adjacent corneal layers. This improves the accuracy of lesion recognition on the elastic layer and avoids the large computation and training cost that direct recognition would incur.
It can be understood that, in step S73, the abnormality or lesion of the current corneal layer is determined comprehensively from a plurality of continuously scanned second corneal images, which improves the accuracy of lesion determination. The reason is that, in practice, real lesions do not appear only in discontinuous individual layers but in an aggregated manner, so unrecognizable layers that appear in aggregation are judged as lesions, while an isolated unrecognizable layer only triggers an abnormality prompt. Special cases (such as discontinuous abnormal scan layers caused by a change in the patient's eye position) are thereby filtered out and not readily judged as lesions, improving the accuracy of lesion diagnosis.
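A minimal sketch of this aggregation rule (the threshold of 3 comes from step S73; judging "aggregation" by the longest run of consecutive abnormal images is an assumption about how the description's criterion is applied):

```python
def judge_scan(flags, threshold=3):
    """flags: per-image booleans from continuous scanning, True = abnormal.
    A run of consecutive abnormal images longer than `threshold` is
    reported as a lesion; shorter, isolated runs only prompt an abnormality,
    filtering artifacts such as a momentary shift of the patient's eye."""
    run = longest = 0
    for abnormal in flags:
        run = run + 1 if abnormal else 0
        longest = max(longest, run)
    if longest > threshold:
        return "lesion"
    return "abnormality" if longest > 0 else "normal"

verdict = judge_scan([False, True, True, True, True, False])
```

Four consecutive abnormal frames exceed the threshold and are reported as a lesion, while the same four frames scattered through the scan would only prompt an abnormality.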
It is understood that steps S71 to S73 are merely embodiments of this example, and embodiments thereof are not limited to steps S71 to S73.
Referring to fig. 5, the method for identifying and locating pathological changes based on image and depth according to the first embodiment of the present invention further includes:
step S8: and performing three-dimensional imaging display based on the plurality of identified cornea layers and lesions, and displaying the abnormality and the lesion in the corresponding area. And
step S9: and deriving a tree-type organizational chart of the corneal layers based on the plurality of identified corneal layers, and deriving the original pictures in batches according to the categories.
It can be understood that, in step S8, the identified second corneal images are sorted and then displayed as a three-dimensional image, with abnormalities and lesions shown in their corresponding regions, so that the corresponding diagnostic image is output and the overall depth range of the lesion is obtained and reconstructed visually. This makes it convenient for the user to view the output, automates input, calculation, identification, and output, and facilitates diagnosis from the output results.
It is understood that, in step S9, the identified second corneal images are sorted to display the hierarchical and sequential relationships of the categories, the pictures in each category are previewed, and the original pictures are exported in batches by category. A tree-structured classification preview of all second corneal images, together with batch classified export, helps apply the technology to more scenarios such as medical teaching.
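One way to sketch the tree-structured grouping behind step S9 (the tuple layout and identifiers are assumptions; a real exporter would also write the grouped files to per-category directories):

```python
from collections import defaultdict

def build_layer_tree(images):
    """images: list of (layer_name, picture_id) pairs after identification.
    Groups pictures by corneal layer, preserving scan order within each
    layer, so they can be previewed as a tree and exported per category."""
    tree = defaultdict(list)
    for layer, pic in images:
        tree[layer].append(pic)
    return dict(tree)

tree = build_layer_tree([("stroma", "img_03"), ("epithelium", "img_01"),
                         ("stroma", "img_04")])
```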
Referring to fig. 6, the step S8 further includes the following steps:
step S81: mapping a plurality of second cornea images to a cornea sagittal section schematic diagram, and displaying the three-dimensional depth and the layer positioning of the second cornea images in a rolling and real-time manner; and
step S82: and constructing a three-dimensional thermodynamic diagram according to the depth coordinate with the abnormal cornea level, and displaying the distribution probability of each level of lesion in a colorizing manner.
It is understood that in step S81, the position of the corneal layer is displayed in real time by scrolling, so that the user can conveniently view the stereo positioning where the currently output corneal image is located.
It is understood that in step S82, the lesion range is visually displayed by displaying different lesion degrees or lesion conditions in color, which is further convenient for the user to view.
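A minimal sketch of the colorized display (the green-to-red mapping is an assumed convention; the patent only states that lesion distribution probabilities are shown in color):

```python
def lesion_color(prob):
    """Map a per-layer lesion probability in [0, 1] to an RGB triple,
    interpolating green (low probability) -> red (high probability)
    for a thermodynamic-diagram style display."""
    p = min(max(prob, 0.0), 1.0)   # clamp out-of-range probabilities
    return (int(255 * p), int(255 * (1 - p)), 0)

color = lesion_color(0.75)         # mostly red: likely lesion
```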
It is understood that steps S81 to S82 are merely embodiments of this example, and embodiments thereof are not limited to steps S81 to S82.
Referring to fig. 7, a second embodiment of the present invention provides an image and depth based cornea level identification and lesion localization system, comprising:
the information and image acquisition unit 1 is used for acquiring patient information and a plurality of corresponding first cornea images.
And the image definition detection unit 2 is used for performing definition detection on the first cornea image and selecting a plurality of second cornea images with the definition meeting the requirement.
And the layer characteristic primary screening unit 3 is used for judging whether the cornea layer of the current second cornea image can be identified or not based on the image characteristics.
The layer feature identification unit 4 is used for identifying the corneal layer of the second corneal image judged to be identifiable in the layer feature primary screening unit;
and a lesion recognizing unit 5 for recognizing whether the second cornea image in the hierarchical feature recognizing unit has a lesion.
And the depth identification unit 6 is used for acquiring the depth value of the second cornea image which is judged to be unrecognizable in the layer characteristic primary screening unit and judging the cornea layer of the current second cornea image.
And an abnormality determination unit 7 for identifying whether the second cornea image in the depth recognition unit has a lesion.
And the visual reconstruction unit 8 is used for carrying out three-dimensional imaging display on the basis of the plurality of identified cornea layers and lesions and displaying the abnormity and the lesions in corresponding areas. And
and a preview and derivation unit 9, configured to derive a tree-type organizational chart of corneal layers based on the identified plurality of corneal layers, and derive raw pictures in batches according to the categories.
Referring to fig. 8, the depth recognition unit 6 further includes:
and the depth information extraction unit 61 is used for acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm.
And a layer prediction unit 62 for predicting the corneal layer of the current second corneal image based on the depth value. Or
The depth recognition unit may further include:
an image sorting unit 63 for sorting the second cornea image and the current second cornea image for which the corneal level has been identified by the level feature identifying unit, based on the corresponding depth value.
The confidence calculating unit 64 is configured to calculate a confidence of the corneal layer corresponding to the current second corneal image based on the depth values of the sorted second corneal images. And
and a layer modifying unit 65 for discriminating the corneal layer of the current second corneal image based on the confidence.
It can be understood that the image and depth based corneal layer recognition and lesion localization system according to the second embodiment of the present invention can perform preliminary image recognition on a plurality of acquired corneal images, accurately identify the corneal layer by combining image features and depth values, automatically detect the anatomical layer during in-vivo corneal scanning, accurately identify the layer containing a lesion or abnormal region, and visually reconstruct the lesion depth range. This achieves fully automatic layer labeling without manual intervention and reduces labor cost, while the analysis method integrating several machine learning algorithms identifies the corneal layer with high accuracy and a stable effect.
Referring to fig. 9, a device according to a third embodiment of the present invention includes an electronic device, which includes a memory 10 and a processor 20. The memory 10 has stored therein a computer program arranged to execute the steps of any of the above embodiments of the image and depth based corneal layer identification and lesion localization methods when executed. The processor 20 is configured to execute the steps of any of the above embodiments of the image and depth based corneal layer identification and lesion localization method via the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one of a plurality of network devices of a computer network.
The electronic device is particularly suitable for image and depth based corneal layer recognition and lesion localization. It can perform preliminary image recognition on a plurality of acquired corneal images, accurately identify the corneal layer by combining image features and depth values, automatically detect the anatomical layer during in-vivo corneal scanning, accurately identify the layer containing a lesion or abnormal region, and perform visual reconstruction. This achieves fully automatic layer labeling without manual intervention and reduces labor cost, while the analysis method integrating several machine learning algorithms identifies the corneal layer with high accuracy and a stable effect.
Compared with the prior art, the cornea level identification and lesion positioning method and system based on the image and the depth, provided by the invention, have the following advantages:
1. After the corneal layer images are obtained, a deep learning algorithm identifies the images and a machine learning algorithm analyzes the depth values, so that the anatomical layer of the cornea is detected automatically during in-vivo scanning. The layer containing a lesion or abnormal region can be identified accurately and reconstructed visually, achieving fully automatic layer labeling without manual intervention and reducing labor cost; meanwhile, because the analysis integrates several machine learning algorithms, corneal layer identification has high accuracy and a stable effect.
2. By using a template matching algorithm to scan and acquire the depth value corresponding to the corneal layer aiming at a specific area, the invalid calculation amount of the whole image for acquiring the depth value is reduced, and the corneal layer identification efficiency is improved.
3. After the corneal layer is predicted based on the depth value, a positioning result of the corneal layer is further accurately predicted by correcting the confidence of the prediction result, and a front elastic layer and a rear elastic layer which cannot be recognized by a common image recognition method are accurately positioned on a plurality of sequenced second corneal images by correcting the confidence, so that the recognition accuracy of the corneal layer is improved.
4. When abnormal lesion recognition is performed on an elastic layer, the elastic layer is judged comprehensively from its adjacent corneal layers, which improves the accuracy of lesion recognition on the elastic layer and avoids the large computation and training cost of direct recognition. In addition, the abnormality or lesion of the current corneal layer is judged comprehensively by continuously scanning a plurality of second corneal images, which improves the accuracy of lesion determination and prevents discretely occurring non-lesion abnormalities from being misjudged as lesions, further improving the accuracy of corneal layer identification.
5. The plurality of second corneal images are sorted and then displayed three-dimensionally, with abnormalities and lesions shown in their corresponding regions, so that the corresponding diagnostic image is output and the overall depth range of the lesion is obtained and reconstructed visually. This makes it convenient for the user to view the output, automates input, calculation, identification, and output, and facilitates diagnosis from the output results.
6. Different pathological changes degrees or pathological changes are displayed in a colorized mode, the pathological change range is visually displayed, and the user can check the pathological change range conveniently.
7. And displaying the classified levels and sequence relations after sequencing the plurality of identified second cornea images, classifying and previewing the pictures in each class, and deriving the original pictures in batches according to the classes. The tree-type classification preview is carried out on all the second cornea images, and the second cornea images can be classified and exported in batches, so that the technology is favorably applied to more scenes such as medical teaching.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart.
The computer program, when executed by a processor, performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium described herein may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: the processor comprises an information and image acquisition unit, an image definition detection unit, a hierarchical feature learning unit, an image identification unit, a depth identification unit and an abnormality identification unit. Here, the names of the units do not constitute a limitation to the unit itself in some cases, and for example, the abnormality identification unit may also be described as a "unit that identifies whether there is an abnormality in the current second cornea image".
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent alterations and improvements made within the spirit of the present invention should be included in the scope of the present invention.
Claims (9)
1. The cornea level recognition and lesion positioning method based on the image and the depth is characterized in that: the corneal layer comprises an epithelial cell layer, a sub-epithelial nerve fiber plexus, a front elastic layer, a corneal stroma layer, a back elastic layer and an endothelial cell layer;
the method comprises the following steps:
step S1: acquiring patient information and a plurality of corresponding first cornea images;
step S2: performing definition detection on the first cornea image, and selecting a plurality of second cornea images with the definition meeting the requirement;
step S3: judging whether the corneal layer of the current second corneal image is identifiable or not based on the image characteristics, if so, entering step S4, otherwise, entering step S6;
step S4: identifying a corneal layer of a current second corneal image; and
step S5: identifying whether the current second cornea image has a lesion;
step S6: acquiring a depth value of the current second cornea image, and judging the cornea level of the current second cornea image; and
step S7: identifying whether an anomaly exists in the current second corneal image.
2. The image and depth based corneal layer recognition and lesion localization method of claim 1, wherein: the step S6 specifically includes the following steps:
step S61: acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm; and
step S62: and predicting the corneal layer of the current second corneal image based on the depth value.
3. The image and depth based corneal layer recognition and lesion localization method of claim 2, wherein: the step S6 further includes the following steps:
step S63: sorting the second corneal image and the current second corneal image, for which the corneal layer has been identified in step S4, based on the corresponding depth values;
step S64: calculating the confidence degree of the current second cornea image corresponding to the cornea level based on the depth values of the plurality of sequenced second cornea images; and
step S65: and judging the corneal layer of the current second corneal image based on the confidence coefficient.
4. The image and depth based corneal layer recognition and lesion localization method of claim 3, wherein: the corneal layer comprises a front elastic layer and a back elastic layer;
the step S7 specifically includes the following steps:
step S71: judging whether the current second cornea image is positioned on the front elastic layer or the rear elastic layer, if so, entering step S72, otherwise, entering step S73;
step S72: identifying an abnormal state of a corneal layer of the current second corneal image adjacent to the corneal layer to determine the abnormal state of the current second corneal image; and
step S73: and continuously scanning, judging the number of the images with the abnormality, judging the pathological changes if the number of the images with the abnormality exceeds 3, and prompting the abnormality if the number of the images with the abnormality is less than 3.
5. The image and depth based corneal layer recognition and lesion localization method of claim 1, wherein: further comprising:
step S8: and performing three-dimensional imaging display based on the plurality of identified cornea layers and lesions, and displaying the abnormality and the lesion in the corresponding area.
6. The image and depth based corneal layer recognition and lesion localization method of claim 5, wherein: the step S8 specifically includes the following steps:
step S81: mapping a plurality of second cornea images to a cornea sagittal section schematic diagram, and displaying the three-dimensional depth and the layer positioning of the second cornea images in a rolling and real-time manner; and
step S82: and constructing a three-dimensional thermodynamic diagram according to the depth coordinate with the abnormal cornea level, and displaying the distribution probability of each level of lesion in a colorizing manner.
7. The image and depth based corneal layer recognition and lesion localization method of claim 5, wherein: further comprising:
step S9: and deriving a tree-type organizational chart of the corneal layers based on the plurality of identified corneal layers, and deriving the original pictures in batches according to the categories.
8. Cornea level discernment and pathological change positioning system based on image and degree of depth, its characterized in that: the method comprises the following steps:
the information and image acquisition unit is used for acquiring the patient information and a plurality of corresponding first cornea images;
the image definition detection unit is used for performing definition detection on the first cornea image and selecting a plurality of second cornea images with the definition meeting the requirement;
the layer characteristic primary screening unit is used for judging whether the cornea layer of the current second cornea image can be identified or not based on the image characteristics;
the layer feature identification unit is used for identifying the corneal layer of the second corneal image judged to be identifiable in the layer feature primary screening unit;
a lesion recognizing unit for recognizing whether the second cornea image in the gradation feature recognizing unit has a lesion;
the depth identification unit is used for acquiring the depth value of the second cornea image which is judged to be unidentifiable in the layer characteristic primary screening unit and judging the cornea layer of the current second cornea image;
an abnormality determination unit for identifying whether the second cornea image in the depth identification unit has a lesion;
the visual reconstruction unit is used for carrying out three-dimensional imaging display on the basis of the identified multiple corneal layers and lesions and displaying the abnormity and the lesions in corresponding areas; and
and the previewing and deriving unit is used for deriving a tree-type organizational chart of the corneal layers based on the plurality of identified corneal layers and deriving the original pictures in batches according to the types.
9. The image and depth based corneal layer recognition and lesion localization system of claim 8, wherein: the depth recognition unit further includes:
the depth information extraction unit is used for acquiring a corresponding depth value in a specific area of the current second cornea image by using a template matching algorithm;
a layer prediction unit for predicting a corneal layer of the current second corneal image based on the depth value; or
The depth recognition unit may further include:
an image sorting unit for sorting the second cornea image in which the cornea level has been identified and the current second cornea image based on the corresponding depth value;
the confidence coefficient calculation unit is used for calculating the confidence coefficient of the corneal layer corresponding to the current second corneal image based on the depth values of the plurality of sequenced second corneal images; and
and the layer correction unit is used for judging the corneal layer of the current second corneal image based on the confidence coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011498043.9A CN112634221A (en) | 2020-12-17 | 2020-12-17 | Image and depth-based cornea level identification and lesion positioning method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112634221A true CN112634221A (en) | 2021-04-09 |
Family
ID=75316497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011498043.9A Pending CN112634221A (en) | 2020-12-17 | 2020-12-17 | Image and depth-based cornea level identification and lesion positioning method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112634221A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113962995A (en) * | 2021-12-21 | 2022-01-21 | 北京鹰瞳科技发展股份有限公司 | Cataract model training method and cataract identification method |
CN115690092A (en) * | 2022-12-08 | 2023-02-03 | 中国科学院自动化研究所 | Method and device for identifying and counting amoeba cysts in corneal confocal image |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567737A (en) * | 2011-12-28 | 2012-07-11 | 华南理工大学 | Method for locating eyeball cornea |
CN102884551A (en) * | 2010-05-06 | 2013-01-16 | 爱尔康研究有限公司 | Devices and methods for assessing changes in corneal health |
US20180192866A1 (en) * | 2017-01-11 | 2018-07-12 | University Of Miami | Method and system for three-dimensional thickness mapping of corneal micro-layers and corneal diagnoses |
CN110384582A (en) * | 2019-07-17 | 2019-10-29 | 温州医科大学附属眼视光医院 | A kind of air pocket method Deep liminal keratoplasty device and its application method |
- 2020-12-17: CN application CN202011498043.9A (patent CN112634221A) filed; status Pending
Non-Patent Citations (1)
Title |
---|
WU YAN; YANG LIPING; XUE CHUNYAN; HUANG ZHENPING; SHI YAO: "Laser confocal microscopy study of haze structure after myopic laser surgery", pages 1-3 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110390351B (en) | Epileptic focus three-dimensional automatic positioning system based on deep learning | |
Zhang et al. | Automatic cataract grading methods based on deep learning | |
CN111862044B (en) | Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium | |
US20060257031A1 (en) | Automatic detection of red lesions in digital color fundus photographs | |
JP6842481B2 (en) | 3D quantitative analysis of the retinal layer using deep learning | |
CN111667490B (en) | Fundus picture cup optic disc segmentation method | |
CN111986211A (en) | Deep learning-based ophthalmic ultrasonic automatic screening method and system | |
CN113962311A (en) | Knowledge data and artificial intelligence driven ophthalmic multi-disease identification system | |
KR102162683B1 (en) | Reading aid using atypical skin disease image data | |
CN111461218B (en) | Sample data labeling system for fundus image of diabetes mellitus | |
CN112634221A (en) | Image and depth-based cornea level identification and lesion positioning method and system | |
CN113158821B (en) | Method and device for processing eye detection data based on multiple modes and terminal equipment | |
CN113782184A (en) | Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning | |
CN113012093B (en) | Training method and training system for glaucoma image feature extraction | |
CN110766665A (en) | Tongue picture data analysis method based on strong supervision algorithm and deep learning network | |
CN117338234A (en) | Diopter and vision joint detection method | |
CN115880266B (en) | Intestinal polyp detection system and method based on deep learning | |
Boone et al. | Image processing and hierarchical temporal memories for automated retina analysis | |
Lotlekar et al. | Multilevel classification model for diabetic retinopathy | |
CN114330484A (en) | Method and system for classification and focus identification of diabetic retinopathy through weak supervision learning | |
Haja et al. | Advancing glaucoma detection with convolutional neural networks: a paradigm shift in ophthalmology | |
Kumari et al. | Automated process for retinal image segmentation and classification via deep learning based cnn model | |
Mostafa et al. | Diagnosis of Glaucoma from Retinal Fundus Image Using Deep Transfer Learning | |
Azeroual et al. | Convolutional Neural Network for Segmentation and Classification of Glaucoma. | |
Cheng et al. | Research on Feature Extraction Method of Fundus Image Based on Deep Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||