CN114782855A - Cataract surgery evaluation method, system and medium based on deep learning - Google Patents
- Publication number
- CN114782855A (Application CN202210229362.2A)
- Authority
- CN
- China
- Prior art keywords
- evaluation
- link
- deep learning
- cataract surgery
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
Abstract
The invention belongs to the technical field of deep learning and provides a cataract surgery evaluation method, system and medium based on deep learning, comprising the following steps: S1, classifying video frames by operation stage using the features of the surgical instruments in each frame combined with the features of the eye background; S2, training a universal network on the initial feature labels of each surgical link to extract, from the video frames corresponding to each link, the evaluation features used for surgical evaluation; and S3, deriving quantitative information for each surgical link from the extracted evaluation features and preset labels, and inputting it into a trained preset classification evaluation network to classify and evaluate each link. By establishing a quantitative relation between the descriptive evaluation indices of the ICO-OSCAR standard and the surgical features learnable by a deep learning network, the method allows artificial intelligence to replace the whole-process participation of expert surgeons in surgical training, improving the objectivity, reliability and responsiveness of the training effect.
Description
Technical Field
The invention relates to the technical field of deep learning, in particular to a cataract surgery evaluation method, a cataract surgery evaluation system and a cataract surgery evaluation medium based on deep learning.
Background
Cataract is the leading cause of blindness; in China, cataract patients account for roughly half of all blind people. Surgery is the main way to help patients regain their vision, yet China's cataract surgery rate remains low, so raising the surgery rate while guaranteeing surgical outcomes is one of the urgent problems in current blindness prevention and treatment. Cataract surgery has entered the refractive era: patients' individualized demands and the pursuit of postoperative visual quality place higher requirements on the incision position, the capsulorhexis size and the centration of the intraocular lens after implantation. Obtaining timely evaluation feedback on the incision, capsulorhexis and intraocular lens implantation links is therefore particularly important in cataract surgery training for improving the skills of new surgeons.
Standardized cataract surgery training aims to shorten the learning curve, standardize the surgical procedure and reduce surgical complications. Traditional training feedback usually tracks and evaluates surgeons with the International Council of Ophthalmology's surgical competency assessment rubric (ICO-OSCAR): expert surgeons score each link of the cataract operation one by one against the standard. This work is time-consuming and subject to large inter-rater variability, making it difficult to meet clinical requirements through short-term training.
Disclosure of Invention
The invention aims to provide a cataract surgery evaluation method, system and medium based on deep learning to solve the above problems.
In order to achieve the purpose, the invention adopts the technical scheme that:
a cataract surgery evaluation method based on deep learning comprises the following steps:
s1, classifying the operation stage of the video frame through a preset classification network by using the characteristics of the surgical instruments in the video frame and the characteristics of the eye background; the operation stage comprises an incision link, a capsulorhexis link and an artificial lens implantation link;
s2, training a universal network according to the initial feature labels of all links of the operation to extract evaluation features for operation evaluation in video frames corresponding to all links;
and S3, acquiring quantitative information of each link of the operation according to the extracted evaluation features and inputting the quantitative information into a trained preset classification evaluation network to perform classification evaluation on each link of the operation.
Further, the step of classifying the operation stage of the video frame comprises:
s11, carrying out hierarchical sampling processing on the video frame;
s12, acquiring surgical instruments and eye background areas in the video frame through the trained preset target detection model, and cutting the surgical instruments and the eye background areas in batches;
and S13, respectively sending the cut surgical instruments, the eye background area and the corresponding video frame into a trained classifier, and outputting the classification result of the surgical stage of the video frame through the classifier.
Further, the step of training the generic network comprises:
t1, randomly cutting an image block from the video frame, inputting the image block and the previous and next frame images into a spatial feature encoder to respectively calculate corresponding spatial features;
t2, computing, with a differentiable tracker, the localization parameters of the image block in the spatial features of the preceding and following frames that best matches the cropped block, and obtaining the spatial features of the best-matching block through bilinear sampling with a bilinear sampler;
t3, training the spatial feature encoder and the differentiable tracker end to end through steps T1 and T2, resulting in a trained universal network.
Further, the evaluation characteristics for surgical evaluation include surgical instrument position information, optical flow field information, limbus morphology and position characteristics, and intraocular lens position characteristics.
Further, the step of classifying and evaluating the incision link in the operation comprises the following steps:
a1, fitting the corneal limbus according to the corneal limbus position characteristics to establish a corneal limbus center;
a2, obtaining the relative motion track of the surgical instrument according to the position information of the surgical instrument by taking the center of the corneal limbus as a reference point;
a3, inputting the relative motion track of the surgical instrument and the morphological characteristics of the corneal limbus into a preset classification evaluation network, and evaluating the surgical incision link according to ICO-OSCAR standard to obtain the operation score of the surgical incision link.
Further, the step of classifying and evaluating the capsulorhexis link in the operation comprises the following steps:
and inputting the optical flow field information into a preset classification evaluation network to evaluate the capsulorhexis link in the operation according to ICO-OSCAR standard so as to obtain the operation score of the capsulorhexis link in the operation.
Further, the step of classifying and evaluating the intraocular lens implantation link in the operation is as follows:
b1, fitting the corneal limbus position features and the intraocular lens position features extracted by the universal network to obtain the center point positions of the corneal limbus and the intraocular lens respectively;
and B2, inputting the central point position information of the two into a preset classification evaluation network, and evaluating the intraocular lens implantation link in the operation according to an ICO-OSCAR standard to obtain an operation score of the intraocular lens implantation link in the operation.
In a second aspect of the present invention, there is provided a cataract surgery evaluation system based on deep learning, which includes at least one processor and at least one memory, wherein the memory stores a computer program, and when the program is executed by the processor, the processor is enabled to execute the cataract surgery evaluation method based on deep learning.
In a third aspect of the invention, a computer-readable storage medium is provided, wherein the instructions of the storage medium, when executed by a processor in a device, enable the device to perform the method for evaluating cataract surgery based on deep learning.
Compared with the prior art, the invention at least comprises the following beneficial effects:
(1) according to the invention, taking the surgical instruments associated with different surgical links as the main learned features, and combining them with eye background information and a trained classifier, accurate classification of video frames is realized, providing algorithmic support for the data processing that precedes surgical evaluation;
(2) by developing a universal multi-feature extraction network, the quantifiable surgical features of different surgical links are extracted by one and the same network; combining the surgical instrument positions, optical flow and eye deformation features acquired by the universal network with classification labels realizes skill evaluation of the incision, capsulorhexis and intraocular lens implantation links;
(3) a quantitative relation is established between the descriptive evaluation indices of the ICO-OSCAR standard and the surgical instrument paths, corneal limbus morphology, instrument optical flow information and other features learnable by a deep learning network, so that artificial intelligence replaces the whole-process participation of expert surgeons in surgical training, improving the objectivity, reliability and responsiveness of the training effect.
Drawings
FIG. 1 is a flow chart of a method for evaluating cataract surgery based on deep learning in an embodiment of the present invention;
FIG. 2 is a flowchart of an embodiment of the present invention for classifying the surgical phase of a video frame;
FIG. 3 is a flow chart of training a generic network in an embodiment of the present invention;
FIG. 4 is a diagram of a generic network training in an embodiment of the present invention;
FIG. 5 is a flow chart of a classification evaluation of an incision link in an operation according to an embodiment of the present invention;
FIG. 6 is a flowchart of the classification and evaluation of the intraocular lens implantation link during surgery, in accordance with an embodiment of the present invention.
Detailed Description
It should be noted that the technical solutions in the embodiments of the present invention may be combined with each other, provided such combinations can actually be realized by those skilled in the art; where technical solutions are contradictory or cannot be realized, the combination should be deemed not to exist and falls outside the protection scope of the present invention.
The following are specific embodiments of the present invention, and the technical solutions of the present invention will be further described with reference to the drawings, but the present invention is not limited to these embodiments.
As shown in fig. 1, the cataract surgery evaluation method based on deep learning of the present invention includes the steps of:
s1, classifying the operation stage of the video frame through a preset classification network by using the characteristics of the surgical instruments in the video frame in combination with the eye background characteristics; the operation stage comprises an incision link, a capsulorhexis link and an artificial lens implantation link.
As shown in fig. 2, the step of classifying the surgical phase of the video frame includes:
and S11, carrying out hierarchical sampling processing on the video frame.
Because surgeons differ in skill level, each operation stage has different requirements and durations, recorded videos are long, and the surgical scene is complex and changeable, the method processes the video frames by stratified sampling, which effectively improves the training efficiency of the network model.
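The patent does not specify how the stratified sampling is implemented; the following is a minimal numpy sketch under the assumption that the stage boundaries of a video are known and an equal quota of frames is drawn from each stage. All names and parameters are illustrative.

```python
import numpy as np

def stratified_sample(n_frames, stage_bounds, per_stage=8, seed=0):
    """Draw an equal number of frames from each surgical stage.

    stage_bounds: list of (start, end) frame-index ranges, one per stage.
    Stages of very different lengths thus contribute equally to training.
    """
    rng = np.random.default_rng(seed)
    picks = []
    for start, end in stage_bounds:
        # sample without replacement inside this stage's frame range
        idx = rng.choice(np.arange(start, end),
                         size=min(per_stage, end - start), replace=False)
        picks.extend(sorted(int(i) for i in idx))
    return picks

# e.g. a 3000-frame video split into incision / capsulorhexis / IOL stages
frames = stratified_sample(3000, [(0, 900), (900, 2100), (2100, 3000)])
```

In practice the per-stage quota would be tuned to the class balance required by the classifier.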
S12, acquiring surgical instruments and eye background areas in the video frame through the trained preset target detection model, and performing batch cutting on the surgical instruments and the eye background areas.
According to the method, a target detection model based on YOLOv3 is trained on surgical instrument and eye background labels to locate the surgical instruments and eye background regions of interest in each video frame; these regions are cropped in batches as classifier input, enriching the fine-grained local information available from each frame.
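As a sketch of the batch-cropping step, assuming the detector returns pixel bounding boxes in (x1, y1, x2, y2) form already clipped to the frame (the actual YOLOv3 output format and post-processing are not given by the patent):

```python
import numpy as np

def crop_detections(frame, boxes):
    """Cut out each detected region from an (H, W, 3) frame.

    `boxes` stands in for detector output: a list of integer
    (x1, y1, x2, y2) pixel bounds, one per instrument/background region.
    """
    return [frame[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
crops = crop_detections(frame, [(10, 20, 110, 90), (300, 200, 400, 320)])
```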
And S13, respectively sending the cut surgical instruments, the eye background area and the corresponding video frame into a trained classifier, and outputting the classification result of the surgical stage of the video frame through the classifier.
The surgical instruments, the eye background area and the corresponding video frames are each fed into a ResNet network to extract local and global features. The features output by the fully connected layer of each branch are then concatenated as a fine-grained representation of the whole frame, improving the accuracy of video frame classification.
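The concatenation step above can be sketched as follows, with each argument standing in for the penultimate-layer embedding of one ResNet branch; the dimensions are illustrative, not taken from the patent:

```python
import numpy as np

def fuse_features(global_feat, instrument_feat, background_feat):
    """Concatenate the global-frame embedding with the two local-crop
    embeddings into one fine-grained descriptor for the stage classifier."""
    return np.concatenate([global_feat, instrument_feat, background_feat])

# hypothetical branch outputs: 512-d global, 256-d per local crop
fused = fuse_features(np.ones(512), np.ones(256), np.ones(256))
```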
According to the invention, the accurate classification of the video frames is realized by taking the surgical instruments related to different surgical links as main learning characteristics and combining eye background information, and algorithm support is provided for subsequent data processing.
And S2, training the universal network according to the initial feature labels of all links of the operation to extract the evaluation features for operation evaluation in the video frames corresponding to all the links.
According to the invention, to improve the generality of the feature extraction network, reduce the labeling workload and raise feature extraction efficiency, visual correspondences are learned from unlabeled videos: patches (image blocks) cropped from a frame are tracked across several consecutive frames forward and backward, so that a feature space is learned in which the visual similarity of patches along the temporal sequence can be computed. When extracting features for the incision, capsulorhexis and intraocular lens implantation links, the corresponding labels are supplied and the features required for evaluating each link can be extracted directly by this network.
As shown in fig. 3 and 4, the step of training the general-purpose network in the present invention includes:
and T1, randomly cutting an image block from the video frame, inputting the image block and the previous and next frame images into a spatial feature encoder to respectively calculate corresponding spatial features.
And T2, computing, with a differentiable tracker, the localization parameters of the image block in the spatial features of the preceding and following frames that best matches the cropped block, and obtaining the spatial features of the best-matching block through bilinear sampling with a bilinear sampler.
T3, the spatial feature encoder and the differentiable tracker are trained end to end through steps T1 and T2, resulting in a trained universal network.
In the invention, a patch P_t is randomly cropped from the video frame I_t and, together with the preceding and following frame images I_{t-1} and I_{t+1}, is input into a spatial feature encoder Phi based on a ResNet network; the spatial features x_P and x_I are computed respectively, and the channel dimension of the spatial features is normalized to facilitate the subsequent similarity calculation.
The differentiable tracker T first measures the similarity between x_I and x_P, then computes the localization parameters theta of the image block in x_I that best matches x_P. A bilinear sampler then samples x_I at theta to generate a new patch feature x_P', whose similarity with the preceding and following frame images is computed in turn; this closes the loop and enables training of the network.
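The core matching step can be illustrated with a small numpy sketch. For simplicity it replaces the differentiable bilinear sampling with a discrete argmax over the cosine-similarity map; all names and shapes are illustrative, not from the patent:

```python
import numpy as np

def best_match(patch_feat, frame_feat):
    """Locate a patch in a neighbouring frame by feature similarity.

    patch_feat: (C,) channel-normalized patch embedding x_P.
    frame_feat: (C, H, W) channel-normalized spatial features x_I.
    Returns the (row, col) of maximum cosine similarity, a discrete
    stand-in for the tracker's localization parameters theta.
    """
    sim = np.tensordot(patch_feat, frame_feat, axes=([0], [0]))  # (H, W) map
    return tuple(int(v) for v in np.unravel_index(np.argmax(sim), sim.shape))

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 5, 5))
feat /= np.linalg.norm(feat, axis=0, keepdims=True)  # channel-wise L2 norm
loc = best_match(feat[:, 3, 1], feat)  # the query patch came from (3, 1)
```

In the actual network the argmax would be replaced by a soft, differentiable localization so that gradients flow through the tracker during end-to-end training.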
After training of the universal network is finished, supplying different feature labels enables feature extraction for the incision, capsulorhexis and intraocular lens implantation links, acquiring the surgical instrument positions, optical flow field, corneal limbus morphology and intraocular lens position for the classification and evaluation of each link in the next step.
And S3, acquiring quantitative information of each link of the operation according to the extracted evaluation features and inputting the quantitative information into a trained preset classification evaluation network to perform classification evaluation on each link of the operation.
The surgical instrument positions, optical flow field, corneal limbus morphology and intraocular lens position obtained from the universal network are converted into quantitative, assessable information such as the instrument motion trajectory and the instrument motion rate and direction (i.e., the optical flow field information). Combined with the position and morphological changes of the corneal limbus, and with expert classification labels assigned according to the ICO-OSCAR standard, a classification model is trained for each link, realizing the classification and evaluation of the incision, capsulorhexis and intraocular lens implantation links.
As shown in fig. 5, the step of performing classification evaluation on the incision links in the operation includes:
a1, fitting the corneal limbus according to the corneal limbus position characteristics to establish a corneal limbus center;
a2, obtaining the relative motion track of the surgical instrument by taking the corneal limbus center as a reference point according to the position information of the surgical instrument;
a3, inputting the relative motion track of the surgical instrument and the morphological characteristics of the corneal limbus into a preset classification evaluation network, and evaluating the surgical incision link according to ICO-OSCAR standard to obtain the operation score of the surgical incision link.
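The patent does not specify the fitting algorithm in step A1; a common choice, shown here as an illustrative sketch, is a linear least-squares (Kasa) circle fit of the limbus boundary points, whose center then serves as the reference point of step A2:

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares (Kasa) circle fit of limbus boundary points.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    for (cx, cy, r); a stand-in for the limbus fitting of step A1.
    """
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# synthetic limbus boundary: circle of radius 5 centred at (3, -2)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
cx, cy, r = fit_circle(3 + 5 * np.cos(t), -2 + 5 * np.sin(t))
```

Instrument positions expressed relative to (cx, cy) then give the relative motion trajectory of step A2.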
The method for classifying and evaluating the capsulorhexis link in the operation comprises the following steps:
and inputting the optical flow field information into a preset classification evaluation network to evaluate the capsulorhexis link in the operation according to ICO-OSCAR standard so as to obtain the operation score of the capsulorhexis link in the operation.
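The patent does not state how the dense optical flow field is summarized before classification; as one illustrative possibility, the per-pixel flow can be reduced to a motion rate and a dominant direction (the choice of statistics here is an assumption):

```python
import numpy as np

def flow_statistics(flow):
    """Summarize a dense optical-flow field of shape (H, W, 2)
    into a mean motion rate and a mean motion direction."""
    dx, dy = flow[..., 0], flow[..., 1]
    speed = np.hypot(dx, dy)                      # per-pixel motion rate
    direction = np.arctan2(dy.mean(), dx.mean())  # mean motion direction (rad)
    return speed.mean(), direction

# uniform motion of (3, 4) px/frame over a 4x4 field
flow = np.zeros((4, 4, 2))
flow[..., 0] = 3.0
flow[..., 1] = 4.0
mean_speed, angle = flow_statistics(flow)
```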
As shown in FIG. 6, the classification and evaluation steps for the intraocular lens implantation procedure in surgery are as follows:
b1, fitting the corneal limbus position features and the intraocular lens position features extracted by the universal network to obtain the center point positions of the corneal limbus and the intraocular lens respectively;
and B2, inputting the central point position information of the two into a preset classification evaluation network, and evaluating the intraocular lens implantation link in the operation according to an ICO-OSCAR standard to obtain an operation score of the intraocular lens implantation link in the operation.
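A simple quantitative proxy for intraocular lens centration, computed from the two fitted centers of step B1, is the distance between them; normalizing by the limbus radius (an assumption, not specified by the patent) makes the measure scale-invariant:

```python
import numpy as np

def decentration(limbus_center, iol_center, limbus_radius):
    """Distance between limbus and IOL centres, normalized by the
    limbus radius; an illustrative centration input for step B2."""
    d = np.linalg.norm(np.asarray(iol_center, float)
                       - np.asarray(limbus_center, float))
    return d / limbus_radius

# hypothetical centers in pixels, limbus radius 100 px
score_input = decentration((320, 240), (323, 236), 100.0)
```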
By establishing the quantitative relation between the descriptive evaluation indices of the ICO-OSCAR standard and the surgical instrument paths, corneal limbus morphology, instrument optical flow information and other features learnable by a deep learning network, artificial intelligence replaces the whole-process participation of expert surgeons in surgical training, improving the objectivity, reliability and responsiveness of the training effect.
In another embodiment of the present invention, there is also provided a deep learning-based cataract surgery evaluation system, which includes at least one processor and at least one memory, wherein the memory stores a computer program, and when the program is executed by the processor, the processor is enabled to execute the above-mentioned deep learning-based cataract surgery evaluation method.
In another embodiment of the present invention, there is also provided a computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor in a device, enable the device to perform the above-mentioned cataract surgery evaluation method based on deep learning.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments, or alternatives may be employed, by those skilled in the art, without departing from the spirit or ambit of the invention as defined in the appended claims.
Claims (9)
1. A cataract surgery evaluation method based on deep learning is characterized by comprising the following steps:
s1, classifying the operation stage of the video frame through a preset classification network by using the characteristics of the surgical instruments in the video frame in combination with the eye background characteristics; the operation stage comprises an incision link, a capsulorhexis link and an artificial lens implantation link;
s2, training a universal network according to the initial feature tags of each link of the operation to extract evaluation features for operation evaluation in the video frames corresponding to each link;
and S3, acquiring quantitative information of each link of the operation according to the extracted evaluation features and the preset labels, and inputting the quantitative information into a trained preset classification evaluation network to perform classification evaluation on each link of the operation.
2. The cataract surgery evaluation method based on deep learning of claim 1, wherein the step of classifying the surgery stage of the video frame comprises:
s11, carrying out hierarchical sampling processing on the video frame;
s12, acquiring surgical instruments and eye background areas in the video frame through the trained preset target detection model, and cutting the surgical instruments and the eye background areas in batches;
and S13, respectively sending the cut surgical instruments, the eye background area and the corresponding video frame into a trained classifier, and outputting the classification result of the surgical stage of the video frame through the classifier.
3. The method for evaluating cataract surgery based on deep learning of claim 1, wherein the step of training the universal network comprises:
t1, randomly cutting an image block from the video frame, inputting the image block and the previous and next frame images into a spatial feature encoder to respectively calculate corresponding spatial features;
t2, computing, with a differentiable tracker, the localization parameters of the image block in the spatial features of the preceding and following frames that best matches the cropped block, and performing bilinear sampling through a bilinear sampler to obtain the spatial features of the best-matching block;
and T3, performing end-to-end training on the spatial feature encoder and the differentiable tracker through steps T1 and T2, thereby obtaining the trained universal network.
4. The cataract surgery evaluation method based on deep learning of claim 1, wherein the evaluation features for surgery evaluation comprise surgical instrument position information, optical flow field information, corneal limbus morphology and position features and artificial lens position features.
5. The deep-learning-based cataract surgery evaluation method of claim 4, wherein the step of classifying and evaluating the incision stage of the surgery comprises:
A1, fitting the corneal limbus from the corneal limbus position features to determine the limbus center;
A2, obtaining the relative motion trajectory of the surgical instrument from its position information, using the limbus center as the reference point;
and A3, inputting the relative motion trajectory of the surgical instrument and the corneal limbus shape features into the preset classification evaluation network, which evaluates the incision stage against the ICO-OSCAR standard to produce an operative score for the incision stage.
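Steps A1-A2 reduce to a circle fit plus a change of reference frame. A sketch under assumed conventions follows: the algebraic (Kasa) least-squares fit is one common choice for A1, though the claim does not name a fitting method, and the tip positions are synthetic.

```python
import numpy as np

def fit_circle(points):
    """A1: algebraic least-squares (Kasa) circle fit to limbus boundary
    points; returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def relative_trajectory(tip_positions, center):
    """A2: instrument-tip positions re-expressed relative to the limbus
    center, making the trajectory invariant to eye/camera drift."""
    return tip_positions - np.asarray(center)

# Synthetic limbus: points on a circle centered at (60, 50), radius 30.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
limbus = np.column_stack([60 + 30 * np.cos(theta), 50 + 30 * np.sin(theta)])
cx, cy, r = fit_circle(limbus)

tips = np.array([[90.0, 50.0], [80.0, 60.0], [70.0, 55.0]])  # per-frame tip detections
rel = relative_trajectory(tips, (cx, cy))   # A3 network input (with shape features)
```

Using the limbus center as the origin is what lets the A3 network compare trajectories across videos with different eye positions and magnifications.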
6. The deep-learning-based cataract surgery evaluation method of claim 4, wherein the step of classifying and evaluating the capsulorhexis stage of the surgery comprises:
inputting the optical flow field information into the preset classification evaluation network, which evaluates the capsulorhexis stage against the ICO-OSCAR standard to produce an operative score for the capsulorhexis stage.
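The claim feeds the optical flow field directly to the network. As an illustration only, the sketch below summarizes a dense flow field into a feature vector and maps it to an ICO-OSCAR-style grade (items on that rubric are scored 2-5); the summary statistics and the grading rule are hypothetical stand-ins, not the claimed network.

```python
import numpy as np

def flow_features(flow):
    """Summary statistics of a dense optical-flow field (H, W, 2):
    magnitude mean/std plus a magnitude-weighted direction histogram."""
    mag = np.linalg.norm(flow, axis=-1)
    ang = np.arctan2(flow[..., 1], flow[..., 0])
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-8)          # direction distribution
    return np.concatenate([[mag.mean(), mag.std()], hist])

def score_capsulorhexis(features, n_grades=4):
    """Stand-in for the classification evaluation network: picks one of
    four grades and maps it onto the 2-5 ICO-OSCAR item scale."""
    return int(np.argmax(features[:n_grades])) + 2

# Simulated rotational flow, loosely mimicking a circular capsulorhexis tear.
ys, xs = np.mgrid[-20:20, -20:20].astype(np.float64)
flow = np.stack([-ys, xs], axis=-1)            # tangential vectors
feats = flow_features(flow)
grade = score_capsulorhexis(feats)
```

A smooth, circular tear produces a coherent tangential flow pattern, which is why flow statistics are a plausible input for grading this stage.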
7. The deep-learning-based cataract surgery evaluation method of claim 4, wherein the step of classifying and evaluating the intraocular lens implantation stage of the surgery comprises:
B1, fitting the corneal limbus position features and the intraocular lens position features extracted by the universal network to obtain the center-point position of the corneal limbus and of the intraocular lens, respectively;
and B2, inputting the two center-point positions into the preset classification evaluation network, which evaluates the intraocular lens implantation stage against the ICO-OSCAR standard to produce an operative score for the implantation stage.
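The geometric core of B1-B2 is comparing two fitted centers. The sketch below uses centroids of boundary points as a simple center estimate and derives a decentration distance; the feature dictionary and function names are hypothetical, and the claimed network consumes the centers rather than this hand-computed distance.

```python
import numpy as np

def fit_center(points):
    """B1: centroid of boundary points as a simple center estimate;
    the patent fits the limbus and IOL features to obtain both centers."""
    return np.asarray(points, dtype=np.float64).mean(axis=0)

def centration_features(limbus_pts, iol_pts):
    """B2: features describing IOL centration relative to the limbus,
    a stand-in input for the classification evaluation network."""
    limbus_c = fit_center(limbus_pts)
    iol_c = fit_center(iol_pts)
    return {"limbus_center": limbus_c,
            "iol_center": iol_c,
            "decentration": float(np.linalg.norm(iol_c - limbus_c))}

# Synthetic limbus (center (50, 50), r=30) and slightly decentered IOL.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
limbus = np.column_stack([50 + 30 * np.cos(theta), 50 + 30 * np.sin(theta)])
iol = np.column_stack([52 + 15 * np.cos(theta), 51 + 15 * np.sin(theta)])
out = centration_features(limbus, iol)
```

A small decentration relative to the limbus radius indicates a well-centered implant, which is the property the B2 evaluation grades.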
8. A deep-learning-based cataract surgery evaluation system comprising at least one processor and at least one memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to carry out the deep-learning-based cataract surgery evaluation method of any one of claims 1 to 7.
9. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor in a device, enable the device to perform the deep-learning-based cataract surgery evaluation method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210229362.2A CN114782855A (en) | 2022-03-10 | 2022-03-10 | Cataract surgery evaluation method, system and medium based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114782855A true CN114782855A (en) | 2022-07-22 |
Family
ID=82424103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210229362.2A Pending CN114782855A (en) | 2022-03-10 | 2022-03-10 | Cataract surgery evaluation method, system and medium based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782855A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115205769A (en) * | 2022-09-16 | 2022-10-18 | 中国科学院宁波材料技术与工程研究所 | Ophthalmologic operation skill evaluation method, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||