CN114782855A - Cataract surgery evaluation method, system and medium based on deep learning - Google Patents
- Publication number
- CN114782855A (application CN202210229362.2A)
- Authority
- CN
- China
- Prior art keywords
- link
- evaluation
- deep learning
- surgical
- video frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention belongs to the technical field of deep learning and provides a deep learning-based cataract surgery evaluation method, system, and medium, comprising the steps of: S1, classifying the surgical stage of a video frame using the surgical-instrument features in the frame combined with ocular background features; S2, training a general-purpose network on the initial feature labels of each surgical step to extract, from the video frames of each step, the evaluation features used for surgical assessment; S3, converting the extracted evaluation features into quantitative information for each surgical step according to preset labels, and inputting the quantitative information into a trained preset classification-evaluation network to classify and score each step. The advantage of the invention lies in establishing a quantitative relationship between the descriptive evaluation indicators of the ICO-OSCAR standard and surgical features learnable by a deep learning network, so that artificial intelligence can replace the expert surgeon's full-time participation in surgical training, improving the objectivity, reliability, and responsiveness of the training feedback.
Description
Technical Field
The present invention relates to the technical field of deep learning, and in particular to a deep learning-based cataract surgery evaluation method, system, and medium.
Background Art
Cataract is the leading cause of blindness; in China, cataract patients account for roughly half of all blind people. Surgery is the main way to help patients regain their sight, yet China's surgical rate remains low, and raising the cataract surgery rate while ensuring surgical outcomes is one of the most pressing problems in current blindness prevention and treatment. Cataract surgery has entered the refractive era: patients' individual needs and their pursuit of postoperative visual quality place higher demands on incision placement, capsulorhexis size, and the centration of the implanted intraocular lens. Timely intraoperative evaluation feedback on the incision, capsulorhexis, and intraocular lens implantation steps is therefore particularly important for improving novice surgeons' skills in cataract surgery training.
Standardized cataract surgery training aims to shorten the learning curve, standardize the surgical workflow, and reduce surgical complications. Feedback in traditional cataract surgery training typically follows the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric (ICO-OSCAR): expert surgeons score each step of the operation one by one against the rubric. This work is time-consuming and subject to large inter-rater variability, making it difficult to meet clinical requirements through training within a short period.
Summary of the Invention
The purpose of the present invention is to provide a deep learning-based cataract surgery evaluation method, system, and medium to solve the above problems.
To achieve the above purpose, the present invention adopts the following technical solution:
A deep learning-based cataract surgery evaluation method, comprising the steps of:
S1. Using surgical-instrument features in a video frame combined with ocular background features, classifying, through a preset classification network, the surgical stage to which the video frame belongs, the surgical stages including an incision step, a capsulorhexis step, and an intraocular lens implantation step;
S2. Training a general-purpose network on the initial feature labels of each surgical step to extract, from the video frames of each step, the evaluation features used for surgical assessment;
S3. Converting the extracted evaluation features into quantitative information for each surgical step according to preset labels, and inputting the quantitative information into a trained preset classification-evaluation network to classify and score each step.
Further, the step of classifying the surgical stage of the video frame includes:
S11. Performing stratified sampling on the video frames;
S12. Obtaining the surgical-instrument and ocular background regions in the video frame through a trained preset object-detection model, and batch-cropping those regions;
S13. Feeding the cropped surgical-instrument and ocular background regions, together with the corresponding video frame, into a trained classifier, which outputs the classification result for the surgical stage of that frame.
Further, the step of training the general-purpose network includes:
T1. Randomly cropping an image patch from a video frame and inputting it, together with the preceding and following frame images, into a spatial feature encoder to compute the corresponding spatial features;
T2. Using a differentiable tracker to compute, within the spatial features of the preceding and following frames, the localization parameters of the region that best matches the cropped patch, and performing bilinear sampling with a bilinear sampler to obtain the spatial features of the best-matching patch;
T3. Training the spatial feature encoder and the differentiable tracker end to end through steps T1 and T2 to obtain the trained general-purpose network.
Further, the evaluation features used for surgical assessment include surgical-instrument position information, optical-flow-field information, limbal shape and position features, and intraocular lens position features.
Further, the steps of classifying and scoring the incision step include:
A1. Fitting the corneal limbus from the limbal position features to establish the limbal center;
A2. Taking the limbal center as a reference point, obtaining the relative motion trajectory of the surgical instrument from its position information;
A3. Inputting the relative motion trajectory of the surgical instrument and the limbal shape features into the preset classification-evaluation network, which evaluates the incision step according to the ICO-OSCAR standard and outputs an operation score for the incision step.
Further, the step of classifying and scoring the capsulorhexis step is:
Inputting the optical-flow-field information into the preset classification-evaluation network, which evaluates the capsulorhexis step according to the ICO-OSCAR standard and outputs an operation score for the capsulorhexis step.
Further, the steps of classifying and scoring the intraocular lens implantation step are:
B1. Fitting the limbal position features and the intraocular lens position features extracted by the general-purpose network to obtain the respective center-point positions of the limbus and the lens;
B2. Inputting the two center-point positions into the preset classification-evaluation network, which evaluates the intraocular lens implantation step according to the ICO-OSCAR standard and outputs an operation score for the implantation step.
In a second aspect, the present invention provides a deep learning-based cataract surgery evaluation system comprising at least one processor and at least one memory, the memory storing a computer program that, when executed by the processor, enables the processor to perform the deep learning-based cataract surgery evaluation method described above.
In a third aspect, the present invention provides a computer-readable storage medium; when the instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the deep learning-based cataract surgery evaluation method described above.
Compared with the prior art, the present invention offers at least the following beneficial effects:
(1) Taking the surgical instruments involved in the different surgical steps as the main learned features and combining them with ocular background information, the invention accurately classifies video frames with a trained, precise classifier, providing algorithmic support for data processing prior to surgical evaluation;
(2) By developing a general-purpose multi-feature extraction network, the invention extracts the quantifiable surgical features of the different surgical steps with a single network; the instrument positions, optical flow, ocular deformation features, and other information obtained by this network, combined with classification labels, enable skill evaluation of the incision, capsulorhexis, and intraocular lens implantation steps;
(3) A quantitative relationship is established between the descriptive evaluation indicators of the ICO-OSCAR standard and features learnable by a deep learning network, such as the surgical path, limbal shape, and instrument optical-flow information, so that artificial intelligence can replace the expert surgeon's full-time participation in surgical training, improving the objectivity, reliability, and responsiveness of the training feedback.
Brief Description of the Drawings
Fig. 1 is a flowchart of the deep learning-based cataract surgery evaluation method in an embodiment of the present invention;
Fig. 2 is a flowchart of classifying the surgical stage of a video frame in an embodiment of the present invention;
Fig. 3 is a flowchart of training the general-purpose network in an embodiment of the present invention;
Fig. 4 is a schematic diagram of general-purpose network training in an embodiment of the present invention;
Fig. 5 is a flowchart of classifying and scoring the incision step in an embodiment of the present invention;
Fig. 6 is a flowchart of classifying and scoring the intraocular lens implantation step in an embodiment of the present invention.
Detailed Description of the Embodiments
It should be noted that the technical solutions of the various embodiments of the present invention can be combined with one another, but only insofar as a person of ordinary skill in the art can realize the combination; where a combination of technical solutions is contradictory or unrealizable, that combination should be deemed not to exist and falls outside the scope of protection claimed by the present invention.
Specific embodiments of the present invention follow, and the technical solutions of the present invention are further described with reference to the accompanying drawings; however, the present invention is not limited to these embodiments.
As shown in Fig. 1, a deep learning-based cataract surgery evaluation method of the present invention comprises the steps of:
S1. Using surgical-instrument features in a video frame combined with ocular background features, classify, through a preset classification network, the surgical stage to which the video frame belongs; the surgical stages include the incision step, the capsulorhexis step, and the intraocular lens implantation step.
As shown in Fig. 2, the step of classifying the surgical stage of the video frame includes:
S11. Perform stratified sampling on the video frames.
Surgeons' skill levels are uneven, the requirements of the surgical stages differ, the recordings are long, the stages vary in duration, and the surgical scene is complex and changeable. The present invention therefore applies stratified sampling to the video frames, which effectively improves the training efficiency of the network model.
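The stratified sampling described above can be sketched as follows; this is a minimal illustration in plain Python, and the per-stage sample size and stage labels are hypothetical, since the patent does not specify a concrete sampling implementation:

```python
import random
from collections import Counter

def stratified_sample(frame_labels, per_stage, seed=0):
    """Draw a fixed-size random sample of frame indices from each
    surgical stage, so that long stages do not dominate training."""
    rng = random.Random(seed)
    by_stage = {}
    for idx, stage in enumerate(frame_labels):
        by_stage.setdefault(stage, []).append(idx)
    sample = []
    for stage in sorted(by_stage):
        indices = by_stage[stage]
        # a stage shorter than the quota contributes all of its frames
        sample.extend(rng.sample(indices, min(per_stage, len(indices))))
    return sorted(sample)

# toy run: one long stage and two shorter ones
labels = ["incision"] * 1000 + ["capsulorhexis"] * 300 + ["iol"] * 50
picked = stratified_sample(labels, per_stage=100)
counts = Counter(labels[i] for i in picked)
```

Sampling per stage rather than uniformly over the whole video keeps the class balance of the training set roughly even, which is the efficiency benefit the passage describes.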
S12. Obtain the surgical-instrument and ocular background regions in the video frame through a trained preset object-detection model, and batch-crop those regions.
The present invention trains a YOLOv3-based object-detection model on surgical-instrument and ocular background labels, obtains the instrument and ocular background regions of interest in the video frame, and batch-crops them as classifier inputs, thereby enriching the fine-grained layout information of the frame.
S13. Feed the cropped surgical-instrument and ocular background regions, together with the corresponding video frame, into a trained classifier, which outputs the classification result for the surgical stage of that frame.
The surgical-instrument regions, ocular background regions, and corresponding full frames are each fed into a ResNet network to extract global and local features. The features output by each classifier's fully connected layer are then concatenated as a fine-grained representation of the whole frame, improving the accuracy of video-frame classification.
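The feature-concatenation idea can be illustrated with a toy, framework-free sketch. The feature vectors and the nearest-centroid decision rule below are hypothetical stand-ins for the ResNet branches and the final classifier, not the patent's networks:

```python
import math

def fuse(frame_feat, instrument_feat, background_feat):
    """Concatenate the global (full-frame) and local (cropped-region)
    feature vectors into one fine-grained frame representation."""
    return list(frame_feat) + list(instrument_feat) + list(background_feat)

def nearest_centroid(fused, centroids):
    """Assign the fused feature to the stage with the closest centroid;
    a simple stand-in for the trained classification head."""
    return min(centroids, key=lambda stage: math.dist(fused, centroids[stage]))

# toy per-stage centroids in the fused 6-dimensional feature space
centroids = {
    "incision": [1.0, 0.0, 0.0, 1.0, 0.0, 0.0],
    "capsulorhexis": [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],
    "iol_implantation": [0.0, 0.0, 1.0, 0.0, 0.0, 1.0],
}
fused = fuse([0.9, 0.1], [0.1, 0.8], [0.0, 0.1])
stage = nearest_centroid(fused, centroids)
```

The point of the sketch is only the fusion step: global and local evidence live side by side in one vector, so the classifier can weigh instrument appearance and ocular context jointly.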
According to the invention, the surgical instruments involved in the different surgical steps serve as the main learned features and are combined with ocular background information to classify video frames accurately, providing algorithmic support for subsequent data processing.
S2. Train a general-purpose network on the initial feature labels of each surgical step to extract, from the video frames of each step, the evaluation features used for surgical assessment.
To improve the generality of the feature-extraction network, reduce the labeling workload, and raise feature-extraction efficiency, the present invention learns visual correspondences from unlabeled video: a patch (image block) cropped from a frame is tracked forward and backward across several consecutive frames, the feature space is learned in the process, and the visual similarity of the target patch across consecutive frames of the time series is computed. When extracting features for the incision, capsulorhexis, and intraocular lens implantation steps, given the labels of those steps, the network can directly extract the features required for evaluating each step.
As shown in Figs. 3 and 4, in the present invention the step of training the general-purpose network includes:
T1. Randomly crop an image patch from a video frame and input it, together with the preceding and following frame images, into the spatial feature encoder to compute the corresponding spatial features.
T2. Use the differentiable tracker to compute, within the spatial features of the preceding and following frames, the localization parameters of the region that best matches the patch, and perform bilinear sampling with the bilinear sampler to obtain the spatial features of the best-matching patch.
T3. Train the spatial feature encoder and the differentiable tracker end to end through steps T1 and T2 to obtain the trained general-purpose network.
In the present invention, a patch P_t randomly cropped from a video frame and the preceding frame image I_{t-1} are input separately into a ResNet-based spatial feature encoder Φ, which computes the spatial features x_P and x_I; the channel dimension of the spatial features is normalized to facilitate the subsequent similarity computation.
The differentiable tracker T first measures the similarity between x_I and x_P over spatial coordinates, then computes the localization parameters θ of the region in feature x_I that best matches x_P. The bilinear sampler samples the image feature x_I at θ to produce a new patch feature x_P'; the similarity between this new patch feature and its neighboring frame images is then computed in turn, forming a cycle that trains the network.
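The core matching operation, finding the feature-map location most similar to the patch feature, can be sketched in plain Python. This uses a hard argmax over cosine similarity; the patent's tracker is a differentiable (soft) version of this idea, and the feature values below are toy assumptions:

```python
import math

def l2_normalize(v):
    """Scale a feature vector to unit length (channel normalization)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def best_match(patch_feat, frame_feats):
    """Return the spatial position whose feature vector is most
    cosine-similar to the patch feature."""
    p = l2_normalize(patch_feat)
    best_pos, best_score = None, -2.0
    for pos, feat in frame_feats.items():
        f = l2_normalize(feat)
        score = sum(a * b for a, b in zip(p, f))  # cosine similarity
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

# toy 2x2 feature map of the previous frame: position -> feature vector
frame_feats = {
    (0, 0): [1.0, 0.0, 0.0],
    (0, 1): [0.0, 1.0, 0.0],
    (1, 0): [0.7, 0.7, 0.0],
    (1, 1): [0.0, 0.0, 1.0],
}
pos = best_match([0.9, 0.1, 0.0], frame_feats)
```

Replacing the hard `max` with a softmax-weighted average of positions is what makes the real tracker differentiable, so the encoder and tracker can be trained end to end through the cycle.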
Once the general-purpose network has been trained, given different feature labels it can extract features for the incision, capsulorhexis, and intraocular lens implantation steps, obtaining the surgical-instrument positions, optical-flow field, limbal shape, intraocular lens position, and related information for use in the subsequent classification and scoring of each step.
S3. Convert the extracted evaluation features into quantitative information for each surgical step according to preset labels, and input the quantitative information into the trained preset classification-evaluation network to classify and score each step.
The instrument positions, optical-flow field, limbal shape, intraocular lens position, and related information obtained from the general-purpose network are converted into quantitative information that can be evaluated, for example the instrument motion trajectory and the instrument's speed and direction of motion (i.e., the optical-flow-field information). Combined with the limbal position, shape changes, and related information, and with the classification labels assigned by expert surgeons according to the ICO-OSCAR standard, separate classification models are trained to classify and score the incision, capsulorhexis, and intraocular lens implantation steps.
As shown in Fig. 5, the steps of classifying and scoring the incision step include:
A1. Fit the corneal limbus from the limbal position features to establish the limbal center;
A2. Taking the limbal center as a reference point, obtain the relative motion trajectory of the surgical instrument from its position information;
A3. Input the relative motion trajectory of the surgical instrument and the limbal shape features into the preset classification-evaluation network, which evaluates the incision step according to the ICO-OSCAR standard and outputs an operation score for the incision step.
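Steps A1 and A2 can be sketched as follows, using an algebraic (Kåsa) least-squares circle fit as one plausible way to establish the limbal center from boundary points. The fitting method and the toy coordinates are assumptions, since the patent does not specify how the limbus is fitted:

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) in the least-squares
    sense, then recover the centre (cx, cy) and radius r."""
    # accumulate the 3x3 normal equations A^T A u = A^T b
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # solve by Gaussian elimination with partial pivoting
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            fct = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= fct * m[col][c]
    u = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        u[r] = (m[r][3] - sum(m[r][c] * u[c] for c in range(r + 1, 3))) / m[r][r]
    d, e, f = u
    cx, cy = -d / 2.0, -e / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - f)

def relative_trajectory(instrument_positions, centre):
    """Express instrument tip positions relative to the limbal centre."""
    cx, cy = centre
    return [(x - cx, y - cy) for x, y in instrument_positions]

# toy limbal boundary: points on a circle of centre (3, 4), radius 5
limbus = [(3 + 5 * math.cos(t), 4 + 5 * math.sin(t))
          for t in (k * math.pi / 4 for k in range(8))]
cx, cy, r = fit_circle(limbus)
traj = relative_trajectory([(8.0, 4.0), (7.0, 5.0)], (cx, cy))
```

Expressing the trajectory relative to the fitted center makes it invariant to eye position in the frame, which is presumably why the center serves as the reference point.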
The step of classifying and scoring the capsulorhexis step is:
Input the optical-flow-field information into the preset classification-evaluation network, which evaluates the capsulorhexis step according to the ICO-OSCAR standard and outputs an operation score for the capsulorhexis step.
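Before a scoring network can consume it, a raw flow field is typically reduced to quantitative motion statistics; a minimal sketch follows, in which the specific statistics (speed mean, maximum, and spread) are illustrative assumptions rather than the patent's feature set:

```python
import math
import statistics

def flow_statistics(flow):
    """Summarise a sparse optical-flow field, given as (dx, dy)
    displacement vectors, into simple motion statistics that can be
    fed to a downstream scoring model."""
    speeds = [math.hypot(dx, dy) for dx, dy in flow]
    return {
        "mean_speed": statistics.fmean(speeds),
        "max_speed": max(speeds),
        "speed_std": statistics.pstdev(speeds),  # proxy for jerkiness
    }

stats = flow_statistics([(3.0, 4.0), (0.0, 0.0)])
```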
As shown in Fig. 6, the steps of classifying and scoring the intraocular lens implantation step are:
B1. Fit the limbal position features and the intraocular lens position features extracted by the general-purpose network to obtain the respective center-point positions of the limbus and the lens;
B2. Input the two center-point positions into the preset classification-evaluation network, which evaluates the intraocular lens implantation step according to the ICO-OSCAR standard and outputs an operation score for the implantation step.
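A natural quantity to derive from the two fitted centers is the lens decentration; a short hypothetical sketch, in which the normalisation by limbal radius is an assumption added for illustration:

```python
import math

def decentration(limbus_centre, iol_centre, limbus_radius):
    """Distance between the limbal and IOL centres, returned both in
    absolute units and as a fraction of the limbal radius."""
    d = math.dist(limbus_centre, iol_centre)
    return d, d / limbus_radius

# toy centres in pixel coordinates
dist_px, dist_rel = decentration((0.0, 0.0), (3.0, 4.0), 10.0)
```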
By establishing a quantitative relationship between the descriptive evaluation indicators of the ICO-OSCAR standard and features learnable by a deep learning network, such as the instrument path, limbal shape, and instrument optical-flow information, the present invention enables artificial intelligence to replace the expert surgeon's full-time participation in surgical training, improving the objectivity, reliability, and responsiveness of the training feedback.
In another embodiment, the present invention further provides a deep learning-based cataract surgery evaluation system comprising at least one processor and at least one memory, the memory storing a computer program that, when executed by the processor, enables the processor to perform the above deep learning-based cataract surgery evaluation method.
In another embodiment, the present invention further provides a computer-readable storage medium; when the instructions in the storage medium are executed by a processor in a device, the device is enabled to perform the above deep learning-based cataract surgery evaluation method.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data-processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute similar approaches, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (9)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210229362.2A | 2022-03-10 | 2022-03-10 | Cataract surgery evaluation method, system and medium based on deep learning |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210229362.2A | 2022-03-10 | 2022-03-10 | Cataract surgery evaluation method, system and medium based on deep learning |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114782855A | 2022-07-22 |
Family
ID=82424103
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210229362.2A (Pending) | Cataract surgery evaluation method, system and medium based on deep learning | 2022-03-10 | 2022-03-10 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114782855A (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115205769A (*) | 2022-09-16 | 2022-10-18 | Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences | Ophthalmic surgery skill evaluation method, system and storage medium |
| CN119418164A (*) | 2024-10-29 | 2025-02-11 | Qilu Hospital of Shandong University | Surgical evaluation method and system based on multimodal representation and causal reasoning |
| CN119851552A (*) | 2025-03-19 | 2025-04-18 | Mianyang Third People's Hospital | Ophthalmic simulation teaching equipment and method |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150320510A1 (*) | 2014-05-12 | 2015-11-12 | University Of Rochester | Computer Vision Based Method And System For Evaluating And Grading Surgical Procedures |
| CN113662664A (*) | 2021-09-29 | 2021-11-19 | Harbin Institute of Technology | Instrument tracking-based objective and automatic evaluation method for surgical operation quality |
2022-03-10: Application CN202210229362.2A filed in China (CN); published as CN114782855A (en); legal status: Pending.
Non-Patent Citations (2)
| Title |
|---|
| Xiaolong Wang et al.: "Learning Correspondence from the Cycle-consistency of Time", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9 January 2020, pages 1-11 * |
| Yuanyuan Gu et al.: "Construction of Quantitative Indexes for Cataract Surgery Evaluation Based on Deep Learning", Ophthalmic Medical Image Analysis, 20 November 2020, pages 195-205 * |
Similar Documents
| Publication | Title |
|---|---|
| CN114782855A (en) | Cataract surgery evaluation method, system and medium based on deep learning |
| US20250176798A1 (en) | Methods, systems, and computer readable media for generating and providing artificial intelligence assisted surgical guidance |
| CN111616800B (en) | Ophthalmic surgery navigation system |
| KR20200005409A (en) | Fundus image management device and method for determining suitability of fundus images |
| CN112233087A (en) | Artificial intelligence-based ophthalmic ultrasonic disease diagnosis method and system |
| CN111553436A (en) | Training data generation method, model training method and equipment |
| CN111583261B (en) | Method and terminal for analyzing ultra-wide-angle fundus images |
| Fox et al. | Pixel-based tool segmentation in cataract surgery videos with Mask R-CNN |
| CN112950577A (en) | Image processing method, image processing device, electronic equipment and storage medium |
| Tu et al. | Phase-specific augmented reality guidance for microscopic cataract surgery using spatiotemporal fusion network |
| Guarin et al. | The effect of improving facial alignment accuracy on the video-based detection of neurological diseases |
| CN111160431A (en) | Method and device for identifying keratoconus based on multi-dimensional feature fusion |
| CN109903297A (en) | Coronary artery segmentation method and system based on a classification model |
| CN114931436B (en) | Cataract surgery navigation system |
| Ju et al. | Bridge the domain gap between ultra-wide-field and traditional fundus images via adversarial domain adaptation |
| CN115205769A (en) | Ophthalmic surgery skill evaluation method, system and storage medium |
| Tu et al. | Efficient spatiotemporal learning of microscopic video for augmented reality-guided phacoemulsification cataract surgery |
| Wijewickrema et al. | Region-specific automated feedback in temporal bone surgery simulation |
| CN109730769B (en) | Machine-vision-based intelligent tracking method and system for precise skin tumor surgery |
| CN118037650A (en) | Retinal detachment region localization method and system based on weakly supervised learning |
| CN116894805B (en) | Lesion feature identification system based on wide-angle fundus images |
| CN115909470B (en) | Deep-learning-based fully automatic system and method for predicting postoperative eyelid appearance |
| EP4521365A1 (en) | Determining types of microsurgical interventions |
| Gandomi et al. | A Deep Dive Into Capsulorhexis Segmentation: From Dataset Creation to SAM Fine-tuning |
| CN114283260B (en) | AR navigation method and system for corneal transplant suturing based on an instance segmentation network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |