CN112085830A - Optical coherent angiography imaging method based on machine learning - Google Patents
Optical coherent angiography imaging method based on machine learning
- Publication number
- CN112085830A (application numbers CN201910513946.0A, CN201910513946A)
- Authority
- CN
- China
- Prior art keywords
- machine learning
- oct
- network model
- training
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000010801 machine learning Methods 0.000 title claims abstract description 70
- 238000003384 imaging method Methods 0.000 title claims abstract description 36
- 238000002583 angiography Methods 0.000 title claims abstract description 20
- 230000003287 optical effect Effects 0.000 title claims abstract description 16
- 230000001427 coherent effect Effects 0.000 title claims abstract description 7
- 238000012549 training Methods 0.000 claims abstract description 55
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 22
- 238000000034 method Methods 0.000 claims abstract description 14
- 230000000694 effects Effects 0.000 claims abstract description 9
- 238000012360 testing method Methods 0.000 claims description 15
- 238000013528 artificial neural network Methods 0.000 claims description 7
- 238000013527 convolutional neural network Methods 0.000 claims description 7
- 238000011056 performance test Methods 0.000 claims description 5
- 230000003044 adaptive effect Effects 0.000 claims description 4
- 238000005457 optimization Methods 0.000 claims description 3
- 230000008569 process Effects 0.000 claims description 3
- 230000000306 recurrent effect Effects 0.000 claims description 2
- 238000012216 screening Methods 0.000 claims description 2
- 238000001514 detection method Methods 0.000 abstract description 2
- 230000002792 vascular Effects 0.000 abstract description 2
- 238000002601 radiography Methods 0.000 abstract 2
- 238000005516 engineering process Methods 0.000 description 11
- 241000282414 Homo sapiens Species 0.000 description 6
- 210000004204 blood vessel Anatomy 0.000 description 5
- 238000013135 deep learning Methods 0.000 description 4
- 238000011161 development Methods 0.000 description 4
- 230000018109 developmental process Effects 0.000 description 4
- 230000006870 function Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 2
- 210000003743 erythrocyte Anatomy 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 210000001525 retina Anatomy 0.000 description 2
- 230000035945 sensitivity Effects 0.000 description 2
- 238000003325 tomography Methods 0.000 description 2
- 206010044565 Tremor Diseases 0.000 description 1
- 230000004913 activation Effects 0.000 description 1
- 238000000149 argon plasma sintering Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000017531 blood circulation Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000019771 cognition Effects 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 239000002872 contrast media Substances 0.000 description 1
- 239000008358 core component Substances 0.000 description 1
- 238000007418 data mining Methods 0.000 description 1
- 238000003745 diagnosis Methods 0.000 description 1
- 238000011423 initialization method Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000035479 physiological effects, processes and functions Effects 0.000 description 1
- 230000002035 prolonged effect Effects 0.000 description 1
- 238000007637 random forest analysis Methods 0.000 description 1
- 230000002787 reinforcement Effects 0.000 description 1
- 230000029058 respiratory gaseous exchange Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000004256 retinal image Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
- G06T2207/10121—Fluoroscopy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to optical coherence angiography imaging technology, and in particular to a machine-learning-based optical coherence angiography imaging method.
Background Art
Optical coherence tomography (OCT) is a high-resolution, non-contact, high-speed three-dimensional imaging technique. It exploits the coherence of light scattered in biological tissue, and its signal contrast derives from differences in the light-scattering properties of different tissues. OCT combines semiconductor and ultrafast-laser technology: core components such as a broadband light source, a Michelson interferometer, and photodetectors capture the backscattered signal from the tissue, and computer-based digital signal processing then yields real-time, micron-scale tomographic images. OCT has therefore long been one of the important tools of anatomical imaging diagnosis; it plays a key role in clinical ophthalmic examination and is also an important driving force in fields such as dermatology, gastroenterology, cardiology, and neurology.
With the development of science and technology, OCT has undergone numerous major hardware and software breakthroughs over the past three decades, gaining faster imaging speeds and higher system sensitivity. In particular, after the frequency-domain OCT technique matured around 2002, OCT attracted attention and found applications in many fields.
In 1991, Huang et al. at the Massachusetts Institute of Technology built the first OCT prototype, with a longitudinal resolution of 15 μm, and published the first OCT scan of an ex vivo human retina together with the corresponding histological section in Science, demonstrating the feasibility of the OCT system. In 2002, Wojtkowski et al. obtained the world's first in vivo human retinal image based on frequency-domain OCT, and Johannes and Leitgeb subsequently compared the parameters of frequency-domain OCT against time-domain OCT both theoretically and experimentally, showing that frequency-domain OCT offers higher sensitivity and faster imaging speed. Since then, frequency-domain OCT has gradually replaced time-domain OCT and has received widespread attention and application.
Optical coherence tomography angiography (OCTA) is a non-invasive vascular imaging technique that has emerged in recent years. During imaging, the signal beam is scanned over the sample by a galvanometer system; the scanned region is generally rectangular and is divided into a fast-axis direction and a slow-axis direction. Along the fast axis the beam repeatedly scans the same line several times (typically four), recording the OCT signal of the same position at different times. Algorithmic processing then removes the tissue information, extracts the blood-flow signal, and generates the angiographic image. OCTA cleverly uses flowing red blood cells as the contrast agent: as red blood cells keep flowing through a vessel, the OCT signal inside the vessel keeps changing, which distinguishes it from the stable signal of static tissue.
Current OCTA imaging algorithms fall into three categories according to the source of the vascular information: phase-based, amplitude-based, and combined phase-and-amplitude algorithms. In essence, they analytically compare the OCT signals acquired at the same position at different times. However, these methods typically exploit only part of the information contained in the OCT signal, resulting in angiograms with a low signal-to-noise ratio and severe speckle. The main remedy at present is to increase the number of scans at the same position so as to strengthen the vascular signal, but this makes the scan time excessively long, and sample motion then produces artifacts (for example, eye movement and breathing during an ophthalmic examination). In addition, prolonged laser irradiation can damage biological tissue.
Summary of the Invention
In view of the above problems in the prior art, the present invention proposes a machine-learning-based optical coherence angiography imaging method.
The machine-learning-based optical coherence angiography imaging method of the present invention comprises the following steps:
1) Generating the original data set:
The OCT three-dimensional structural images of samples acquired with an OCTA device are used to generate the original data set required for network-model training. The original data set comprises j×k groups of OCT structural image sequences, each group comprising i two-dimensional cross-sectional (B-scan) OCT structural images, where k is the number of samples, j is the number of slow-axis scanning positions per sample, and i is the number of scans at the same slow-axis scanning position of the same sample; i is a natural number greater than 4, j is a natural number greater than 50, and k is a natural number greater than 5.
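A minimal sketch of how the acquired B-scans might be arranged into these j×k groups; Python/NumPy and the assumption that the i repeats of each slow-axis position are stored contiguously are illustrative choices, not specified by the patent:

```python
import numpy as np

def group_bscans(bscans, i, j, k):
    """Arrange i*j*k acquired B-scans into j*k groups of i repeated scans.

    bscans: array of shape (i*j*k, H, W), assumed ordered so that the i
    repeats of each slow-axis position are contiguous.
    Returns an array of shape (j*k, i, H, W).
    """
    n_scans, height, width = bscans.shape
    assert n_scans == i * j * k, "unexpected number of B-scans"
    return bscans.reshape(j * k, i, height, width)
```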
2) Data screening:
A rigid registration algorithm is used to register the i B-scan OCT structural images within each group. After registration, a correlation algorithm is used to compute the registration accuracy, the groups whose registration is poor are discarded in their entirety, and n groups of screened OCT structural images are retained, where n is a natural number and n ≤ j×k.
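Purely as an illustration — the patent names neither a specific rigid-registration routine nor a specific correlation measure — the sketch below uses phase cross-correlation from scikit-image for translation-only registration and the mean pairwise correlation as the registration-accuracy score; the library choices and the keep_fraction parameter are assumptions:

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def register_group(group):
    """Rigidly align the i repeated B-scans of one group to the first scan."""
    ref = group[0]
    aligned = [ref]
    for frame in group[1:]:
        offset, _, _ = phase_cross_correlation(ref, frame)
        aligned.append(shift(frame, offset, order=1))
    return np.stack(aligned)

def registration_score(aligned):
    """Mean pairwise correlation between registered B-scans (higher is better)."""
    flat = aligned.reshape(len(aligned), -1)
    corr = np.corrcoef(flat)
    upper = np.triu_indices(len(aligned), k=1)
    return corr[upper].mean()

def screen_groups(groups, keep_fraction=0.7):
    """Register every group, then keep only the best-registered fraction."""
    registered = [register_group(g) for g in groups]
    scores = np.array([registration_score(r) for r in registered])
    n_keep = int(len(groups) * keep_fraction)
    best = np.argsort(scores)[::-1][:n_keep]
    return [registered[idx] for idx in best]
```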
3) Generating the training data set:
The n groups of screened OCT structural images obtained in step 2) are processed with an OCTA algorithm for angiographic imaging; each group yields one B-scan OCTA angiogram, which is called the label image. From the i B-scan OCT structural images of the group corresponding to each label image, m B-scan OCT structural images are taken out and called the input data; the input data are paired with the label image, and together the input data and label images constitute the training data set required for network-model training, where m = 2, 3 or 4.
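A minimal sketch of this pairing step is shown below. The patent does not fix which OCTA algorithm generates the labels; the inter-frame amplitude-variance computation used here is one common choice and serves only as an assumed placeholder:

```python
import numpy as np

def make_training_pair(registered_group, m=4):
    """Build one (input, label) pair from a registered group of i B-scans.

    Label: an amplitude-variance angiogram computed from all i scans
           (an assumed stand-in for the OCTA algorithm of step 3).
    Input: the first m structural B-scans of the same group.
    """
    frames = registered_group.astype(np.float32)
    label = np.std(frames, axis=0)
    label /= label.max() + 1e-8            # normalize to [0, 1]
    inputs = frames[:m]
    return inputs, label

# training_set = [make_training_pair(g, m=4) for g in screen_groups(groups)]
```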
4) Building the machine-learning network model:
A machine-learning network model is constructed and its hyperparameters are set. The training data set is divided into n1 groups forming the training set and n2 groups forming the test set; the training set and the test set are independent of each other, and n1 and n2 are natural numbers with n1 + n2 = n.
5) Training the machine-learning network model:
With the machine-learning network model built in step 4), the input data of the training data set serve as the model's input: the n1 groups of the training set are used to train the model, and the n2 groups of the test set are used to evaluate its performance. During training, the training set is divided into multiple batches and fed into the model repeatedly over multiple rounds, while the difference between the model's output image and the label image is evaluated as the training error used to train the model. After each batch of training, the test set is used to test the model's performance; once the performance-test metric of the model has stabilized, training is considered complete and the trained machine-learning network model is saved.
6) Performing OCTA angiography with the machine-learning network model:
With the trained machine-learning network model, the OCT structural images of a sample acquired by the OCTA device are used as the input, and the output image is the OCTA angiogram.
In step 1), when the OCTA device acquires a sample, scanning one slow-axis scanning position once yields one B-scan OCT structural image; the same slow-axis position is scanned i times, each sample has j slow-axis scanning positions, and there are k samples in total, giving i×j×k B-scan OCT structural images. These are divided into j×k groups of OCT structural images, each group containing the i B-scan OCT structural images of one slow-axis scanning position, i.e., each slow-axis scanning position corresponds to one group, which yields the original data set.
In step 2), the registration accuracies of the j×k registered groups of OCT structural images are compared; the n groups with the highest registration accuracy are retained, and the remaining groups are regarded as poorly registered and are discarded.
In step 4), the machine-learning network model is a deep convolutional neural network (CNN), a generative adversarial network (GAN), or a recurrent neural network (RNN); the hyperparameters include the number of network layers, the convolution kernels, the learning rate, the parameter initialization, the number of training rounds, and the batch size.
In step 5), the mean squared error, structural similarity, or peak signal-to-noise ratio between the output image and the label image is used as the training error and the performance-test metric, and one of stochastic gradient descent (SGD), the adaptive moment estimation optimizer (Adam), or the momentum algorithm is used to minimize the training error so as to train the machine-learning network model.
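A brief sketch of how these performance-test metrics could be computed (scikit-image is used for structural similarity; the helper names are illustrative rather than part of the patent):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(output, label):
    return float(np.mean((output - label) ** 2))

def psnr(output, label, data_range=1.0):
    return float(10.0 * np.log10(data_range ** 2 / (mse(output, label) + 1e-12)))

def ssim(output, label, data_range=1.0):
    return float(structural_similarity(output, label, data_range=data_range))
```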
In step 6), the OCT structural images of the sample are acquired by scanning the same slow-axis scanning position multiple times, with the number of scans ≤ 4.
Machine learning is an inevitable product of artificial-intelligence research reaching a certain stage and has become the core research topic of artificial intelligence. Its purpose is to let computers acquire knowledge or skills by imitating human learning behavior and to keep learning new knowledge so as to improve performance. Machine learning draws on physiology, psychology, cognitive science, and other disciplines; building on an understanding of how humans learn, it establishes computational or cognitive models resembling human learning, from which various learning theories and methods have been developed, and learning systems with specific applications are built for specific tasks.
Commonly used machine-learning algorithms currently include artificial neural networks, support vector machines, naive Bayes, random forests, sparse dictionaries, reinforcement learning, representation learning, and similarity metric learning. With the development of computer hardware, deep learning has gradually emerged as a comprehensive evolution of artificial neural networks. Deep learning extends the depth and width of artificial neural networks and can approximate arbitrarily complex nonlinear models, thereby learning the objective laws and intrinsic relationships hidden in the data. In a general sense, deep-learning algorithms include deep belief networks, deep neural networks, and convolutional neural networks, where deep belief networks and deep neural networks have very similar structures. The deep-learning networks currently most used in image processing are convolutional neural networks and generative adversarial networks. Machine learning has already been widely applied to medical-image reconstruction, enhancement, and segmentation, but has not yet been applied to OCTA image reconstruction.
Advantages of the Present Invention:
The present invention can play a major role in the OCTA field. Its strong data-mining capability enables OCTA devices to generate angiograms with a higher signal-to-noise ratio and better vessel connectivity, while largely suppressing the speckle commonly seen in OCT images. It is worth noting that the label images in the present invention are generated automatically by an algorithm; unlike common machine-learning applications, no expert annotation is needed to obtain the label data, which broadens the applicability of the method without introducing the systematic errors of different systems. Moreover, on the same OCTA device, to obtain OCTA angiograms of the same quality, the present invention can image with a lower probe power, reducing laser damage to biological tissue (e.g., in ophthalmology), or with a smaller amount of acquired data, i.e., fewer scans at the same position, so that the scan completes faster and the artifacts caused by overly long scan times and sample motion (such as patient eye movement and breathing during fundus imaging) are reduced.
Brief Description of the Drawings
Fig. 1 is a flowchart of the machine-learning-based optical coherence angiography imaging method of the present invention;
Fig. 2 is an OCTA image obtained by an embodiment of the machine-learning-based optical coherence angiography imaging method according to the present invention.
Detailed Description of the Embodiments
The present invention is further described below through specific embodiments with reference to the accompanying drawings.
The machine-learning-based optical coherence angiography imaging method of this embodiment, as shown in Fig. 1, comprises the following steps:
1) Generating the original data set:
OCT three-dimensional structural images of the retina acquired with an OCTA device are used to generate the original data set required for network-model training. The same slow-axis scanning position of the same sample is scanned 50 times, each sample has 100 slow-axis scanning positions, and the samples are 30 human eyes, so the original data set comprises 100×30 groups of OCT structural image sequences, each group containing 50 B-scan OCT structural images;
2) Data screening:
A rigid registration algorithm is used to register the 50 B-scan OCT structural images within each group. After registration, a correlation algorithm is used to compute the registration accuracy, the groups whose registration is poor are discarded in their entirety, and 70%, i.e., 100×30×0.7 groups, of screened OCT structural images are retained;
3) Generating the training data set:
The 100×30×0.7 groups of screened OCT structural images obtained in step 2) are processed with an OCTA algorithm for angiographic imaging; each group yields one B-scan OCTA angiogram, which serves as the label image. From the 50 B-scan OCT structural images of the group corresponding to each label image, 4 B-scan OCT structural images are taken out as the input data and paired with the label image; the input data and label images constitute the training data set required for network-model training;
4) Building the machine-learning network model:
The machine-learning network model is a deep convolutional neural network, DnCNN, consisting of 20 convolutional layers. The first layer convolves the 4 input OCT structural images with 64 convolution kernels of size 3×3×4 to generate 64 feature maps and uses ReLU as the activation function; layers 2 to 19 each convolve the feature maps of the preceding layer with 64 kernels of size 3×3×64 and apply ReLU after batch normalization; the 20th layer convolves the feature maps of the preceding layer with a single 3×3×64 kernel, and its output is the output image of the network. Among the network parameters, the learning rate is set to 0.001, the network parameters are initialized with the Kaiming initialization method, the number of training rounds is 50, the batch size is 16, and the ratio of the training set to the test set is 7:3;
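The sketch below illustrates a network of this shape in PyTorch (the framework is an assumption; the patent does not prescribe one): a 4-channel input, 20 convolutional layers with batch normalization in the intermediate layers, Kaiming initialization, and a single-channel output angiogram.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """20-layer DnCNN-style network: 4 structural B-scans in, 1 angiogram out."""

    def __init__(self, in_channels=4, features=64, depth=20):
        super().__init__()
        layers = [nn.Conv2d(in_channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]                      # layer 1
        for _ in range(depth - 2):                            # layers 2..19
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, 1, 3, padding=1)]      # layer 20
        self.net = nn.Sequential(*layers)
        for m in self.modules():                              # Kaiming initialization
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, nonlinearity="relu")

    def forward(self, x):                                     # x: (batch, 4, H, W)
        return self.net(x)
```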
5) Training the machine-learning network model:
With the machine-learning network model built in step 4), the input data of the training data set serve as the model's input: 100×30×0.7×0.7 groups form the training set used to train the model, and 100×30×0.7×0.3 groups form the test set used to evaluate its performance. During training, the training set is fed into the model in batches, repeated over 50 rounds, while the mean squared error between the model's output image and the label image is computed as the training error, and the adaptive moment estimation optimizer (Adam) is used to minimize the training error so as to train the model. After each batch of training, the test set is used to test the model's performance; when the performance-test metric (the mean squared error between the output image and the label image) has stabilized, training is considered complete and the trained machine-learning network model is saved;
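A condensed training-loop sketch under the settings of this embodiment (Adam, learning rate 0.001, MSE loss, 50 rounds, batch size 16, 7:3 split); the data-handling details and tensor shapes are assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, inputs, labels, epochs=50, batch_size=16, lr=1e-3, test_split=0.3):
    """inputs: (N, 4, H, W) structural B-scans; labels: (N, 1, H, W) angiograms."""
    n_test = int(len(inputs) * test_split)
    train_dl = DataLoader(TensorDataset(inputs[n_test:], labels[n_test:]),
                          batch_size=batch_size, shuffle=True)
    test_dl = DataLoader(TensorDataset(inputs[:n_test], labels[:n_test]),
                         batch_size=batch_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(epochs):
        model.train()
        for x, y in train_dl:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()                                   # performance test on the test set
        with torch.no_grad():
            test_mse = sum(loss_fn(model(x), y).item() for x, y in test_dl) / len(test_dl)
        print(f"round {epoch + 1}: test MSE = {test_mse:.6f}")
    torch.save(model.state_dict(), "octa_dncnn.pt")    # save the trained model
    return model
```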
6) Performing OCTA angiography with the machine-learning network model:
With the trained machine-learning network model, the OCT structural images of the sample acquired by the OCTA device are used as the input, the same slow-axis scanning position being scanned 4 times, and the output image is the OCTA angiogram, as shown in Fig. 2.
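For completeness, a minimal inference sketch under the same assumptions (the file names and preprocessing are illustrative only):

```python
import numpy as np
import torch

model = DnCNN()
model.load_state_dict(torch.load("octa_dncnn.pt"))
model.eval()

# four registered structural B-scans of one slow-axis position, shape (4, H, W)
bscans = np.load("bscans_one_position.npy").astype(np.float32)
x = torch.from_numpy(bscans).unsqueeze(0)           # (1, 4, H, W)
with torch.no_grad():
    angiogram = model(x).squeeze().numpy()          # (H, W) OCTA B-scan angiogram
```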
Finally, it should be noted that the embodiments are disclosed to help further understanding of the present invention, but those skilled in the art will understand that various substitutions and modifications are possible without departing from the spirit and scope of the present invention and the appended claims. Therefore, the present invention should not be limited to what is disclosed in the embodiments, and the scope of protection claimed by the present invention shall be defined by the claims.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910513946.0A CN112085830B (en) | 2019-06-14 | 2019-06-14 | Optical coherence angiography imaging method based on machine learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910513946.0A CN112085830B (en) | 2019-06-14 | 2019-06-14 | Optical coherence angiography imaging method based on machine learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112085830A true CN112085830A (en) | 2020-12-15 |
CN112085830B CN112085830B (en) | 2024-02-27 |
Family
ID=73733802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910513946.0A Active CN112085830B (en) | 2019-06-14 | 2019-06-14 | Optical coherence angiography imaging method based on machine learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112085830B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114209278A (en) * | 2021-12-14 | 2022-03-22 | 复旦大学 | Deep learning skin disease diagnosis system based on optical coherence tomography |
CN116313115A (en) * | 2023-05-10 | 2023-06-23 | 浙江大学 | Drug action mechanism prediction method based on mitochondrial dynamic phenotype and deep learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120213423A1 (en) * | 2009-05-29 | 2012-08-23 | University Of Pittsburgh -- Of The Commonwealth System Of Higher Education | Blood vessel segmentation with three dimensional spectral domain optical coherence tomography |
CN107865642A (en) * | 2017-09-28 | 2018-04-03 | 温州医科大学 | A kind of glaucoma filtering operation avascular filtering bleb evaluation method based on OCT Angiographies |
CN108670239A (en) * | 2018-05-22 | 2018-10-19 | 浙江大学 | A kind of the three-dimensional flow imaging method and system in feature based space |
US20190090732A1 (en) * | 2017-09-27 | 2019-03-28 | Topcon Corporation | Ophthalmic apparatus, ophthalmic image processing method and recording medium |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120213423A1 (en) * | 2009-05-29 | 2012-08-23 | University Of Pittsburgh -- Of The Commonwealth System Of Higher Education | Blood vessel segmentation with three dimensional spectral domain optical coherence tomography |
US20190090732A1 (en) * | 2017-09-27 | 2019-03-28 | Topcon Corporation | Ophthalmic apparatus, ophthalmic image processing method and recording medium |
CN107865642A (en) * | 2017-09-28 | 2018-04-03 | 温州医科大学 | A kind of glaucoma filtering operation avascular filtering bleb evaluation method based on OCT Angiographies |
CN108670239A (en) * | 2018-05-22 | 2018-10-19 | 浙江大学 | A kind of the three-dimensional flow imaging method and system in feature based space |
Non-Patent Citations (1)
Title |
---|
- ZHOU Liping; LI Pei; PAN Cong; GUO Li; DING Zhihua; LI Peng: "High-sensitivity, high-contrast, label-free three-dimensional optical microangiography system and its applications in brain science", Acta Physica Sinica (物理学报), vol. 65, no. 15 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114209278A (en) * | 2021-12-14 | 2022-03-22 | 复旦大学 | Deep learning skin disease diagnosis system based on optical coherence tomography |
CN114209278B (en) * | 2021-12-14 | 2023-08-25 | 复旦大学 | Deep learning skin disease diagnosis system based on optical coherence tomography |
CN116313115A (en) * | 2023-05-10 | 2023-06-23 | 浙江大学 | Drug action mechanism prediction method based on mitochondrial dynamic phenotype and deep learning |
CN116313115B (en) * | 2023-05-10 | 2023-08-15 | 浙江大学 | Drug Mechanism Prediction Method Based on Mitochondrial Dynamic Phenotype and Deep Learning |
Also Published As
Publication number | Publication date |
---|---|
CN112085830B (en) | 2024-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Deng et al. | Deep learning in photoacoustic imaging: a review | |
US12094082B2 (en) | Image processing apparatus, image processing method and computer-readable medium | |
US20230196572A1 (en) | Method and system for an end-to-end deep learning based optical coherence tomography (oct) multi retinal layer segmentation | |
Kadomoto et al. | Enhanced visualization of retinal microvasculature in optical coherence tomography angiography imaging via deep learning | |
CN113424222A (en) | System and method for providing stroke lesion segmentation using a conditional generation countermeasure network | |
Mujeeb Rahman et al. | Automatic screening of diabetic retinopathy using fundus images and machine learning algorithms | |
Wan Zaki et al. | Towards a connected mobile cataract screening system: A future approach | |
Le et al. | Segmentation and quantitative analysis of photoacoustic imaging: a review | |
Menten et al. | Physiology-based simulation of the retinal vasculature enables annotation-free segmentation of OCT angiographs | |
KR20220064408A (en) | Systems and methods for analyzing medical images based on spatiotemporal data | |
Karn et al. | On machine learning in clinical interpretation of retinal diseases using oct images | |
Miyagawa et al. | Lumen segmentation in optical coherence tomography images using convolutional neural network | |
Gurevich et al. | Development and experimental investigation of mathematical methods for automating the diagnostics and analysis of ophthalmological images | |
Huang et al. | Automatic Retinal Vessel Segmentation Based on an Improved U‐Net Approach | |
Kepp et al. | Segmentation of retinal low-cost optical coherence tomography images using deep learning | |
Biswas et al. | A method for delineation of bone surfaces in photoacoustic computed tomography of the finger | |
Li et al. | Deep learning algorithm for generating optical coherence tomography angiography (OCTA) maps of the retinal vasculature | |
Ottakath et al. | Ultrasound-based image analysis for predicting carotid artery stenosis risk: A comprehensive review of the problem, techniques, datasets, and future directions | |
CN112085830A (en) | Optical coherent angiography imaging method based on machine learning | |
Zhao et al. | Automatic generation of retinal optical coherence tomography images based on generative adversarial networks | |
Marciniak et al. | Neural Networks Application for Accurate Retina Vessel Segmentation from OCT Fundus Reconstruction | |
Sultana et al. | RIMNet: Image magnification network with residual block for retinal blood vessel segmentation | |
CN112489150A (en) | Deep neural network multi-scale sequential training method for rapid MRI | |
Kumar et al. | Improved Blood Vessels Segmentation of Retinal Image of Infants. | |
Ilyasova et al. | Systems for recognition and intelligent analysis of biomedical images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |