CN113837947B - Processing method for obtaining optical coherence tomography large focal depth image
- Publication number
- CN113837947B CN113837947B CN202111428250.1A CN202111428250A CN113837947B CN 113837947 B CN113837947 B CN 113837947B CN 202111428250 A CN202111428250 A CN 202111428250A CN 113837947 B CN113837947 B CN 113837947B
- Authority
- CN
- China
- Prior art keywords
- oct
- image
- face
- resolution
- resolution image
- Prior art date
- Legal status: Active
Links
- 238000012014 optical coherence tomography Methods 0.000 title claims abstract description 213
- 238000003672 processing method Methods 0.000 title claims abstract description 13
- 238000000034 method Methods 0.000 claims abstract description 45
- 238000013507 mapping Methods 0.000 claims abstract description 6
- 238000010276 construction Methods 0.000 claims abstract description 5
- 238000012545 processing Methods 0.000 claims description 12
- 238000013135 deep learning Methods 0.000 claims description 8
- 238000011426 transformation method Methods 0.000 claims description 8
- 238000010187 selection method Methods 0.000 claims description 2
- 238000003384 imaging method Methods 0.000 abstract description 7
- 238000011161 development Methods 0.000 abstract description 3
- 230000006870 function Effects 0.000 description 16
- 238000010586 diagram Methods 0.000 description 7
- 238000013528 artificial neural network Methods 0.000 description 6
- 238000006073 displacement reaction Methods 0.000 description 6
- 241000252212 Danio rerio Species 0.000 description 5
- 238000004422 calculation algorithm Methods 0.000 description 4
- 238000012549 training Methods 0.000 description 3
- 230000015572 biosynthetic process Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 238000000386 microscopy Methods 0.000 description 2
- 238000003786 synthesis reaction Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 239000002775 capsule Substances 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 230000033772 system development Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
Description
Technical Field
The invention belongs to the technical field of image processing and imaging, and in particular relates to a processing method for obtaining a large-focal-depth optical coherence tomography image.
Background Art
Optical coherence tomography (OCT) is a non-contact, non-invasive imaging technique. The lateral resolution of OCT is determined by the diffraction-limited spot size of the beam focused on the sample. For OCT as a three-dimensional imaging technique, the higher the lateral resolution, the smaller the depth of focus. To obtain imaging over a large depth, OCT system development usually involves a trade-off between lateral resolution and depth of focus.
Extending the depth of focus of OCT images while maintaining high resolution has long been a difficult problem. A typical hardware approach uses a binary phase spatial filter or an axicon lens to generate a Bessel beam that extends the depth of focus. In addition, in 2017 E. Bo et al. used multiple aperture synthesis to extend the depth of focus (Bo E, Luo Y, Chen S, Liu X, Wang N, Ge X, Wang X, Chen S, Chen S, Li J, Liu L. Depth-of-focus extension in optical coherence tomography via multiple aperture synthesis. Optica. 2017, 4(7), 701-706.). However, hardware methods usually require complex imaging systems and offer poor stability.
Compared with hardware-based methods, digital signal processing methods offer alternative and relatively inexpensive solutions. For example, interferometric synthetic aperture microscopy (ISAM) (Ralston T S, Marks D L, Carney P S, Boppart S A. Interferometric synthetic aperture microscopy. Nature Physics. 2007; 3(2), 129-134.) can perform high-resolution imaging over a large depth range, but it requires absolute phase stability during acquisition.
Summary of the Invention
The main purpose of the present invention is to solve the problem of how to obtain a large-focal-depth image while improving lateral resolution, by providing a processing method for obtaining a large-focal-depth optical coherence tomography image.
The method of the invention first establishes a dataset of OCT en face low-resolution and high-resolution image pairs registered under different defocus amounts, and then constructs OCT en face image super-resolution models. Through deep learning, the mapping between OCT en face high-resolution and low-resolution images is learned to achieve digital refocusing of the OCT three-dimensional image, yielding a high-resolution image with extended depth of focus, i.e., a large-focal-depth image.
To achieve the above purpose, the present invention adopts the following technical solutions:
A processing method for obtaining a large-focal-depth optical coherence tomography image, comprising three steps: constructing an optical coherence tomography (OCT) en face image dataset, constructing an OCT en face image super-resolution model, and realizing digital refocusing of OCT images based on the constructed super-resolution model;
Step 1: Construct the OCT en face image dataset;
The OCT en face image dataset is a dataset of OCT en face low-resolution and high-resolution image pairs registered under different defocus amounts; the specific construction method is as follows:
Step 1.1: Focus the incident beam inside the sample, and acquire OCT en face image sequences of a selected region inside the sample at equal intervals; the specific method is as follows:
Move the sample stage along a specified direction, and acquire OCT en face image sequences of the selected region inside the sample with the incident beam focused at M+1 positions, each sequence containing P images. The sequence acquired within the original depth of focus of the incident beam serves as the reference image sequence, and its images are OCT en face high-resolution images. The M sequences acquired at the remaining M positions outside the original depth of focus are OCT en face low-resolution image sequences under different defocus amounts, and these sequences correspond to different resolutions;
The length of the selected region inside the sample equals 0.8-1 times the original depth of focus of the incident beam;
M is a natural number between 2 and 8;
The number of images P in each image sequence is a natural number between 4 and 100;
Step 1.2: For the OCT en face low-resolution image sequences of different resolutions, use the OCT en face image registration method to find and register the OCT en face high-resolution images at the corresponding positions in the reference image sequence, establishing M datasets under different defocus amounts, each containing P registered OCT en face low-resolution/high-resolution image pairs;
The OCT en face image registration method comprises two steps: selection of OCT en face low-resolution/high-resolution image pairs, and fine registration of the low-resolution and high-resolution images;
(1) The image-pair selection method: for each defocused OCT en face low-resolution image extracted from the low-resolution image sequence acquired at a given defocus position, select one OCT en face high-resolution image from the reference image sequence for pairing;
The specific method for obtaining one image pair: preliminarily register the defocused OCT en face low-resolution image with each OCT en face high-resolution image in the reference image sequence; then compute the correlation coefficient r between the preliminarily registered defocused low-resolution image and each high-resolution image, and select the high-resolution image with the highest correlation coefficient to pair with the defocused low-resolution image;
The correlation coefficient r between two images is calculated by the following formula:
$$r=\frac{\sum_{x}\sum_{y}\left(f(x,y)-\bar{f}\right)\left(g(x,y)-\bar{g}\right)}{\sqrt{\sum_{x}\sum_{y}\left(f(x,y)-\bar{f}\right)^{2}\sum_{x}\sum_{y}\left(g(x,y)-\bar{g}\right)^{2}}}\qquad(1)$$
where f(x,y) and g(x,y) denote the gray values of the high-resolution and low-resolution images respectively, x denotes the image abscissa, y denotes the image ordinate, and $\bar{f}$ and $\bar{g}$ denote the mean gray values of the high-resolution and low-resolution images, respectively.
The preliminary registration of the OCT en face low-resolution and high-resolution images may use an affine transformation method, a rigid-body transformation method, a projective transformation method, a nonlinear transformation method, or the moments-and-principal-axes method;
(2) The fine registration of the OCT en face low-resolution and high-resolution images may use a pyramid registration method, a wavelet-transform registration method, a maximum-mutual-information registration method, or an atlas registration method;
Step 2: Construct the OCT en face image super-resolution model;
Using a deep learning method, separately learn the mapping between the OCT en face high-resolution images and the M groups of OCT en face low-resolution images of different resolutions, constructing M OCT en face image super-resolution models;
Step 3: Realize digital refocusing of OCT images based on the OCT en face image super-resolution models;
Step 3.1: For the OCT three-dimensional image data to be processed, first determine the focal position of the incident beam, and divide the data according to the focal position, the original depth of focus, and the defocus amounts, obtaining one OCT en face high-resolution image sequence within the original depth of focus and 2M OCT en face low-resolution image sequences in total, M groups before and M groups after the depth-of-focus range along the depth direction; the OCT en face high-resolution images do not require super-resolution processing;
Step 3.2: Apply the M OCT en face image super-resolution models constructed in Step 2 to the M groups of OCT en face low-resolution image sequences before and after the depth-of-focus range, each divided according to its defocus amount, to obtain M groups of OCT en face super-resolution image sequences before and after the depth-of-focus range. This refocuses the images and improves the lateral resolution of the OCT en face images outside the original depth of focus of the OCT three-dimensional image to be processed. Finally, restack the unprocessed OCT en face high-resolution image sequence within the depth-of-focus range together with the 2M processed OCT en face super-resolution image sequences, placed before and after the depth-of-focus range along the depth direction, to form a refocused OCT three-dimensional image with extended depth of focus.
Beneficial Effects of the Invention:
1. The present invention achieves depth-of-focus extension of OCT images digitally, without the assistance of any mechanical hardware, which reduces the hardware cost of system development;
2. The present invention combines an image registration algorithm with a deep learning method to achieve digital refocusing of OCT images, with fast processing speed;
3. The implementation of the present invention places low demands on the OCT hardware system, requires no phase matching, and generalizes well.
Brief Description of the Drawings
FIG. 1 is a flowchart of the processing method for obtaining a large-focal-depth optical coherence tomography image provided by the present invention;
FIG. 2 is a schematic diagram of OCT en face image sequence acquisition for a selected region inside the sample, where (a) illustrates acquisition of the reference image sequence and (b)-(e) illustrate acquisition of the OCT en face low-resolution image sequences under four different defocus amounts;
FIG. 3 is a flowchart of the OCT en face image-pair registration method provided by the present invention;
FIG. 4 is a schematic diagram of the generator structure of the present invention;
FIG. 5 is a schematic diagram of the structure of the residual dense block (RRDB) in the generator of the present invention;
FIG. 6 is a schematic diagram of the discriminator structure of the present invention;
FIG. 7 is a zebrafish OCT en face low-resolution image acquired by the present invention;
FIG. 8 is a zebrafish OCT en face high-resolution image acquired by the present invention;
FIG. 9 is a digitally refocused zebrafish OCT en face image produced by the present invention.
Detailed Description of the Embodiments
The realization, functional characteristics, and advantages of the present invention are further described below with reference to the accompanying drawings.
A processing method for obtaining a large-focal-depth optical coherence tomography image, whose flowchart is shown in FIG. 1, comprises:
Step 1: Construct the OCT en face image dataset;
The OCT en face image dataset is a dataset of OCT en face low-resolution and high-resolution image pairs registered under different defocus amounts; the specific construction method is as follows:
Focus the incident beam inside the sample, and acquire OCT en face image sequences of a selected region inside the sample at equal intervals; the specific method is as follows:
Move the sample stage along a specified direction, and acquire OCT en face image sequences of the selected region inside the sample with the incident beam focused at M+1 positions, each sequence containing P images. The sequence acquired within the original depth of focus of the incident beam serves as the reference image sequence, whose images are OCT en face high-resolution images; the M sequences acquired at the remaining M positions outside the original depth of focus are OCT en face low-resolution image sequences under different defocus amounts, corresponding to different resolutions. Use the OCT en face image registration method to find and register the OCT en face high-resolution images at the corresponding positions, establishing M datasets under different defocus amounts, each containing P registered OCT en face low-resolution/high-resolution image pairs;
The specific implementation is as follows:
First, the incident beam is focused inside the sample, and OCT en face image sequences of the selected region inside the sample are acquired at equal intervals (the number of images P per sequence is chosen from the natural numbers 4-100; in this embodiment P = 20). The acquisition is illustrated in FIG. 2, where 201 denotes the scanning objective, 202 the sample under test, 203 the sample stage, and 204 the selected region inside the sample. The length of the selected region equals 0.8-1 times the original depth of focus of the incident beam; in this embodiment the original depth of focus is 60 μm, and the length of the selected region is taken as 1 times the original depth of focus, i.e., 60 μm. As shown in FIG. 2(a), the incident beam is focused inside the sample, and one OCT en face image sequence is acquired within the original depth of focus as the reference image sequence, whose images are OCT en face high-resolution images. The sample stage 203 is then moved along the specified direction so that the sample 202 gradually moves away from or toward the scanning objective 201, changing the focal position of the beam inside the sample and giving the selected region 204 different defocus amounts. According to the defocus settings, M OCT en face low-resolution image sequences are acquired under different defocus amounts, corresponding to different resolutions. In this embodiment the sample 202 is moved gradually away from the scanning objective 201, M = 4, and the defocus amounts are set to 60 μm, 120 μm, 180 μm, and 240 μm, as shown in FIGS. 2(b)-2(e), respectively.
For the OCT en face low-resolution image sequences of different resolutions, the OCT en face image registration method is used to find and register the OCT en face high-resolution images at the corresponding positions in the reference image sequence, establishing four datasets under different defocus amounts, each containing 20 registered OCT en face low-resolution/high-resolution image pairs.
The OCT en face image registration method of this embodiment is designed as shown in FIG. 3 and comprises:
First, the OCT en face low-resolution/high-resolution image pairs are selected; the specific steps are as follows:
For each defocused OCT en face low-resolution image extracted from the low-resolution image sequence acquired at a given defocus position, one OCT en face high-resolution image is selected from the reference image sequence for pairing;
The specific method for obtaining one image pair: the defocused OCT en face low-resolution image is preliminarily registered with each OCT en face high-resolution image in the reference image sequence using an affine transformation; then the correlation coefficient r between the preliminarily registered defocused low-resolution image and each high-resolution image is computed, and the high-resolution image with the highest correlation coefficient is selected to pair with the defocused low-resolution image;
The correlation coefficient r between two images is calculated by the following formula:
$$r=\frac{\sum_{x}\sum_{y}\left(f(x,y)-\bar{f}\right)\left(g(x,y)-\bar{g}\right)}{\sqrt{\sum_{x}\sum_{y}\left(f(x,y)-\bar{f}\right)^{2}\sum_{x}\sum_{y}\left(g(x,y)-\bar{g}\right)^{2}}}\qquad(1)$$
where f(x,y) and g(x,y) denote the gray values of the high-resolution and low-resolution images respectively, x denotes the image abscissa, y denotes the image ordinate, and $\bar{f}$ and $\bar{g}$ denote the mean gray values of the high-resolution and low-resolution images, respectively.
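As an illustration of this pair-selection step, the following Python sketch computes r per formula (1) and scans the reference stack; the helper names and the externally supplied `preregister` routine (e.g., an affine alignment) are assumptions for illustration, not the patented implementation itself:

```python
import numpy as np

def pearson_r(f: np.ndarray, g: np.ndarray) -> float:
    """Correlation coefficient r of formula (1)."""
    fd, gd = f - f.mean(), g - g.mean()
    denom = np.sqrt((fd ** 2).sum() * (gd ** 2).sum())
    return float((fd * gd).sum() / denom) if denom > 0 else 0.0

def select_pair(lr_img, hr_stack, preregister):
    """Pick the reference-stack image that best matches one defocused image.

    `preregister` is any preliminary alignment routine (e.g. an affine
    registration), passed in because the patent allows several choices.
    Returns (index of best HR image, registered LR image, best r).
    """
    best_i, best_lr, best_r = -1, None, -np.inf
    for i, hr_img in enumerate(hr_stack):
        lr_reg = preregister(lr_img, hr_img)   # coarse alignment first
        r = pearson_r(hr_img, lr_reg)          # then formula (1)
        if r > best_r:
            best_i, best_lr, best_r = i, lr_reg, r
    return best_i, best_lr, best_r
```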
Then, the pyramid registration algorithm is used to perform fine multi-scale registration on the preliminarily registered OCT en face low-resolution/high-resolution image pairs; the specific implementation is as follows:
The images are first divided into N×N image blocks (N = 5 in this embodiment), and the two-dimensional normalized cross-correlation map of each pair of corresponding blocks is computed.
The cross-correlation map (CCM) between the high-resolution image f and the low-resolution image g is defined as:
$$\mathrm{CCM}(u,v)=\sum_{x}\sum_{y}f(x,y)\,g(x+u,y+v)\qquad(2)$$
where u and v denote the abscissa and ordinate of the CCM, respectively.
The normalized cross-correlation map (nCCM) between the high-resolution image f and the low-resolution image g is computed as:
$$\mathrm{nCCM}(u,v)=\frac{\mathrm{PPMCC}(u,v)-\mathrm{PPMCC}_{\min}}{\mathrm{PPMCC}_{\max}-\mathrm{PPMCC}_{\min}},\qquad \mathrm{PPMCC}=\frac{\mathrm{cov}(f,g)}{\sigma_{f}\,\sigma_{g}}\qquad(3)$$
where PPMCC is the Pearson product-moment correlation coefficient, cov() denotes the covariance function, $\sigma_f$ is the standard deviation of f, $\sigma_g$ is the standard deviation of g, max and min denote taking the maximum and minimum, and $\mathrm{PPMCC}_{\max}$ and $\mathrm{PPMCC}_{\min}$ are the maximum and minimum values of the PPMCC.
The normalized cross-correlation map is then fitted with a two-dimensional Gaussian function, defined as:
$$G(x,y)=A\exp\!\left(-\frac{(x-x_{0})^{2}}{2\sigma_{x}^{2}}-\frac{(y-y_{0})^{2}}{2\sigma_{y}^{2}}\right)\qquad(4)$$
where $x_0$ and $y_0$ denote the sub-pixel displacement values of the input image pair in the two directions, A denotes the gray value, and $\sigma_x$ and $\sigma_y$ denote the standard deviations in the x and y directions, respectively. The peak coordinates of the two-dimensional Gaussian correspond to the displacement of each image block; the 5×5 image blocks yield 5×5 displacements, which are interpolated along x and y to the pixel dimensions of the image to obtain a displacement map of the same size as the original image, used to register the OCT en face low-resolution and high-resolution images. If the maximum gray value in the displacement map (the tolerance) exceeds the set value, steps 2.2.2-2.2.4 in FIG. 3 are repeated. If the tolerance is below the set value, it is checked whether the image block size is smaller than the set minimum block size; if not, N is increased (N = N + 2) and steps 2.2.1-2.2.5 in FIG. 3 are executed. If it is smaller than the set minimum, registration is complete, and the input low-resolution and high-resolution images are registered at sub-pixel level. In this embodiment the tolerance is set to 0.2 and the minimum image block size is 40×40 (pixels×pixels).
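The block-wise sub-pixel estimation can be sketched as follows (one pass only; the tolerance/block-size iteration of FIG. 3 and the interpolation of the 5×5 shifts to a full-size displacement map are omitted, and all helper names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import correlate2d

def gauss2d(xy, A, x0, y0, sx, sy):
    """Two-dimensional Gaussian of formula (4)."""
    x, y = xy
    return A * np.exp(-(x - x0) ** 2 / (2 * sx ** 2) - (y - y0) ** 2 / (2 * sy ** 2))

def block_shift(f_blk, g_blk):
    """Sub-pixel shift of one block pair estimated from its normalized CCM."""
    ccm = correlate2d(f_blk - f_blk.mean(), g_blk - g_blk.mean(), mode="same")  # formula (2)
    nccm = (ccm - ccm.min()) / (ccm.max() - ccm.min() + 1e-12)  # min-max normalization (cf. formula (3))
    yy, xx = np.mgrid[0:nccm.shape[0], 0:nccm.shape[1]]
    py, px = np.unravel_index(nccm.argmax(), nccm.shape)        # coarse peak as initial guess
    popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), nccm.ravel(),
                        p0=[1.0, px, py, 2.0, 2.0], maxfev=2000)  # fit formula (4)
    cy, cx = (nccm.shape[0] - 1) / 2.0, (nccm.shape[1] - 1) / 2.0
    return popt[1] - cx, popt[2] - cy                           # shift relative to zero lag

def shift_grid(f, g, n=5):
    """n x n grid of sub-pixel shifts; interpolating it to the full image size
    gives the displacement map used for the actual warping."""
    bh, bw = f.shape[0] // n, f.shape[1] // n
    grid = np.zeros((n, n, 2))
    for i in range(n):
        for j in range(n):
            grid[i, j] = block_shift(f[i*bh:(i+1)*bh, j*bw:(j+1)*bw],
                                     g[i*bh:(i+1)*bh, j*bw:(j+1)*bw])
    return grid
```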
After image registration, four datasets of OCT en face low-resolution/high-resolution image pairs registered under different defocus amounts are established, each containing 20 image pairs. The image size is 1000×1000 (pixels×pixels), corresponding to a 3 mm × 3 mm field of view. Each large image is divided into 80×80 (pixels×pixels) blocks, discarding blocks with little feature information, and the data are then augmented by flipping, rotation, etc., generating 3000 pairs of 80×80 (pixels×pixels) OCT en face low-resolution/high-resolution image patches. Finally, the training, validation, and test sets are split in a 3:1:1 ratio.
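A minimal sketch of the patching, augmentation, and 3:1:1 split might look like the following; the `min_std` criterion for discarding low-feature blocks is an assumption, since the patent does not state how "blocks with little feature information" are detected:

```python
import numpy as np

def make_patches(lr, hr, size=80, min_std=5.0):
    """Cut a registered 1000x1000 image pair into 80x80 patch pairs,
    discarding nearly featureless patches (low gray-level spread)."""
    pairs = []
    for i in range(0, lr.shape[0] - size + 1, size):
        for j in range(0, lr.shape[1] - size + 1, size):
            lp, hp = lr[i:i+size, j:j+size], hr[i:i+size, j:j+size]
            if hp.std() >= min_std:          # crude "feature information" test
                pairs.append((lp, hp))
    return pairs

def augment(pairs):
    """Flips and 90-degree rotations applied identically to both patches."""
    out = []
    for lp, hp in pairs:
        for k in range(4):
            a, b = np.rot90(lp, k), np.rot90(hp, k)
            out += [(a, b), (np.fliplr(a), np.fliplr(b))]
    return out

def split_311(pairs, seed=0):
    """Random 3:1:1 split into training, validation, and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    tr, va = int(0.6 * len(pairs)), int(0.8 * len(pairs))
    pick = lambda sl: [pairs[i] for i in sl]
    return pick(idx[:tr]), pick(idx[tr:va]), pick(idx[va:])
```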
Step 2: Construct the OCT en face image super-resolution model;
The specific implementation is as follows:
In constructing the OCT en face image super-resolution model, a deep learning method learns the mapping between the OCT en face high-resolution images and the OCT en face low-resolution images of different resolutions, so that digital refocusing of the OCT three-dimensional image can be obtained and the depth of focus of the OCT image extended.
This embodiment uses a generative adversarial network (GAN) as the deep-learning super-resolution model (a convolutional neural network, a capsule network, or a graph neural network could also serve as the image super-resolution model). The GAN consists of a generator and a discriminator: a low-resolution image is fed into the generator to obtain a generated image; the generated image and the high-resolution image are then fed into the discriminator, which predicts the probability that the real high-resolution image is more realistic than the generated image and returns the loss used to update the generator and discriminator weight parameters. Training alternates in this way until the GAN converges.
The discriminator's predicted probability is given by:
$$D(x_{r},x_{g})=\sigma\!\left(C(x_{r})-\mathbb{E}_{x_{g}}\!\left[C(x_{g})\right]\right),\qquad D(x_{g},x_{r})=\sigma\!\left(C(x_{g})-\mathbb{E}_{x_{r}}\!\left[C(x_{r})\right]\right)\qquad(5)$$
where $x_g$ is the generated image, $x_r$ is the high-resolution image, $D(x_r, x_g)$ denotes the probability that the high-resolution image is judged real, $D(x_g, x_r)$ denotes the probability that the super-resolved image is judged more realistic, C(x) is the discriminator output, σ(·) is the sigmoid function, and $\mathbb{E}_{x_g}$ and $\mathbb{E}_{x_r}$ denote averaging over all generated data and all high-resolution data, respectively. The adversarial loss is produced by the discriminator; the adversarial loss function used to update the discriminator, $L_D$, is computed by formula (6), and the adversarial loss function used to update the generator, $L_G$, by formula (7);
$$L_{D}=-\mathbb{E}_{x_{r}}\left[\log D(x_{r},x_{g})\right]-\mathbb{E}_{x_{g}}\left[\log\left(1-D(x_{g},x_{r})\right)\right]\qquad(6)$$
$$L_{G}=-\mathbb{E}_{x_{r}}\left[\log\left(1-D(x_{r},x_{g})\right)\right]-\mathbb{E}_{x_{g}}\left[\log D(x_{g},x_{r})\right]\qquad(7)$$
The pixel loss function is computed as shown in formula (8):
$$L_{\mathrm{pixel}}=\frac{1}{wh}\sum_{x=1}^{w}\sum_{y=1}^{h}\left|x_{g}(x,y)-x_{r}(x,y)\right|\qquad(8)$$
where w and h denote the total numbers of pixels along the width and height of the image, respectively, and x and y denote the image abscissa and ordinate. In this embodiment both w and h equal 80.
The feature loss function measures the semantic difference between images using a pre-trained image classification network; the pre-trained VGG19 network described by K. Simonyan et al. (Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations. 2015, 1-14.) is used to extract perceptual-domain features. VGG19 is a convolutional neural network consisting of 16 convolutional layers and 3 fully connected layers. The feature loss function is defined by formula (9):
$$L_{\mathrm{feature}}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\left(\phi_{i,j}(x_{r})_{x,y}-\phi_{i,j}(x_{g})_{x,y}\right)^{2}\qquad(9)$$
where $\phi_{i,j}$ denotes the feature map output by the j-th convolutional layer after the i-th pooling layer in the VGG19 network, and $H_{i,j}$ and $W_{i,j}$ are the height and width of the feature map. Here, the feature loss function is defined on one such feature map $\phi_{i,j}$.
In this embodiment the adversarial loss function serves as the loss function of the discriminator; the loss function of the generator comprises the adversarial loss, the pixel loss, and the feature loss, and is computed as in formula (10):
$$L=m\,L_{G}+\theta\,L_{\mathrm{pixel}}+\eta\,L_{\mathrm{feature}}\qquad(10)$$
where the weighting parameters m, θ, and η control the trade-off among the three loss terms. In this embodiment m = 0.01, θ = 1, and η = 0.005.
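A hedged PyTorch sketch of how losses (5)-(10) combine is given below; the relativistic-average form, the L1 pixel loss, the VGG19 slice index, and the grayscale-to-RGB channel repeat are assumptions consistent with the equations above, not a verbatim reproduction of the patented code:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

VGG_CUT = 35  # assumed slice selecting one phi_{i,j} feature map of VGG19
vgg_features = vgg19(pretrained=True).features[:VGG_CUT].eval()  # legacy flag; newer versions use weights=...
for p in vgg_features.parameters():
    p.requires_grad_(False)

def d_loss(c_real, c_fake):
    """Relativistic average discriminator loss, formula (6); c_* are raw C(x) scores."""
    d_rf = torch.sigmoid(c_real - c_fake.mean())
    d_fr = torch.sigmoid(c_fake - c_real.mean())
    return -torch.log(d_rf + 1e-8).mean() - torch.log(1 - d_fr + 1e-8).mean()

def g_adv_loss(c_real, c_fake):
    """Relativistic average generator loss, formula (7)."""
    d_rf = torch.sigmoid(c_real - c_fake.mean())
    d_fr = torch.sigmoid(c_fake - c_real.mean())
    return -torch.log(1 - d_rf + 1e-8).mean() - torch.log(d_fr + 1e-8).mean()

def g_total_loss(fake, real, c_real, c_fake, m=0.01, theta=1.0, eta=0.005):
    """Weighted generator objective of formula (10)."""
    pixel = F.l1_loss(fake, real)                       # formula (8); L1 form assumed
    feat_fake = vgg_features(fake.repeat(1, 3, 1, 1))   # grayscale -> 3 channels assumed
    feat_real = vgg_features(real.repeat(1, 3, 1, 1))
    feature = F.mse_loss(feat_fake, feat_real)          # formula (9)
    return m * g_adv_loss(c_real, c_fake) + theta * pixel + eta * feature
```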
The generator network of the GAN in this embodiment is structured as shown in FIG. 4 and comprises a convolution module, a feature extraction module, and a feature reconstruction module;
The convolution module comprises one convolutional layer (Conv);
The feature extraction module comprises 64 residual dense blocks (RRDB Blocks). The structure of each RRDB Block is shown in FIG. 5: each consists of 23 densely connected blocks, with skip connections added between the densely connected blocks; each densely connected block consists of five convolutional layers (Conv) and four LReLU layers, and within each block the feature maps of every layer are concatenated with all preceding features of the same scale.
The feature reconstruction module consists of two convolutional layers (Conv) and one LReLU layer, and reconstructs the learned features into a super-resolved image.
The kernel size (k), number of kernels (n), and stride (s) of each convolutional layer in the generator network are annotated in FIGS. 4 and 5.
The discriminator of this embodiment is structured as shown in FIG. 6 and contains 8 convolutional layers (Conv) with 3×3 filter kernels. Except for the first, each convolutional layer is followed by a batch normalization (BN) layer and an LReLU layer. The convolutional layers are followed by two linear layers and an LReLU layer. The kernel size (k), number (n), and stride (s) of each convolutional layer are annotated in FIG. 6.
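A corresponding sketch of the FIG. 6 discriminator follows; the channel widths, strides, and the 1024-unit linear layer are assumptions, since only the layer types and counts are given in the text:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Eight 3x3 Conv layers (BN + LReLU after all but the first),
    then two linear layers with an LReLU between them."""
    def __init__(self, in_ch=1, base=64, patch=80):
        super().__init__()
        layers, ch = [nn.Conv2d(in_ch, base, 3, 1, 1), nn.LeakyReLU(0.2, True)], base
        for i in range(7):                       # the remaining 7 conv layers
            out = ch * 2 if i % 2 else ch        # widen every other layer
            stride = 2 if i % 2 == 0 else 1      # downsample every other layer
            layers += [nn.Conv2d(ch, out, 3, stride, 1),
                       nn.BatchNorm2d(out), nn.LeakyReLU(0.2, True)]
            ch = out
        self.features = nn.Sequential(*layers)
        with torch.no_grad():                    # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, in_ch, patch, patch)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_flat, 1024),
            nn.LeakyReLU(0.2, True), nn.Linear(1024, 1))

    def forward(self, x):
        return self.classifier(self.features(x))  # raw score C(x)
```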
Using the four datasets of OCT en face low-resolution/high-resolution image pairs registered under different defocus amounts, a generative adversarial network is trained separately for each, yielding four OCT en face image super-resolution models under the four defocus amounts. Each model is optimized with the Adam algorithm, with hyperparameters α = 0, β1 = 0.9, and β2 = 0.99. The number of training iterations is set to 150,000. Training and testing of the GANs are performed with the deep learning framework PyTorch.
Step 3: Realize digital refocusing of OCT images based on the constructed OCT en face image super-resolution models; the specific implementation is as follows:
For the OCT three-dimensional image data to be processed, first determine the focal position of the incident beam, and divide the data according to the focal position, the original depth of focus, and the defocus amounts, obtaining one OCT en face high-resolution image sequence within the original depth of focus and four groups each (eight groups in total) of OCT en face low-resolution image sequences before and after the depth-of-focus range along the depth direction; the OCT en face high-resolution images do not require super-resolution processing. The defocus amounts corresponding to the four groups of low-resolution sequences before and after the depth-of-focus range are 30-90 μm, 90-150 μm, 150-210 μm, and above 210 μm, respectively. The four OCT en face image super-resolution models constructed for the corresponding defocus amounts (60 μm, 120 μm, 180 μm, and 240 μm) are applied to the four groups of low-resolution sequences before and after the depth-of-focus range to perform super-resolution, yielding four groups of OCT en face super-resolution image sequences before and after the depth-of-focus range. This refocuses the images and improves the lateral resolution of the OCT en face images outside the original depth of focus. Finally, the unprocessed OCT en face high-resolution sequence within the depth-of-focus range and the eight processed OCT en face super-resolution sequences are restacked, before and after the depth-of-focus range along the depth direction, to form a refocused OCT three-dimensional image with extended depth of focus.
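The refocusing pipeline of Step 3 can be sketched as follows; the band indexing and tensor shapes are assumptions for illustration:

```python
import numpy as np
import torch

def refocus_volume(volume, focus_z, dof_px, band_px, models, device="cpu"):
    """Digitally refocus an OCT volume of shape (depth, H, W).

    focus_z : depth index of the beam focus
    dof_px  : half-width of the original depth of focus, in pixels
    band_px : thickness of each defocus band, in pixels
    models  : trained generators ordered by increasing defocus; each is
              assumed to map a (1, 1, H, W) tensor to a (1, 1, H, W) tensor
    """
    models = [m.to(device).eval() for m in models]
    out = volume.astype(np.float32)
    for z in range(out.shape[0]):
        d = abs(z - focus_z)
        if d <= dof_px:
            continue                                  # within the original DOF: keep as-is
        band = min((d - dof_px) // band_px, len(models) - 1)
        with torch.no_grad():
            en_face = torch.from_numpy(out[z])[None, None].to(device)
            out[z] = models[int(band)](en_face)[0, 0].cpu().numpy()
    return out                                        # restacked, refocused volume
```

For example, with a 60 μm depth of focus sampled at known axial pixel spacing, `dof_px` and `band_px` would be chosen so that the four bands match the 30-90 μm, 90-150 μm, 150-210 μm, and >210 μm ranges above.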
FIGS. 7, 8, and 9 show the OCT en face digital refocusing results of the present invention. FIG. 7 is a zebrafish OCT en face low-resolution image at a defocus of 240 μm; FIG. 8 is an OCT en face high-resolution image of the same sample acquired within the depth of focus; FIG. 9 is the digitally refocused (super-resolved) image of FIG. 7 output by the present invention, in which clear zebrafish structures are visible. This shows that the method of the present invention improves the lateral resolution of defocused OCT en face images and achieves depth-of-focus extension of OCT images.
In implementing the present invention, datasets of OCT en face high-resolution/low-resolution image pairs are first constructed with the image registration algorithm according to the defocus settings; OCT en face image super-resolution models are then constructed, and the mapping between OCT en face high-resolution images and low-resolution images of different resolutions is learned by deep learning. Digital refocusing of the OCT three-dimensional image is obtained from the super-resolution models under the different defocus amounts, extending the depth of focus of OCT images. The invention achieves depth-of-focus extension of optical coherence tomography images without increasing the hardware complexity of the OCT system, and is fast and highly portable.
The above embodiments illustrate the technical concept and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, and they do not limit its scope of protection. All equivalent changes or modifications made according to the essence of the present invention shall fall within its scope of protection.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111428250.1A CN113837947B (en) | 2021-11-29 | 2021-11-29 | Processing method for obtaining optical coherence tomography large focal depth image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111428250.1A CN113837947B (en) | 2021-11-29 | 2021-11-29 | Processing method for obtaining optical coherence tomography large focal depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837947A CN113837947A (en) | 2021-12-24 |
CN113837947B true CN113837947B (en) | 2022-05-20 |
Family
ID=78971849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111428250.1A Active CN113837947B (en) | 2021-11-29 | 2021-11-29 | Processing method for obtaining optical coherence tomography large focal depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837947B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972033B (en) * | 2022-06-07 | 2024-06-14 | 南开大学 | Self-supervision method for improving longitudinal resolution of optical coherence tomography image |
CN116309899A (en) * | 2022-12-05 | 2023-06-23 | 深圳英美达医疗技术有限公司 | Three-dimensional imaging method, system, electronic device and readable storage medium |
CN117372274B (en) * | 2023-10-31 | 2024-08-23 | 珠海横琴圣澳云智科技有限公司 | Scanned image refocusing method, apparatus, electronic device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107233069A (en) * | 2017-07-11 | 2017-10-10 | 中国科学院上海光学精密机械研究所 | Increase the optical coherence tomography system of focal depth range |
CN110070601A (en) * | 2017-12-18 | 2019-07-30 | Fei 公司 | Micro-image is rebuild and the methods, devices and systems of the long-range deep learning of segmentation |
CN113269677A (en) * | 2021-05-20 | 2021-08-17 | 中国人民解放军火箭军工程大学 | HSI super-resolution reconstruction method based on unsupervised learning and related equipment |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7324214B2 (en) * | 2003-03-06 | 2008-01-29 | Zygo Corporation | Interferometer and method for measuring characteristics of optically unresolved surface features |
JP5448353B2 (en) * | 2007-05-02 | 2014-03-19 | キヤノン株式会社 | Image forming method using optical coherence tomography and optical coherence tomography apparatus |
CN101703389A (en) * | 2009-11-03 | 2010-05-12 | 南开大学 | Method for improving focal depth range of optical coherence tomography system |
NL2017882A (en) * | 2015-12-17 | 2017-06-26 | Asml Netherlands Bv | Optical metrology of lithographic processes using asymmetric sub-resolution features to enhance measurement |
CN106137134B (en) * | 2016-08-08 | 2023-05-12 | 浙江大学 | Multi-angle composite blood flow imaging method and system |
CN207071084U (en) * | 2017-02-17 | 2018-03-06 | 浙江大学 | A kind of high-resolution Diode laser OCT image system based on path encoding |
CN107945110A (en) * | 2017-11-17 | 2018-04-20 | 杨俊刚 | A kind of blind depth super-resolution for light field array camera calculates imaging method |
CN111881925B (en) * | 2020-08-07 | 2023-04-18 | 吉林大学 | Significance detection method based on camera array selective light field refocusing |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107233069A (en) * | 2017-07-11 | 2017-10-10 | 中国科学院上海光学精密机械研究所 | Increase the optical coherence tomography system of focal depth range |
CN110070601A (en) * | 2017-12-18 | 2019-07-30 | Fei 公司 | Micro-image is rebuild and the methods, devices and systems of the long-range deep learning of segmentation |
CN113269677A (en) * | 2021-05-20 | 2021-08-17 | 中国人民解放军火箭军工程大学 | HSI super-resolution reconstruction method based on unsupervised learning and related equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113837947A (en) | 2021-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113837947B (en) | Processing method for obtaining optical coherence tomography large focal depth image | |
CN106846463B (en) | Three-dimensional reconstruction method and system of microscopic image based on deep learning neural network | |
CN112767466B (en) | A light field depth estimation method based on multimodal information | |
CN113762460B (en) | Multimode optical fiber transmission image migration reconstruction algorithm based on numerical value speckle | |
CN113313176B (en) | A point cloud analysis method based on dynamic graph convolutional neural network | |
TWI805282B (en) | Methods and apparatuses of depth estimation from focus information | |
CN114359503A (en) | Oblique photography modeling method based on unmanned aerial vehicle | |
CN113936047B (en) | Dense depth map generation method and system | |
CN113158487A (en) | Wavefront phase difference detection method based on long-short term memory depth network | |
Shen et al. | Deeper super-resolution generative adversarial network with gradient penalty for sonar image enhancement | |
WO2024199439A1 (en) | Self-supervised structured light microscopy reconstruction method and system based on pixel rearrangement | |
Wang et al. | Accurate 3D reconstruction of single-frame speckle-encoded textureless surfaces based on densely connected stereo matching network | |
CN110956601B (en) | Infrared image fusion method and device based on multi-sensor mode coefficients and computer readable storage medium | |
CN114897693A (en) | Microscopic Image Super-Resolution Method Based on Mathematical Imaging Theory and Generative Adversarial Networks | |
CN113762484B (en) | Multi-focus image fusion method based on deep distillation | |
CN118470219B (en) | Multi-view three-dimensional reconstruction method and system based on calibration-free image | |
CN119048446B (en) | A method for evaluating the imaging quality of pathological images | |
CN111524078B (en) | Dense network-based microscopic image deblurring method | |
CN112070675B (en) | A graph-based normalized light-field super-resolution method and light-field microscopy device | |
CN112785517A (en) | Image defogging method and device based on high-resolution representation | |
Shin et al. | LoGSRN: Deep super resolution network for digital elevation model | |
CN113129237B (en) | Depth image deblurring method based on multi-scale fusion coding network | |
CN116109768A (en) | Super-resolution imaging method and system for Fourier light field microscope | |
CN114998405A (en) | Digital human body model construction method based on image drive | |
CN110766797B (en) | A GAN-based 3D map inpainting method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |