CN114092464B - OCT image processing method and device - Google Patents
- Publication number: CN114092464B (application CN202111435331.4A)
- Authority: CN (China)
- Prior art keywords: image, target, scan image, pixel points, line
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0012: Image analysis; inspection of images; biomedical image inspection
- G06T 7/187: Segmentation; edge detection involving region growing, region merging, or connected component labelling
- G06T 11/008: 2D image generation; reconstruction from projections; specific post-processing after tomographic reconstruction (e.g. voxelisation, metal artifact correction)
- G06T 2207/10101: Image acquisition modality; tomographic images; optical tomography; optical coherence tomography [OCT]
- G06T 2207/20076: Special algorithmic details; probabilistic image processing
- G06T 2207/20081: Special algorithmic details; training; learning
- G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
- G06T 2207/30041: Subject of image; biomedical image processing; eye; retina; ophthalmic
Description
Technical Field
The present invention relates to the technical field of image processing, and in particular to an OCT image processing method and device.
Background Art
With the rapid development of computing and medical technology, OCT (Optical Coherence Tomography) has been widely adopted in diagnostic equipment for fundus diseases, where it is of great significance for disease detection, experimentation, and the preparation of teaching materials. OCT is a high-sensitivity, high-resolution, high-speed, non-invasive tomographic imaging technique that exploits the coherence of light to scan the fundus. Each individual depth scan is called an A-scan, and a group of adjacent, consecutive A-scans combined together forms a B-scan image. The B-scan is the familiar OCT cross-sectional image (and is what is usually meant by "an OCT image"); it is the principal OCT imaging mode used in medical diagnosis.
In practical applications, the diagnosis of fundus diseases by diagnostic equipment usually relies on the layer-segmentation results of target features obtained by layering OCT images, such as retinal layering results. Practice has shown, however, that OCT image layering methods based on traditional techniques such as histogram analysis, boundary-based layering, and region-based layering, while algorithmically robust, are slow to produce those layering results. It is therefore particularly important to provide an OCT image layering algorithm that improves layering efficiency while preserving algorithmic performance.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an OCT image processing method and device which, by incorporating a deep neural network model, improve the layering efficiency of OCT images and the accuracy of the layering results while preserving the performance of the OCT image layering algorithm.
To solve the above technical problem, a first aspect of the present invention discloses an OCT image processing method, the method comprising:
acquiring a B-Scan image corresponding to a target feature;
performing image layering processing on the B-Scan image using a preset image processing algorithm to obtain an initial layering result;
inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, the output result comprising a probability for each pixel in a target region of the B-Scan image, where each pixel's probability indicates the likelihood that the pixel lies on the boundary between two adjacent layers in the initial layering result, and the target region is the region containing the target feature; and
determining, from the probabilities of all the pixels, the inter-layer boundary information corresponding to the target feature in the B-Scan image.
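The four steps above can be sketched as a single orchestration function. This is an illustrative sketch only: `classical_layering`, `boundary_model`, and `fuse_probs` are hypothetical stand-ins for the preset image processing algorithm, the pre-trained deep neural network model, and the probability-to-boundary computation, none of which are named concretely in the disclosure.

```python
import numpy as np

def process_oct_bscan(bscan, classical_layering, boundary_model, fuse_probs):
    """End-to-end sketch of the four claimed steps (hypothetical API).

    classical_layering(bscan) -> initial layer lines        (step 2)
    boundary_model(bscan)     -> per-pixel probability map  (step 3)
    fuse_probs(prob_map)      -> per-column boundary rows   (step 4)
    """
    initial_layers = classical_layering(bscan)  # preset image processing algorithm
    prob_map = boundary_model(bscan)            # pre-trained deep neural network
    boundary = fuse_probs(prob_map)             # boundary info from probabilities
    return initial_layers, boundary
```

In use, `fuse_probs` would implement the per-column normalization and dot-product rule described later in this disclosure; a plain per-column argmax works as a stand-in for testing.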
In an optional embodiment of the first aspect of the present invention, performing image layering processing on the B-Scan image using the preset image processing algorithm to obtain the initial layering result comprises:
filtering the B-Scan image with a preset filtering function to obtain a filtered image;
computing the positive gradient of the filtered image along the vertical image direction and constructing a first cost function from the positive gradient;
determining, using a predetermined path algorithm and the first cost function, a first minimum-cost path from the left edge to the right edge of the filtered image, yielding a first layer line;
determining, using the path algorithm and the first cost function, a second minimum-cost path from the left edge to the right edge of the filtered image, yielding a second layer line;
computing the negative gradient of the filtered image along the vertical image direction and constructing a second cost function from the negative gradient;
determining a search region, the search region being the area below whichever of the first layer line and the second layer line lies lower in the image;
determining, using the path algorithm and the second cost function, a third minimum-cost path from the left edge to the right edge of the search region, and applying a smoothing filter to the third minimum-cost path, yielding a third layer line; and
taking the first layer line, the second layer line, and the third layer line as the initial layering result.
Further, before determining the second minimum-cost path from the left edge to the right edge of the filtered image using the path algorithm and the first cost function to yield the second layer line, the method further comprises:
marking the first minimum-cost path as unreachable in the filtered image.
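The disclosure names only "a predetermined path algorithm". One common concrete choice for a left-to-right minimum-cost path through a cost image is seam-carving-style dynamic programming; the sketch below assumes this, along with a one-row-per-column step constraint, neither of which is specified in the patent.

```python
import numpy as np

def min_cost_path(cost):
    """Left-to-right minimum-cost path through a 2-D cost image.

    Each step moves one column to the right and at most one row up or
    down (a seam-carving-style assumption). Returns the path's row
    index in every column.
    """
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost table
    back = np.zeros((h, w), dtype=int)       # backtracking pointers
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(0, r - 1), min(h, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            back[r, c] = lo + k
            acc[r, c] += prev[k]
    # trace back from the cheapest pixel in the last column
    path = np.zeros(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(w - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

With a cost image built from the (negated) vertical gradient, the cheapest path hugs the strongest edge; marking the first path's pixels as unreachable (e.g. setting their cost to infinity) before the second search keeps the second layer line from retracing the first.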
In an optional embodiment of the first aspect of the present invention, before inputting the B-Scan image into the pre-trained deep neural network model to obtain the output result, the method further comprises:
determining the target region of the B-Scan image that contains the target feature;
wherein determining the target region of the B-Scan image that contains the target feature comprises:
shifting whichever of the first layer line and the second layer line lies higher in the image upward by a first preset distance along the vertical image direction to obtain a first boundary line;
shifting the third layer line downward by a second preset distance along the vertical image direction to obtain a second boundary line; and
determining the area below the first boundary line and above the second boundary line as the target region of the B-Scan image that contains the target feature.
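The target-region construction above (offset the upper line up by a first preset distance, the lower line down by a second preset distance, and keep what lies between) can be sketched as a boolean mask. The clipping to the image border is an assumption added for robustness; rows grow downward, so "up" means smaller row indices.

```python
import numpy as np

def target_region_mask(shape, upper_line, lower_line, d1, d2):
    """Boolean mask of the region between (upper_line - d1) and (lower_line + d2).

    upper_line / lower_line are per-column row indices of the layer lines;
    d1, d2 are the first and second preset vertical offsets.
    """
    h, w = shape
    top = np.clip(np.asarray(upper_line) - d1, 0, h - 1)       # first boundary line
    bottom = np.clip(np.asarray(lower_line) + d2, 0, h - 1)    # second boundary line
    rows = np.arange(h)[:, None]                               # (h, 1) row indices
    return (rows >= top[None, :]) & (rows <= bottom[None, :])
```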
In an optional embodiment of the first aspect of the present invention, after inputting the B-Scan image into the pre-trained deep neural network model to obtain the output result, and before determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels, the method further comprises:
judging whether any of the pixels is a target pixel falling outside the target region;
when it is judged that no pixel falls outside the target region, triggering the operation of determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels; and
when it is judged that some of the pixels are target pixels falling outside the target region, performing a probability update operation on all target pixels falling outside the target region so as to update their probabilities, and then triggering the operation of determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels.
In an optional embodiment of the first aspect of the present invention, judging whether any of the pixels is a target pixel falling outside the target region comprises:
judging, for each column of pixels, whether that column contains a target pixel falling outside the target region;
and performing the probability update operation on all target pixels falling outside the target region so as to update their probabilities comprises:
multiplying, for each column of pixels that contains target pixels falling outside the target region, the probability of each such target pixel by a preset value corresponding to that target pixel to obtain a product, and updating the probability of that target pixel according to the product.
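A minimal sketch of the probability update, assuming a single scalar `factor` as the "preset value" (the patent allows a different preset value per target pixel, and does not fix its magnitude):

```python
import numpy as np

def suppress_outside(prob_map, region_mask, factor=0.1):
    """Down-weight out-of-region boundary probabilities.

    prob_map    : (h, w) per-pixel boundary probabilities from the model
    region_mask : (h, w) boolean mask of the target region
    factor      : illustrative preset multiplier for out-of-region pixels
    """
    out = prob_map.astype(float).copy()
    out[~region_mask] *= factor   # multiply each out-of-region probability
    return out
```

Scaling rather than zeroing keeps the later per-column normalization well defined even for columns that fall entirely outside the region.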
In an optional embodiment of the first aspect of the present invention, determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels comprises:
normalizing, for each column of pixels, the probability distribution of that column to obtain the column's normalized probability distribution;
taking, for each column of pixels, the dot product of the column's normalized probability distribution with the column's row-number distribution to obtain the column's inter-layer distribution result; and
determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the inter-layer distribution results of all the columns.
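The per-column normalization followed by a dot product with the row numbers is a soft argmax: each column's boundary row is the expected row index under that column's normalized probability distribution, giving a sub-pixel boundary position. A minimal sketch (`eps` is an assumption added to guard against all-zero columns):

```python
import numpy as np

def boundary_from_probs(prob_map, eps=1e-12):
    """Per-column expected row index (soft argmax) of a probability map."""
    col_sums = prob_map.sum(axis=0, keepdims=True)
    norm = prob_map / np.clip(col_sums, eps, None)       # normalize each column
    rows = np.arange(prob_map.shape[0], dtype=float)[:, None]  # row-number vector
    return (norm * rows).sum(axis=0)                     # dot product per column
```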
In an optional embodiment of the first aspect of the present invention, the deep neural network model is trained as follows:
acquiring a set of annotated B-Scan images, where the annotation of each B-Scan image in the set comprises the label information of the target feature and the boundary information of the target feature;
splitting the B-Scan image set into a training set and a test set, the training set being used to train the deep neural network model and the test set being used to verify the reliability of the trained model;
applying target processing operations to all B-Scan images in the training set to obtain processed results, the target processing operations comprising at least one of vertical shifting, horizontal flipping, vertical flipping, and contrast adjustment;
feeding the processed results as input data into a predetermined deep neural network model to obtain output results;
computing a joint loss from the output results, the B-Scan images in the training set, and the boundary information to obtain a joint loss value; and
back-propagating the joint loss value through the deep neural network model and iterating the training for a preset number of epochs to obtain the trained deep neural network model;
wherein the test set is used to verify the reliability of the trained deep neural network model.
In an optional embodiment of the first aspect of the present invention, the target feature is a retinal feature.
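The augmentation step (vertical shift, horizontal flip, vertical flip, contrast adjustment) can be sketched as below. The shift range, the contrast gain range, and the way the labeled boundary rows are transformed are illustrative assumptions; note also that `np.roll` wraps shifted rows around rather than padding them.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(bscan, boundary):
    """Apply one random target processing operation from the patent's list.

    bscan    : (h, w) image
    boundary : (w,) labeled boundary row per column, transformed to match
    """
    h, w = bscan.shape
    op = rng.integers(4)
    if op == 0:                                  # vertical shift (wraps at edges)
        dy = int(rng.integers(-h // 8, h // 8 + 1))
        return np.roll(bscan, dy, axis=0), boundary + dy
    if op == 1:                                  # left-right flip
        return bscan[:, ::-1], boundary[::-1]
    if op == 2:                                  # up-down flip
        return bscan[::-1, :], (h - 1) - boundary
    gain = rng.uniform(0.8, 1.2)                 # contrast adjustment
    return np.clip(bscan * gain, 0, 255), boundary
```

Transforming the boundary labels together with the image keeps the supervision consistent with the augmented input, which is what lets the joint loss be computed against the same boundary information.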
A second aspect of the present invention discloses an OCT image processing device, the device comprising:
an acquisition module configured to acquire a B-Scan image corresponding to a target feature;
a first processing module configured to perform image layering processing on the B-Scan image using a preset image processing algorithm to obtain an initial layering result;
a second processing module configured to input the B-Scan image into a pre-trained deep neural network model to obtain an output result, the output result comprising a probability for each pixel in a target region of the B-Scan image, where each pixel's probability indicates the likelihood that the pixel lies on the boundary between two adjacent layers in the initial layering result, and the target region is the region containing the target feature; and
a first determination module configured to determine, from the probabilities of all the pixels, the inter-layer boundary information corresponding to the target feature in the B-Scan image.
In an optional embodiment of the second aspect of the present invention, the first processing module comprises:
a filtering submodule configured to filter the B-Scan image with a preset filtering function to obtain a filtered image;
a function construction submodule configured to compute the positive gradient of the filtered image along the vertical image direction and to construct a first cost function from the positive gradient;
a first determination submodule configured to determine, using a predetermined path algorithm and the first cost function, a first minimum-cost path from the left edge to the right edge of the filtered image, yielding a first layer line;
the first determination submodule being further configured to determine, using the path algorithm and the first cost function, a second minimum-cost path from the left edge to the right edge of the filtered image, yielding a second layer line;
the function construction submodule being further configured to compute the negative gradient of the filtered image along the vertical image direction and to construct a second cost function from the negative gradient;
a second determination submodule configured to determine a search region, the search region being the area below whichever of the first layer line and the second layer line lies lower in the image;
the first determination submodule being further configured to determine, using the path algorithm and the second cost function, a third minimum-cost path from the left edge to the right edge of the search region, and to apply a smoothing filter to the third minimum-cost path, yielding a third layer line;
the second determination submodule being further configured to take the first layer line, the second layer line, and the third layer line as the initial layering result;
and the first processing module further comprising:
a marking submodule configured to mark the first minimum-cost path as unreachable in the filtered image before the first determination submodule determines the second minimum-cost path from the left edge to the right edge of the filtered image using the path algorithm and the first cost function to yield the second layer line.
In an optional embodiment of the second aspect of the present invention, the device further comprises:
a second determination module configured to determine, before the second processing module inputs the B-Scan image into the pre-trained deep neural network model to obtain the output result, the target region of the B-Scan image that contains the target feature;
wherein the second determination module determines the target region of the B-Scan image that contains the target feature by:
shifting whichever of the first layer line and the second layer line lies higher in the image upward by a first preset distance along the vertical image direction to obtain a first boundary line;
shifting the third layer line downward by a second preset distance along the vertical image direction to obtain a second boundary line; and
determining the area below the first boundary line and above the second boundary line as the target region of the B-Scan image that contains the target feature.
In an optional embodiment of the second aspect of the present invention, the device further comprises:
a judgment module configured to judge, after the second processing module inputs the B-Scan image into the pre-trained deep neural network model to obtain the output result and before the first determination module determines the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels, whether any of the pixels is a target pixel falling outside the target region, and, when it judges that no pixel falls outside the target region, to trigger the first determination module to perform the operation of determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels; and
a third processing module configured to, when the judgment module judges that some of the pixels are target pixels falling outside the target region, perform a probability update operation on all target pixels falling outside the target region so as to update their probabilities, and then trigger the first determination module to perform the operation of determining the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels.
In an optional embodiment of the second aspect of the present invention, the judgment module judges whether any of the pixels is a target pixel falling outside the target region by:
judging, for each column of pixels, whether that column contains a target pixel falling outside the target region;
and the third processing module performs the probability update operation on all target pixels falling outside the target region so as to update their probabilities by:
multiplying, for each column of pixels that contains target pixels falling outside the target region, the probability of each such target pixel by a preset value corresponding to that target pixel to obtain a product, and updating the probability of that target pixel according to the product.
作为一种可选的实施方式,在本发明第二方面中,所述第一确定模块根据所有所述像素点对应的概率,确定所述B-Scan图像中所述目标特征对应的层间分界信息的方式具体包括:As an optional implementation, in the second aspect of the present invention, the first determination module determines the inter-layer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixel points in a manner specifically including:
对于所有所述像素点中的每列像素点,对该列像素点的概率分布执行归一化处理,得到该列像素点的归一化概率分布;For each column of pixel points among all the pixel points, normalizing the probability distribution of the column of pixel points to obtain a normalized probability distribution of the column of pixel points;
对于所有所述像素点中的每列像素点,将该列像素点的归一化概率分布与该列像素点对应的行号分布进行点积运算,得到该列像素点对应的层间分布结果;For each column of pixels among all the pixels, a dot product operation is performed on the normalized probability distribution of the column of pixels and the row number distribution corresponding to the column of pixels to obtain an inter-layer distribution result corresponding to the column of pixels;
根据所有所述像素点中的每列像素点对应的层间分布结果,确定所述B-Scan图像中所述目标特征对应的层间分界信息。According to the inter-layer distribution result corresponding to each column of pixel points in all the pixel points, the inter-layer boundary information corresponding to the target feature in the B-Scan image is determined.
作为一种可选的实施方式,在本发明第二方面中,所述深度神经网络模型通过以下方式训练得到:As an optional implementation, in the second aspect of the present invention, the deep neural network model is trained in the following manner:
获取包括标注信息的B-Scan图像集合,所述B-Scan图像集合中每个B-Scan图像对应的所述标注信息包括所述目标特征对应的标签信息,以及所述目标特征对应的分界信息;Acquire a B-Scan image set including annotation information, wherein the annotation information corresponding to each B-Scan image in the B-Scan image set includes label information corresponding to the target feature and boundary information corresponding to the target feature;
划分所述B-Scan图像集合,得到训练集和测试集,所述训练集用于训练深度神经网络模型,所述测试集用于验证训练好的所述深度神经网络模型的可靠性;Dividing the B-Scan image set to obtain a training set and a test set, wherein the training set is used to train a deep neural network model, and the test set is used to verify the reliability of the trained deep neural network model;
对所述训练集所包括的所有B-Scan图像执行目标处理操作,得到处理结果,所述目标处理操作包括上下移动处理、左右翻转处理、上下反转处理以及对比度调整处理中的至少一种;Performing a target processing operation on all B-Scan images included in the training set to obtain a processing result, wherein the target processing operation includes at least one of vertical translation, horizontal flipping, vertical flipping, and contrast adjustment;
将所述处理结果作为输入数据输入到预先确定的深度神经网络模型中,得到输出结果;Inputting the processing result as input data into a predetermined deep neural network model to obtain an output result;
根据所述输出结果、所述训练集所包括的B-Scan图像以及所述分界信息,分析计算联合损失,得到联合损失值;Analyze and calculate the joint loss according to the output result, the B-Scan image included in the training set, and the boundary information to obtain a joint loss value;
将所述联合损失值在所述深度神经网络模型中进行反向传播,并进行预设周期长度的迭代训练,得到训练好的深度神经网络模型;Back-propagating the joint loss value in the deep neural network model, and performing iterative training of a preset cycle length to obtain a trained deep neural network model;
其中,所述测试集用于验证训练好的所述深度神经网络模型的可靠性。The test set is used to verify the reliability of the trained deep neural network model.
作为一种可选的实施方式,在本发明第二方面中,所述目标特征为视网膜特征。As an optional embodiment, in the second aspect of the present invention, the target feature is a retinal feature.
本发明第三方面公开了另一种OCT图像的处理装置,所述装置包括:The third aspect of the present invention discloses another OCT image processing device, the device comprising:
存储有可执行程序代码的存储器;A memory storing executable program code;
与所述存储器耦合的处理器;a processor coupled to the memory;
与处理器耦合的输入接口以及输出接口;an input interface and an output interface coupled to the processor;
所述处理器调用所述存储器中存储的所述可执行程序代码,执行本发明第一方面公开的OCT图像的处理方法。The processor calls the executable program code stored in the memory to execute the OCT image processing method disclosed in the first aspect of the present invention.
与现有技术相比,本发明实施例具有以下有益效果:Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
本发明实施例中,提供了一种OCT图像的处理方法及装置,该方法包括:获取目标特征对应的B-Scan图像,通过预设的图像处理算法对B-Scan图像执行图像分层处理,得到初始分层结果,再将B-Scan图像输入到预先训练好的深度神经网络模型中,得到输出结果,输出结果包括B-Scan图像的目标区域对应的每个像素点对应的概率,每个像素点对应的概率用于表示每个像素点属于初始分层结果所包括的某两个相邻分层的层间分界的可能性,目标区域为包括目标特征的区域,根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息。可见,实施本发明能够智能化获取包括目标特征的B-Scan图像,有利于提高B-Scan图像的分类效率;还能够结合深度神经网络模型以及该初始分层结果,智能化确定B-Scan图像中目标特征对应的层间分界信息,有利于提高图像分层效率以及图像分层结果的准确率。In an embodiment of the present invention, a method and a device for processing an OCT image are provided. The method includes: obtaining a B-Scan image corresponding to a target feature; performing image stratification processing on the B-Scan image through a preset image processing algorithm to obtain an initial stratification result; inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, where the output result includes a probability for each pixel in the target region of the B-Scan image, the probability of a pixel indicates the likelihood that the pixel lies on the inter-layer boundary between some two adjacent layers included in the initial stratification result, and the target region is the region including the target feature; and determining, according to the probabilities of all pixels, the inter-layer boundary information corresponding to the target feature in the B-Scan image. It can be seen that implementing the present invention can intelligently obtain a B-Scan image including the target feature, which helps improve the classification efficiency of B-Scan images; it can also combine the deep neural network model with the initial stratification result to intelligently determine the inter-layer boundary information corresponding to the target feature in the B-Scan image, which helps improve both the stratification efficiency and the accuracy of the stratification result.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from these drawings without inventive effort.
图1是本发明实施例公开的一种OCT图像的处理方法的流程示意图;FIG1 is a schematic flow chart of an OCT image processing method disclosed in an embodiment of the present invention;
图2是本发明实施例公开的另一种OCT图像的处理方法的流程示意图;FIG2 is a schematic flow chart of another OCT image processing method disclosed in an embodiment of the present invention;
图3是本发明实施例公开的一种OCT图像的处理装置的结构示意图;FIG3 is a schematic diagram of the structure of an OCT image processing device disclosed in an embodiment of the present invention;
图4是本发明实施例公开的另一种OCT图像的处理装置的结构示意图;FIG4 is a schematic diagram of the structure of another OCT image processing device disclosed in an embodiment of the present invention;
图5是本发明实施例公开的又一种OCT图像的处理装置的结构示意图。FIG. 5 is a schematic diagram of the structure of yet another OCT image processing device disclosed in an embodiment of the present invention.
具体实施方式Detailed ways
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without inventive effort fall within the scope of protection of the present invention.
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、装置、产品或端没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或端固有的其他步骤或单元。The terms "first", "second", etc. in the specification, claims, and drawings of the present invention are used to distinguish different objects rather than to describe a specific order. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, device, product, or terminal that includes a series of steps or units is not limited to the listed steps or units, but may optionally include steps or units that are not listed, or may optionally include other steps or units inherent to the process, method, product, or terminal.
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本发明的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。Reference to "embodiments" herein means that a particular feature, structure, or characteristic described in conjunction with the embodiments may be included in at least one embodiment of the present invention. The appearance of the phrase in various places in the specification does not necessarily refer to the same embodiment, nor is it an independent or alternative embodiment that is mutually exclusive with other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein may be combined with other embodiments.
本发明公开了一种OCT图像的处理方法及装置,智能化获取包括目标特征的B-Scan图像,有利于提高B-Scan图像的分类效率;还能够结合深度神经网络模型以及该初始分层结果,智能化确定B-Scan图像中目标特征对应的层间分界信息,有利于提高图像分层效率以及提高图像分层结果的准确率。以下分别进行详细说明。The present invention discloses an OCT image processing method and device, which can intelligently obtain a B-Scan image including target features, which is beneficial to improving the classification efficiency of the B-Scan image; it can also combine a deep neural network model and the initial stratification result to intelligently determine the inter-layer boundary information corresponding to the target feature in the B-Scan image, which is beneficial to improving the image stratification efficiency and the accuracy of the image stratification result. The following are detailed descriptions.
实施例一Embodiment 1
请参阅图1,图1是本发明实施例公开的一种OCT图像的处理方法的流程示意图。其中,图1所描述的OCT方法可以应用于视网膜B-Scan图像的分层处理,该方法处理得到的分层结果可以应用于医学类教材的编写,也可以作为视网膜研究的辅助资料,本发明实施例不做限定。如图1所示,该OCT图像的处理方法可以包括以下操作:Please refer to FIG. 1, which is a schematic flowchart of an OCT image processing method disclosed in an embodiment of the present invention. The OCT image processing method described in FIG. 1 can be applied to the layered processing of retinal B-Scan images, and the layering results obtained by the method can be used in the compilation of medical textbooks or as auxiliary material for retinal research; the embodiment of the present invention is not limited in this respect. As shown in FIG. 1, the OCT image processing method may include the following operations:
101、获取目标特征对应的B-Scan图像。101. Obtain a B-Scan image corresponding to the target feature.
本发明实施例中,该目标特征可以包括人眼视网膜特征(下文简述为视网膜特征),对应的,该B-Scan图像可以包括经过OCT技术处理得到的包括视网膜特征的B-Scan图像。其中,B-Scan图像可以通过直接获取,承载有OCT扫描技术的设备采集处理得到的包括视网膜特征的B-Scan图像的方法得到,或者,通过获取系统数据库中存储的包括视网膜特征的B-Scan图像的方法得到,本发明实施例不做限定。In an embodiment of the present invention, the target feature may include a human retinal feature (hereinafter referred to as a retinal feature); correspondingly, the B-Scan image may be a B-Scan image including retinal features obtained through OCT processing. The B-Scan image may be obtained directly from a device equipped with OCT scanning capability that captures and processes B-Scan images including retinal features, or by retrieving a B-Scan image including retinal features stored in a system database; the embodiment of the present invention is not limited in this respect.
102、通过预设的图像处理算法对B-Scan图像执行图像分层处理,得到初始分层结果。102. Perform image layering processing on the B-Scan image using a preset image processing algorithm to obtain an initial layering result.
在本发明实施例中,当该B-Scan图像包括上述视网膜特征时,对应得到的初始分层结果为视网膜组织层对应的分层结果。该预设的图像处理算法包括由传统的B-Scan图像分层处理改进得到的算法(如经过改良的基于梯度代价图的最小代价路径算法)。In an embodiment of the present invention, when the B-Scan image includes the above-mentioned retinal features, the corresponding initial stratification result is the stratification result corresponding to the retinal tissue layer. The preset image processing algorithm includes an algorithm improved from the traditional B-Scan image stratification processing (such as an improved minimum cost path algorithm based on a gradient cost map).
需要说明的是,在本发明提供的OCT图像的处理方法中,由于视网膜所包括的组织层较多,在通过预设的图像处理算法对视网膜B-Scan图像执行图像分层处理时,并不对视网膜所包括的所有组织层进行分层,仅对视网膜组织层中相对其他视网膜组织层而言,边界更加明显的ILM(内界膜)层、ISOS(感光细胞的内节和外节)层以及BM(布鲁赫膜)层进行划分,从而在保证该OCT图像的处理算法性能鲁棒的前提下,提高B-Scan图像的视网膜组织层的分层效率和分层结果的准确率。It should be noted that in the OCT image processing method provided by the present invention, since the retina includes many tissue layers, when performing image stratification processing on the retinal B-Scan image through a preset image processing algorithm, not all tissue layers included in the retina are stratified, but only the ILM (inner limiting membrane) layer, ISOS (inner and outer segments of photoreceptor cells) layer and BM (Bruch's membrane) layer, which have more obvious boundaries than other retinal tissue layers, are divided. This improves the stratification efficiency of the retinal tissue layers of the B-Scan image and the accuracy of the stratification results while ensuring the robust performance of the processing algorithm of the OCT image.
可见,本发明实施例中,通过减少B-Scan图像中视网膜的组织层的分层层数的方式,减少了处理B-Scan图像时需要运算的数据量,从而实现了提高B-Scan图像的分层效率以及提高分层结果的准确率的目的。It can be seen that in the embodiment of the present invention, by reducing the number of stratification layers of the retinal tissue layers in the B-Scan image, the amount of data that needs to be calculated when processing the B-Scan image is reduced, thereby achieving the purpose of improving the stratification efficiency of the B-Scan image and improving the accuracy of the stratification results.
103、将B-Scan图像输入到预先训练好的深度神经网络模型中,得到输出结果。103. Input the B-Scan image into the pre-trained deep neural network model to obtain the output result.
在本发明实施例中,输出结果包括B-Scan图像的目标区域对应的每个像素点对应的概率,每个像素点对应的概率用于表示每个像素点属于上述初始分层结果所包括的某两个相邻分层的层间分界的可能性;目标区域为包括目标特征的区域。In an embodiment of the present invention, the output result includes a probability for each pixel in the target region of the B-Scan image, where the probability of a pixel indicates the likelihood that the pixel lies on the inter-layer boundary between some two adjacent layers included in the above initial stratification result; the target region is the region including the target feature.
进一步的,当该目标特征为上述的视网膜特征时,目标区域则为包括视网膜组织层的区域;假定上述初始分层结果包括三个组织层,且在图像竖直方向上,从上到下依次为ILM层、ISOS层以及BM层时,该某两个相邻分层则包括ILM层与ISOS层,以及ISOS层与BM层,其中的某两个相邻分层指代经过算法或人为划分得到的组织层中,在位置上相邻的两个组织层,并不指代实际视网膜所包括的所有组织层中相邻的两个组织层。Furthermore, when the target feature is the above-mentioned retinal feature, the target area is the area including the retinal tissue layer; assuming that the above-mentioned initial stratification result includes three tissue layers, and in the vertical direction of the image, from top to bottom they are the ILM layer, the ISOS layer and the BM layer, the two adjacent layers include the ILM layer and the ISOS layer, and the ISOS layer and the BM layer, wherein the two adjacent layers refer to two tissue layers that are adjacent in position among the tissue layers obtained by algorithm or artificial division, and do not refer to two adjacent tissue layers among all the tissue layers included in the actual retina.
104、根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息。104. According to the corresponding probabilities of all pixel points, determine the inter-layer boundary information corresponding to the target features in the B-Scan image.
在本发明实施例中,当该目标特征为上述的视网膜特征时,每个像素点对应的概率指代该像素点属于某两个相邻视网膜分层的层间分界的概率;以及上述根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息,具体可以包括以下步骤:In an embodiment of the present invention, when the target feature is the above-mentioned retinal feature, the probability corresponding to each pixel refers to the probability that the pixel belongs to the interlayer boundary of two adjacent retinal layers; and determining the interlayer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all pixels may specifically include the following steps:
对于所有像素点中的每列像素点,对该列像素点的概率分布执行归一化处理,得到该列像素点的归一化概率分布;For each column of pixels among all the pixels, a normalization process is performed on the probability distribution of the pixel columns to obtain a normalized probability distribution of the pixel columns;
对于所有像素点中的每列像素点,将该列像素点的归一化概率分布与该列像素点对应的行号分布进行点积运算,得到该列像素点对应的层间分布结果;For each column of all pixels, a dot product operation is performed between the normalized probability distribution of the column of pixels and the row number distribution corresponding to the column of pixels to obtain the inter-layer distribution result corresponding to the column of pixels;
根据所有像素点中的每列像素点对应的层间分布结果,确定B-Scan图像中目标特征对应的层间分界信息。According to the inter-layer distribution results corresponding to each column of pixels among all pixels, the inter-layer boundary information corresponding to the target features in the B-Scan image is determined.
需要说明的是,其中对该列像素点的概率分布执行归一化处理所用到的函数可以包括Softmax函数。It should be noted that the function used to perform normalization processing on the probability distribution of the column of pixels may include a Softmax function.
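The column-wise normalization and dot-product steps above amount to a soft-argmax over each column of the probability map: a Softmax turns the column scores into a distribution, and dotting that distribution with the row numbers yields a sub-pixel boundary row per column. A minimal numpy sketch of this idea (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def boundary_from_probabilities(prob_map):
    """Soft-argmax over each column of a (rows, cols) probability map.

    For every column, the scores are normalized with a Softmax and the
    normalized distribution is dotted with the row-number distribution,
    yielding a sub-pixel row coordinate for the boundary in that column.
    """
    rows, cols = prob_map.shape
    # Column-wise softmax (subtract the column max for numerical stability).
    shifted = prob_map - prob_map.max(axis=0, keepdims=True)
    weights = np.exp(shifted)
    weights /= weights.sum(axis=0, keepdims=True)
    # Dot product with the row numbers 0..rows-1.
    row_index = np.arange(rows, dtype=float)[:, None]
    return (weights * row_index).sum(axis=0)  # shape (cols,)

# Toy 5x3 map: each column peaks sharply at a different row.
demo = np.full((5, 3), -10.0)
demo[1, 0] = demo[2, 1] = demo[3, 2] = 10.0
boundary = boundary_from_probabilities(demo)
```

With such a sharp peak, the soft-argmax of each column lands (up to a tiny residual) on the peak row, so `boundary` is approximately `[1, 2, 3]`.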
可见,实施图1所描述的OCT图像的处理方法,能够对获取到的包括目标特征的B-Scan图像有针对性的执行图像分层处理,通过减少B-Scan图像中视网膜的组织层的分层层数的方式,减少了处理B-Scan图像时需要运算的数据量,提高了B-Scan图像的分层效率;还能够结合预先训练好的深度神经网络模型确定B-Scan图像中目标特征对应的层间分界信息,提高了分层效率,同时进一步提高了分层结果的准确率。It can be seen that the implementation of the OCT image processing method described in Figure 1 can perform targeted image stratification processing on the acquired B-Scan image including the target features, and by reducing the number of stratification layers of the retinal tissue layers in the B-Scan image, the amount of data required for calculation when processing the B-Scan image is reduced, thereby improving the stratification efficiency of the B-Scan image; it can also combine with a pre-trained deep neural network model to determine the inter-layer boundary information corresponding to the target features in the B-Scan image, thereby improving the stratification efficiency and further improving the accuracy of the stratification results.
在一个可选的实施例中,上述通过预设的图像处理算法对B-Scan图像执行图像分层处理,得到初始分层结果,具体可以包括以下步骤:In an optional embodiment, performing image layering processing on the B-Scan image by using a preset image processing algorithm to obtain an initial layering result may specifically include the following steps:
通过预设的滤波函数对B-Scan图像执行滤波处理,得到滤波图像;Performing filtering processing on the B-Scan image through a preset filtering function to obtain a filtered image;
计算滤波图像在图像竖直方向上的正梯度,并根据正梯度构建得到第一代价函数;Calculate the positive gradient of the filtered image in the vertical direction of the image, and construct a first cost function based on the positive gradient;
根据预先确定的路径算法以及第一代价函数,确定滤波图像从左侧边缘至右侧边缘的第一最小代价路径,得到第一分层线;According to a predetermined path algorithm and a first cost function, a first minimum cost path from a left edge to a right edge of the filtered image is determined to obtain a first layering line;
根据路径算法以及第一代价函数,确定滤波图像从左侧边缘至右侧边缘的第二最小代价路径,得到第二分层线;According to the path algorithm and the first cost function, a second minimum cost path from the left edge to the right edge of the filtered image is determined to obtain a second layering line;
计算滤波图像在图像竖直方向上的负梯度,并根据负梯度构建得到第二代价函数;Calculate the negative gradient of the filtered image in the vertical direction of the image, and construct a second cost function based on the negative gradient;
确定搜索区域,搜索区域为第一分层线与第二分层线中位置相对在下的分层线对应的下方区域;Determine a search area, where the search area is a lower area corresponding to a lower layer line between the first layer line and the second layer line;
根据路径算法以及第二代价函数,确定搜索区域从区域左侧边缘至区域右侧边缘的第三最小代价路径,并对第三最小代价路径执行平滑滤波操作,得到第三分层线;According to the path algorithm and the second cost function, a third minimum cost path from the left edge of the search area to the right edge of the area is determined, and a smoothing filter operation is performed on the third minimum cost path to obtain a third layer line;
将第一分层线、第二分层线以及第三分层线确定为初始分层结果;Determine the first stratification line, the second stratification line and the third stratification line as initial stratification results;
进一步的,在根据路径算法以及第一代价函数,确定滤波图像从左侧边缘至右侧边缘的第二最小代价路径,得到第二分层线之前,该方法还包括以下步骤:Furthermore, before determining the second minimum cost path from the left edge to the right edge of the filtered image according to the path algorithm and the first cost function to obtain the second layer line, the method further includes the following steps:
在滤波图像中标记第一路径为不可达路径。The first path is marked as an unreachable path in the filtered image.
在该可选的实施例中,该预先确定的路径算法可以为在Dijkstra或Bellman-Ford算法的基础上改进得到的最小代价路径算法,本发明实施例不做限定;以及,第一代价函数对应的函数表达式可以为Cost1=a*exp(-G)或Cost1=a*(-G),第二代价函数对应的函数表达式可以为Cost2=a*exp(-G)或者Cost2=a*(-G),其中G为梯度值。In this optional embodiment, the predetermined path algorithm may be a minimum cost path algorithm improved on the basis of the Dijkstra or Bellman-Ford algorithm, which is not limited in the embodiment of the present invention; and, the function expression corresponding to the first cost function may be Cost1=a*exp(-G) or Cost1=a*(-G), and the function expression corresponding to the second cost function may be Cost2=a*exp(-G) or Cost2=a*(-G), where G is the gradient value.
在该可选的实施例中,在滤波图像中标记第一路径为不可达路径,用于在得到第一分层线之后,根据该标记的不可达路径,在经过预先确定的路径算法重新得到最小代价路径之后,得到不同于第一分层线的第二层线,确保能够得到两条不同的分层线。In this optional embodiment, the first path is marked as an unreachable path in the filtered image, so that after obtaining the first layer line, a second layer line different from the first layer line is obtained based on the marked unreachable path and after re-obtaining the minimum cost path through a predetermined path algorithm, thereby ensuring that two different layer lines can be obtained.
在该可选的实施例中,通过预设的滤波函数对B-Scan图像执行滤波处理,其中预设的滤波函数可以为中值滤波函数以及均值滤波函数;对第三最小代价路径执行平滑滤波操作时,实际处理时可以为对第三最小代价路径所包括的坐标点进行中值滤波和均值滤波平滑处理。In this optional embodiment, filtering processing is performed on the B-Scan image through a preset filtering function, wherein the preset filtering function may be a median filtering function and a mean filtering function; when a smoothing filtering operation is performed on the third minimum cost path, the actual processing may be to perform median filtering and mean filtering smoothing processing on the coordinate points included in the third minimum cost path.
可见,该可选的实施例提供了一种最小代价路径算法,能够在B-Scan图像中划分出所需的第一分层线、第二分层线以及第三分层线,通过B-Scan图像中视网膜的组织层的分层层数的方式,减少了处理B-Scan图像时需要运算的数据量,进而提高了图像分层算法的分层效率以及提高分层结果的准确率。It can be seen that this optional embodiment provides a minimum cost path algorithm, which can divide the required first layer line, second layer line and third layer line in the B-Scan image, and reduce the amount of data required to be calculated when processing the B-Scan image by means of the layering number of retinal tissue layers in the B-Scan image, thereby improving the layering efficiency of the image layering algorithm and improving the accuracy of the layering results.
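The first two stratification lines of this optional embodiment can be sketched as follows. Because each step moves exactly one column to the right, a dynamic-programming pass finds the same path a Dijkstra-style search would, under the simplifying assumption of a vertical step of at most one row per column; the cost form `a*exp(-G)` on the positive vertical gradient follows the first cost function above. All names and the toy image are illustrative:

```python
import numpy as np

def min_cost_path(cost):
    """Left-to-right minimum cost path through a (rows, cols) cost map.

    Moves go one column right with a vertical step of -1, 0, or +1; with
    that restriction, dynamic programming is equivalent to a Dijkstra-style
    shortest-path search.  Returns one row index per column.
    """
    rows, cols = cost.shape
    acc = cost.copy()                      # accumulated cost per cell
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):       # backtrack
        path[c - 1] = back[path[c], c]
    return path

def two_layer_lines(image, a=1.0):
    """First two layer lines from the positive vertical gradient
    (cost = a * exp(-G)); the first path is marked unreachable
    before the second search, as the embodiment describes."""
    grad = np.diff(image, axis=0, prepend=image[:1])   # vertical gradient
    cost = a * np.exp(-np.clip(grad, 0.0, None))       # keep positive part
    line1 = min_cost_path(cost)
    blocked = cost.copy()
    blocked[line1, np.arange(cost.shape[1])] = np.inf  # unreachable path
    line2 = min_cost_path(blocked)
    return line1, line2

# Toy image with two bright bands (strong upward gradients at rows 2 and 6).
img = np.zeros((9, 6))
img[2:4] = 1.0
img[6:8] = 1.0
l1, l2 = two_layer_lines(img)
```

On this toy image the two dark-to-bright transitions sit at rows 2 and 6, so the two extracted lines run along those rows. The third line of the embodiment would repeat the same search on a negative-gradient cost map restricted to the region below the lower of these two lines, followed by median/mean smoothing of the path coordinates.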
在另一个可选的实施例中,在将B-Scan图像输入到预先训练好的深度神经网络模型中,得到输出结果之后,以及在根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息之前,该方法还可以包括以下步骤:In another optional embodiment, after inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, and before determining the inter-layer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all pixel points, the method may further include the following steps:
判断所有像素点中是否存在落入到目标区域之外的目标像素点;Determine whether there is a target pixel point falling outside the target area among all the pixels;
当判断出所有像素点中不存在落入到目标区域之外的目标像素点时,触发执行根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息的操作;When it is determined that there is no target pixel point falling outside the target area among all the pixels, an operation of determining the inter-layer boundary information corresponding to the target feature in the B-Scan image according to the probabilities corresponding to all the pixels is triggered;
当判断出所有像素点中存在落入到目标区域之外的目标像素点时,对落入到目标区域之外的所有目标像素点执行概率更新操作,以更新所有目标像素点对应的概率,并触发执行根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息的操作。When it is determined that there are target pixels falling outside the target area among all the pixels, a probability update operation is performed on all the target pixels falling outside the target area to update the probabilities corresponding to all the target pixels, and trigger the execution of an operation to determine the inter-layer boundary information corresponding to the target feature in the B-Scan image based on the probabilities corresponding to all the pixels.
在该可选的实施例中,判断所有像素点中是否存在落入到目标区域之外的目标像素点,包括:In this optional embodiment, determining whether there is a target pixel point falling outside the target area among all the pixel points includes:
判断所有像素点中每个像素点对应的坐标数值是否超过了目标区域所包括的坐标区间。Determine whether the coordinate value corresponding to each pixel point among all the pixels exceeds the coordinate interval included in the target area.
可见,该可选的实施例能够针对落入到目标区域之外的所有目标像素点执行概率更新操作,有利于后续根据所有像素点对应的概率,确定所述B-Scan图像中所述目标特征对应的层间分界信息时,提高确定出的层间分界信息的准确性。It can be seen that this optional embodiment can perform a probability update operation for all target pixel points falling outside the target area, which is beneficial to subsequently determine the inter-layer boundary information corresponding to the target features in the B-Scan image based on the probabilities corresponding to all pixel points, thereby improving the accuracy of the determined inter-layer boundary information.
在该可选的实施例中,上述判断所有像素点中是否存在落入到目标区域之外的目标像素点,还可以包括:In this optional embodiment, the above-mentioned determining whether there is a target pixel point falling outside the target area among all the pixel points may also include:
判断所有像素点中每个像素点对应的概率是否在预设概率阈值之内,其中预设概率阈值为预先确定出的落在目标区域内的像素点对应的概率阈值(如(0.5,1),不包括两个端点数值)。Determine whether the probability corresponding to each pixel among all pixels is within a preset probability threshold, where the preset probability threshold is a predetermined probability threshold corresponding to the pixel falling within the target area (such as (0.5, 1), excluding the two endpoint values).
可见,在该可选的实施例中,提供了另一种判断像素点是否落入到目标区域内的方法,区别于分析处理每个像素点对应的坐标数值,包括x轴、y轴甚至是z轴上的坐标数据,仅分析处理像素点对应的概率值,减少了分析处理的数据量,拓展了判断像素点是否落入到目标区域内的方法,同时提高了判断并得到结果的效率。It can be seen that in this optional embodiment, another method for determining whether a pixel point falls within the target area is provided. Instead of analyzing and processing the coordinate values corresponding to each pixel point, including the coordinate data on the x-axis, y-axis and even z-axis, only the probability values corresponding to the pixel points are analyzed and processed, which reduces the amount of data analyzed and processed, expands the method for determining whether a pixel point falls within the target area, and improves the efficiency of determining and obtaining results.
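The two checks described above, by coordinate interval and by probability threshold, can be put side by side in a short numpy sketch. The function name, the row-interval form of the target region, and the toy values are assumptions for illustration; the open interval (0.5, 1) follows the example threshold in the text:

```python
import numpy as np

def outside_mask(prob_map, top, bottom, p_lo=0.5, p_hi=1.0):
    """Two interchangeable checks for pixels outside the target region.

    by_coord -- row index falls outside the [top, bottom] coordinate interval
    by_prob  -- probability is not inside the open interval (p_lo, p_hi)
    """
    rows, cols = prob_map.shape
    row_idx = np.arange(rows)[:, None] * np.ones((1, cols), dtype=int)
    by_coord = (row_idx < top) | (row_idx > bottom)
    by_prob = ~((prob_map > p_lo) & (prob_map < p_hi))
    return by_coord, by_prob

probs = np.array([[0.1, 0.2],
                  [0.7, 0.9],
                  [0.3, 0.6]])
coord_out, prob_out = outside_mask(probs, top=1, bottom=1)
```

The probability-based variant only ever inspects one scalar per pixel, which is the data-volume saving the paragraph above points out; note the two masks need not agree (here the pixel with probability 0.6 passes the probability check but fails the coordinate check).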
实施例二Embodiment 2
请参阅图2,图2是本发明实施例公开的另一种OCT图像的处理方法的流程示意图。其中,图2所描述的OCT方法可以应用于视网膜B-Scan图像的分层处理,该方法处理得到的分层结果可以应用于医学类教材的编写,也可以作为视网膜研究的辅助资料,本发明实施例不做限定。如图2所示,该OCT图像的处理方法可以包括以下操作:Please refer to FIG. 2, which is a flowchart of another OCT image processing method disclosed in an embodiment of the present invention. The OCT method described in FIG. 2 can be applied to the layered processing of retinal B-Scan images. The layered results obtained by the method can be applied to the compilation of medical textbooks or as auxiliary materials for retinal research, which is not limited in the embodiment of the present invention. As shown in FIG. 2, the OCT image processing method can include the following operations:
201、获取目标特征对应的B-Scan图像。201. Obtain a B-Scan image corresponding to the target feature.
202、通过预设的图像处理算法对B-Scan图像执行图像分层处理,得到初始分层结果。202. Perform image layering processing on the B-Scan image using a preset image processing algorithm to obtain an initial layering result.
203、将B-Scan图像输入到预先训练好的深度神经网络模型中,得到输出结果。203. Input the B-Scan image into a pre-trained deep neural network model to obtain an output result.
204、根据所有像素点对应的概率,确定B-Scan图像中目标特征对应的层间分界信息。204. Determine the inter-layer boundary information corresponding to the target feature in the B-Scan image according to the corresponding probabilities of all the pixel points.
本发明实施例中,针对步骤201-步骤204的其他描述请参阅实施例一中针对步骤101-步骤104的其他具体描述,本发明实施例不再赘述。In the embodiment of the present invention, for other descriptions of step 201 to step 204, please refer to other specific descriptions of step 101 to step 104 in embodiment 1, and the embodiment of the present invention will not be repeated here.
205、确定B-Scan图像中包括目标特征的目标区域。205. Determine a target region including target features in the B-Scan image.
在本发明实施例中,在将所述B-Scan图像输入到预先训练好的深度神经网络模型中,得到输出结果之前,确定B-Scan图像中包括目标特征的目标区域,具体可以包括以下步骤:In an embodiment of the present invention, before inputting the B-Scan image into a pre-trained deep neural network model to obtain an output result, determining a target area including target features in the B-Scan image may specifically include the following steps:
将第一分层线与第二分层线中位置相对在上的分层线,在图像竖直方向上向上偏移第一预设距离,得到第一边界线;The first stratification line and the stratification line located relatively above the second stratification line are shifted upward by a first preset distance in the vertical direction of the image to obtain a first boundary line;
将第三分层线在图像竖直方向上向下偏移第二预设距离,得到第二边界线;The third layering line is shifted downward by a second preset distance in the vertical direction of the image to obtain a second boundary line;
将第一边界线以下、第二边界线以上的区域确定为B-Scan图像中包括目标特征的目标区域。The area below the first boundary line and above the second boundary line is determined as a target area including target features in the B-Scan image.
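The three steps above can be sketched directly: shift the upper of the first two layer lines up by one preset distance, shift the third line down by another, and clamp both to the image. Function names and the concrete distances are illustrative, not from the patent:

```python
import numpy as np

def target_region(line_upper, line_third, d1, d2, rows):
    """Per-column target region boundaries.

    line_upper -- the higher of the first/second stratification lines
    line_third -- the third stratification line
    Returns (first boundary line, second boundary line), clamped to the image.
    """
    top = np.clip(np.asarray(line_upper) - d1, 0, rows - 1)     # shift up by d1
    bottom = np.clip(np.asarray(line_third) + d2, 0, rows - 1)  # shift down by d2
    return top, bottom

top, bottom = target_region([10, 12, 11], [40, 41, 42], d1=5, d2=8, rows=50)
```

The region between `top` and `bottom` in each column is then the target region handed to the deep neural network; clamping keeps the boundary lines inside the image when the offsets would run past an edge (as in the last column here).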
可见,实施图2所描述的OCT图像的处理方法,能够对获取到的包括目标特征的B-Scan图像执行图像分层处理,通过减少B-Scan图像中视网膜的组织层的分层层数的方式,减少了处理B-Scan图像时需要运算的数据量,提高了B-Scan图像的分层效率;还能够结合预先训练好的深度神经网络模型确定B-Scan图像中目标特征对应的层间分界信息,提高了分层效率,同时进一步提高了分层结果的准确率;除此之外,还能够智能化确定出包括目标特征的目标区域,有利于后续针对目标区域执行图像处理操作时,降低图像处理算法需要处理的数据量,一定程度上提高了图像的分层效率;同时在明确包括目标特征的目标区域之后,有利于减少了不包括目标特征的冗余区域对图像分层处理的干扰,有利于提高图像分层结果的准确率。It can be seen that the OCT image processing method described in Figure 2 can perform image stratification processing on the acquired B-Scan image including the target feature, and by reducing the number of stratification layers of the retinal tissue layer in the B-Scan image, the amount of data required for calculation when processing the B-Scan image is reduced, thereby improving the stratification efficiency of the B-Scan image; it can also determine the inter-layer boundary information corresponding to the target feature in the B-Scan image in combination with a pre-trained deep neural network model, thereby improving the stratification efficiency and further improving the accuracy of the stratification results; in addition, it can also intelligently determine the target area including the target feature, which is beneficial to reduce the amount of data that the image processing algorithm needs to process when performing subsequent image processing operations on the target area, thereby improving the image stratification efficiency to a certain extent; at the same time, after clarifying the target area including the target feature, it is beneficial to reduce the interference of redundant areas that do not include the target feature on the image stratification processing, thereby improving the accuracy of the image stratification results.
在一个可选的实施例中,判断所有像素点中是否存在落入到目标区域之外的目标像素点,包括:In an optional embodiment, determining whether there is a target pixel point falling outside the target area among all the pixel points includes:
对于所有像素点中的每列像素点,判断该列像素点中是否存在落入到目标区域之外的目标像素点;For each column of all pixel points, determine whether there is a target pixel point that falls outside the target area in the column of pixel points;
以及,对落入到目标区域之外的所有目标像素点执行概率更新操作,以更新落入到目标区域之外的所有目标像素点对应的概率,包括:And, performing a probability update operation on all target pixel points falling outside the target area to update the probabilities corresponding to all target pixel points falling outside the target area, including:
对于所有像素点中的每列像素点,若该列像素点中存在落入到目标区域之外的目标像素点,则对该列像素点中落入到目标区域之外的每个目标像素点乘以与该目标像素点对应的预设数值,得到乘积结果,并根据乘积结果更新该目标像素点对应的概率。For each column of all pixels, if there is a target pixel that falls outside the target area in the column of pixels, each target pixel that falls outside the target area in the column of pixels is multiplied by a preset value corresponding to the target pixel to obtain a product result, and the probability corresponding to the target pixel is updated according to the product result.
在该可选的实施例中,该预设数值可以包括一个固定且数值极小的系数ε(如0.0001),该预设数值包括的系数ε也可以是一个随着位置变化的衰减值,例如,以第一边界线与第二边界线组成的目标区域中,处于目标区域中心,且与第一边界线距离和与第二边界线的距离相等的区域所在区域作为参考系,以该参考系所在区域为起点,定义该起点到达第一边界线或第二边界线距离最远,当系数ε对应的像素点所在的位置距离第一边界线或第二边界线越近,系数ε对应的数值越大,系数ε对应的像素点所在的位置距离第一边界线或第二边界线越远,系数ε对应的数值越小。In this optional embodiment, the preset value may include a fixed, extremely small coefficient ε (such as 0.0001). Alternatively, the coefficient ε may be an attenuation value that varies with position: taking the row at the center of the target region formed by the first and second boundary lines, equidistant from both, as the reference, that reference row is defined as the position farthest from either boundary line; the closer the pixel corresponding to ε is to the first or second boundary line, the larger the value of ε, and the farther the pixel is from the boundary lines, the smaller the value of ε.
可见,该可选的实施例中,提供了以列为单位处理像素点的步骤,既能逐个处理像素点,进一步也能以列为单位处理像素点,提高了针对像素点的处理效率,一定程度上提高了OCT图像的分层效率。It can be seen that in this optional embodiment, a step of processing pixels in columns is provided, which can not only process pixels one by one, but also process pixels in columns, thereby improving the processing efficiency for pixels and improving the layering efficiency of OCT images to a certain extent.
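The column-wise update with a position-dependent ε can be sketched as follows. This is a minimal illustration assuming a linear decay between an assumed maximum and minimum value; the patent fixes neither the decay law nor the bounds, and the function and parameter names are mine:

```python
import numpy as np

def attenuation_coefficients(col_height, first_boundary, second_boundary,
                             eps_max=0.01, eps_min=0.0001):
    """For one image column, build a per-row coefficient eps used to down-weight
    probabilities of pixels outside the target area [first_boundary, second_boundary].
    Hypothetical linear decay: eps is largest right at a boundary line and
    shrinks as the pixel moves farther away from the target area."""
    rows = np.arange(col_height)
    # distance of each row to the nearest boundary line (0 inside the target area)
    dist = np.where(rows < first_boundary, first_boundary - rows,
                    np.where(rows > second_boundary, rows - second_boundary, 0))
    max_dist = max(first_boundary, col_height - 1 - second_boundary, 1)
    eps = eps_max - (eps_max - eps_min) * (dist / max_dist)
    return eps, dist

def update_column_probs(probs, first_boundary, second_boundary, **kw):
    """Multiply the probabilities of rows outside the target area by eps;
    rows inside the target area are left untouched."""
    probs = np.asarray(probs, dtype=float).copy()
    eps, dist = attenuation_coefficients(len(probs), first_boundary,
                                         second_boundary, **kw)
    outside = dist > 0
    probs[outside] *= eps[outside]
    return probs
```

A fixed ε is the special case `eps_max == eps_min`.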
In another optional embodiment, the deep neural network model is trained as follows:

acquire a B-Scan image set with annotation information, where the annotation information of each B-Scan image in the set includes label information corresponding to the target feature and boundary information corresponding to the target feature;

divide the B-Scan image set into a training set and a test set, the training set being used to train the deep neural network model;

perform a target processing operation on all B-Scan images in the training set to obtain processing results, where the target processing operation includes at least one of vertical shifting, left-right flipping, up-down inversion, and contrast adjustment;

feed the processing results as input data into a predetermined deep neural network model to obtain output results;

analyze and calculate the joint loss from the output results, the B-Scan images in the training set, and the boundary information, obtaining a joint loss value;

back-propagate the joint loss value through the deep neural network model and iterate the training for a preset number of epochs, obtaining the trained deep neural network model;

where the test set is used to verify the reliability of the trained deep neural network model.
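The target processing (augmentation) operations listed above can be sketched as follows. This is an illustrative NumPy version for a normalized 2-D B-Scan array; the parameter names (`shift`, `flip_lr`, etc.) are assumptions, not from the patent:

```python
import numpy as np

def augment(img, shift=0, flip_lr=False, flip_ud=False, contrast=1.0):
    """Apply the target processing operations from the training pipeline:
    vertical shift, left-right flip, up-down inversion, contrast adjustment.
    Assumes a 2-D grayscale image with intensities in [0, 1]."""
    out = np.asarray(img, dtype=float)
    if shift:                      # move up/down, padding exposed rows with zeros
        out = np.roll(out, shift, axis=0)
        if shift > 0:
            out[:shift] = 0
        else:
            out[shift:] = 0
    if flip_lr:                    # mirror left-right
        out = out[:, ::-1]
    if flip_ud:                    # invert up-down
        out = out[::-1, :]
    if contrast != 1.0:            # scale around the mean intensity
        mean = out.mean()
        out = np.clip((out - mean) * contrast + mean, 0.0, 1.0)
    return out
```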
In this optional embodiment, analyzing and calculating the joint loss to obtain the joint loss value includes:

calculating the label loss L_label_dice from the first data (denoted M0) included in the output result and the annotated pixel-level labels, where the pixel-level labels are labels encoded with a preset encoding scheme (one-hot);

calculating the cross-entropy loss L_label_ce from the first data (M0) and the pixel-level labels;

calculating the boundary cross-entropy loss L_bd_ce from the second data (denoted B0) included in the output result and the boundary information;

calculating the smoothing loss L_bd_l1 from the third data (denoted B2) included in the above mathematical operation result and the boundary information;

multiplying the label loss L_label_dice, the cross-entropy loss L_label_ce, the boundary cross-entropy loss L_bd_ce, and the smoothing loss L_bd_l1 each by a coefficient to obtain the respective products, and summing all the products to obtain the numerical result of the joint loss, which serves as the joint loss value.

In practice, the joint loss is computed as:

L = λ_label_dice · L_label_dice + λ_label_ce · L_label_ce + λ_bd_ce · L_bd_ce + λ_bd_l1 · L_bd_l1
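The weighted sum above can be sketched directly. The soft-Dice helper and the λ values used here are illustrative assumptions; the patent specifies the form of the sum but not the coefficients:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between predicted probabilities and one-hot labels
    (one possible realization of the label loss L_label_dice)."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def joint_loss(l_label_dice, l_label_ce, l_bd_ce, l_bd_l1,
               w=(1.0, 1.0, 0.5, 0.1)):
    """L = λ1·L_label_dice + λ2·L_label_ce + λ3·L_bd_ce + λ4·L_bd_l1.
    The weights w are placeholders, not values from the patent."""
    losses = (l_label_dice, l_label_ce, l_bd_ce, l_bd_l1)
    return float(sum(lam * l for lam, l in zip(w, losses)))
```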
It can be seen that the trained deep neural network model, combined with the column-wise processing operations (including normalization and a dot-product operation) applied to the probability distribution of each column of pixels, determines the inter-layer boundary information corresponding to the target feature in the B-Scan image, thereby improving both the layering efficiency of OCT images and the accuracy of the layering results.
Embodiment 3

Please refer to FIG. 3, a schematic structural diagram of an OCT image processing apparatus disclosed in an embodiment of the present invention. The apparatus may be an OCT image processing terminal, device, system, or server; the server may be a local server, a remote server, or a cloud server. When the server is not a cloud server, it can communicate with a cloud server; the embodiments of the present invention impose no limitation here. As shown in FIG. 3, the apparatus may include an acquisition module 301, a first processing module 302, a second processing module 303, and a first determination module 304, where:

the acquisition module 301 is configured to acquire a B-Scan image corresponding to the target feature;

the first processing module 302 is configured to perform image layering on the B-Scan image acquired by the acquisition module 301 using a preset image processing algorithm, obtaining an initial layering result;

the second processing module 303 is configured to feed the B-Scan image acquired by the acquisition module 301 into a pre-trained deep neural network model, obtaining an output result that includes, for each pixel of the target area of the B-Scan image, a probability representing the likelihood that the pixel belongs to the inter-layer boundary between two adjacent layers in the initial layering result, the target area being the area containing the target feature;

the first determination module 304 is configured to determine, from the probabilities of all the pixels obtained by the second processing module 303, the inter-layer boundary information corresponding to the target feature in the B-Scan image acquired by the acquisition module 301.

It can be seen that the OCT image processing apparatus described in FIG. 3 can perform targeted image layering on the acquired B-Scan image containing the target feature. By reducing the number of layers into which the retinal tissue in the B-Scan image is divided, it reduces the amount of computation needed to process the B-Scan image and improves layering efficiency; it can also determine the inter-layer boundary information corresponding to the target feature with a pre-trained deep neural network model, further improving both the layering efficiency and the accuracy of the layering results.
In an optional embodiment, as shown in FIG. 4, the first processing module 302 may include a filtering submodule 3021, a function construction submodule 3022, a first determination submodule 3023, and a second determination submodule 3024, where:

the filtering submodule 3021 is configured to filter the B-Scan image with a preset filtering function, obtaining a filtered image;

the function construction submodule 3022 is configured to compute the positive vertical gradient of the filtered image obtained by the filtering submodule 3021 and construct a first cost function from that gradient;

the first determination submodule 3023 is configured to determine, according to a predetermined path algorithm and the first cost function obtained by the function construction submodule 3022, the first minimum-cost path of the filtered image from its left edge to its right edge, obtaining a first layer line;

the first determination submodule 3023 is further configured to determine, according to the predetermined path algorithm and the first cost function, the second minimum-cost path of the filtered image from its left edge to its right edge, obtaining a second layer line;

the function construction submodule 3022 is further configured to compute the negative vertical gradient of the filtered image and construct a second cost function from that gradient;

the second determination submodule 3024 is configured to determine a search region, namely the region below the lower of the first and second layer lines obtained by the first determination submodule 3023;

the first determination submodule 3023 is further configured to determine, according to the predetermined path algorithm and the second cost function, the third minimum-cost path of the search region from its left edge to its right edge, and to apply a smoothing filter to that path, obtaining a third layer line;

the second determination submodule 3024 is further configured to take the first, second, and third layer lines as the initial layering result.
Further, the first processing module 302 may also include a marking submodule 3025, where:

the marking submodule 3025 is configured to mark the first path as an unreachable path in the filtered image obtained by the filtering submodule 3021, before the first determination submodule 3023 determines the second minimum-cost path of the filtered image from its left edge to its right edge to obtain the second layer line.

It can be seen that this optional embodiment provides a minimum-cost-path algorithm that can extract the required first, second, and third layer lines from the B-Scan image. By reducing the number of layers into which the retinal tissue in the B-Scan image is divided, it reduces the amount of computation needed when processing the B-Scan image, thereby improving the layering efficiency of the image layering algorithm and the accuracy of the layering results.
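The minimum-cost-path idea behind the layer lines can be sketched as a simple dynamic program over a per-pixel cost image built from the vertical gradient. This is a generic sketch of the technique (each step allows moves to the three adjacent rows of the next column), not the patent's exact path algorithm:

```python
import numpy as np

def gradient_cost(filtered, sign=+1):
    """Cost from the vertical gradient: low cost where the positive (sign=+1)
    or negative (sign=-1) vertical gradient is strong, mirroring the first
    and second cost functions."""
    g = np.diff(filtered.astype(float), axis=0, prepend=filtered[:1])
    g = np.maximum(sign * g, 0.0)
    return 1.0 - g / (g.max() + 1e-9)

def min_cost_path(cost):
    """Dynamic-programming minimum-cost path from the left edge to the right
    edge of a cost image; returns one row index per column (a layer line)."""
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost
    back = np.zeros((h, w), dtype=int)       # back-pointers for path recovery
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(0, i - 1), min(h, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] += acc[k, j - 1]
            back[i, j] = k
    path = np.empty(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(w - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path
```

On a B-Scan-like image, the path returned with the positive-gradient cost follows the strongest dark-to-bright transition, which is the intuition behind the first layer line.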
In another optional embodiment, as shown in FIG. 4, the OCT image processing apparatus further includes a second determination module 305, where:

the second determination module 305 is configured to determine the target area containing the target feature in the B-Scan image, before the second processing module 303 feeds the acquired B-Scan image into the pre-trained deep neural network model to obtain the output result.

Specifically, the second determination module 305 determines the target area containing the target feature in the B-Scan image as follows:

shift the upper of the first and second layer lines upward, in the vertical direction of the image, by a first preset distance, obtaining a first boundary line;

shift the third layer line downward, in the vertical direction of the image, by a second preset distance, obtaining a second boundary line;

determine the area below the first boundary line and above the second boundary line as the target area containing the target feature in the B-Scan image.

It can be seen that this optional embodiment can automatically determine the target area containing the target feature, which reduces the amount of data the image processing algorithm must handle in subsequent operations on the target area and, to some extent, improves layering efficiency; moreover, once the target area containing the target feature is identified, interference from redundant regions that do not contain the target feature is reduced, improving the accuracy of the layering results.
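The three steps above can be sketched as follows, assuming each layer line is stored as an array of row indices, one per column (smaller row index means higher in the image); function and parameter names are illustrative:

```python
import numpy as np

def target_region_bounds(line1, line2, line3, d1, d2, height):
    """Shift the upper of the first/second layer lines up by d1 to get the
    first boundary line, and the third layer line down by d2 to get the
    second boundary line, clipping to the image."""
    upper = np.minimum(line1, line2)            # the line higher in the image
    first_boundary = np.clip(upper - d1, 0, height - 1)
    second_boundary = np.clip(line3 + d2, 0, height - 1)
    return first_boundary, second_boundary

def region_mask(first_boundary, second_boundary, height):
    """Boolean mask of the target area: per column, the rows below the first
    boundary line and above the second boundary line."""
    rows = np.arange(height)[:, None]
    return (rows >= first_boundary[None, :]) & (rows <= second_boundary[None, :])
```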
In yet another optional embodiment, as shown in FIG. 4, the OCT image processing apparatus further includes a judgment module 306 and a third processing module 307, where:

the judgment module 306 is configured to judge, after the second processing module 303 feeds the B-Scan image into the pre-trained deep neural network model to obtain the output result, and before the first determination module 304 determines the inter-layer boundary information corresponding to the target feature from the probabilities of all the pixels, whether any of the pixels are target pixels falling outside the target area; when it judges that none are, it triggers the first determination module 304 to determine the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels;

the third processing module 307 is configured to, when the judgment module 306 judges that some pixels are target pixels falling outside the target area, perform a probability update operation on all such target pixels to update their probabilities, and then trigger the first determination module 304 to determine the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels.

It can be seen that this optional embodiment performs a probability update operation on all target pixels falling outside the target area, which improves the accuracy of the inter-layer boundary information subsequently determined from the probabilities of all the pixels.

In this optional embodiment, the judgment module 306 judges whether any of the pixels are target pixels falling outside the target area as follows:

for each column of pixels, judge whether the column contains target pixels falling outside the target area.

And the third processing module 307 updates the probabilities of all target pixels falling outside the target area as follows:

for each column of pixels, if the column contains target pixels falling outside the target area, multiply each such pixel's probability by the preset value corresponding to that pixel to obtain a product, and update the pixel's probability according to the product.

It can be seen that this optional embodiment provides a step for processing pixels column by column: pixels can be processed either one by one or a column at a time, improving pixel-processing efficiency and, to some extent, the layering efficiency of the image. In addition, the probability update operation reduces the chance that abnormal probabilities remain among the pixel probabilities when the inter-layer boundary information corresponding to the target feature in the B-Scan image is subsequently determined.
In yet another optional embodiment, the first determination module 304 determines the inter-layer boundary information corresponding to the target feature in the B-Scan image from the probabilities of all the pixels as follows:

for each column of pixels, normalize the probability distribution of that column, obtaining its normalized probability distribution;

for each column of pixels, take the dot product of the column's normalized probability distribution with the row-number distribution of that column, obtaining the inter-layer distribution result for the column;

determine the inter-layer boundary information corresponding to the target feature in the B-Scan image from the inter-layer distribution results of all the columns.
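The column-wise normalization and dot product can be sketched as follows; simple sum normalization is used here as a stand-in for the patent's unspecified normalization step. The dot product with the row-index vector yields one (sub-pixel) boundary row per column:

```python
import numpy as np

def column_boundary_rows(prob_map):
    """For each column of the boundary-probability map: normalize the column
    to a probability distribution, then take its dot product with the row
    indices, giving the expected boundary row per column."""
    p = np.asarray(prob_map, dtype=float)
    h, w = p.shape
    p = p / p.sum(axis=0, keepdims=True)   # normalize each column to sum 1
    rows = np.arange(h, dtype=float)
    return rows @ p                        # expected row index per column
```

Stacking these per-column results over all columns gives the inter-layer boundary line.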
And the deep neural network model is trained as follows:

acquire a B-Scan image set with annotation information, where the annotation information of each B-Scan image in the set includes label information corresponding to the target feature and boundary information corresponding to the target feature;

divide the B-Scan image set into a training set and a test set, the training set being used to train the deep neural network model;

perform a target processing operation on all B-Scan images in the training set to obtain processing results, where the target processing operation includes at least one of vertical shifting, left-right flipping, up-down inversion, and contrast adjustment;

feed the processing results as input data into a predetermined deep neural network model to obtain output results;

analyze and calculate the joint loss from the output results, the B-Scan images in the training set, and the boundary information, obtaining a joint loss value;

back-propagate the joint loss value through the deep neural network model and iterate the training for a preset number of epochs, obtaining the trained deep neural network model;

where the test set is used to verify the reliability of the trained deep neural network model.

It can be seen that the trained deep neural network model, combined with the column-wise processing operations (including normalization and a dot-product operation) applied to the probability distribution of each column of pixels, determines the inter-layer boundary information corresponding to the target feature in the B-Scan image, thereby improving both the layering efficiency of OCT images and the accuracy of the layering results.
Embodiment 4

Please refer to FIG. 5, a schematic structural diagram of another OCT image processing apparatus disclosed in an embodiment of the present invention. As shown in FIG. 5, the apparatus includes:

a memory 401 storing executable program code;

a processor 402 coupled to the memory 401;

and, further, it may include an input interface 403 and an output interface 404 coupled to the processor 402;

where the processor 402 calls the executable program code stored in the memory 401 to execute the steps of the OCT image processing method described in Embodiment 1 or Embodiment 2 of the present invention.
Embodiment 5

An embodiment of the present invention discloses a computer program product comprising a non-transitory computer storage medium storing a computer program, the computer program being operable to cause a computer to execute the steps of the OCT image processing method described in Embodiment 1 or Embodiment 2.

The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment. Those of ordinary skill in the art can understand and implement them without creative effort.

From the detailed description of the embodiments above, those skilled in the art will clearly understand that each implementation can be realized by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the essence of the above technical solutions, or the part that contributes over the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer storage medium, including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can carry or store data.

Finally, it should be noted that the OCT image processing method and apparatus disclosed in the embodiments of the present invention are merely preferred embodiments, intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (9)
Priority Applications (1)
- CN202111435331.4A (CN114092464B), priority date 2021-11-29, filing date 2021-11-29: OCT image processing method and device
Publications (2)
- CN114092464A, published 2022-02-25
- CN114092464B, published 2024-06-07
Family ID: 80305758
Families Citing this family (1)
- CN117173124A (priority 2023-09-01, published 2023-12-05), 视微影像(河南)科技有限公司: Layering result display method of OCT (optical coherence tomography) image
Citations (5)
- CN105374028A (priority 2015-10-12, published 2016-03-02), Shanghai Institute of Optics and Fine Mechanics, CAS: Optical coherence tomography retina image layering method
- CN110390650A (priority 2019-07-23, published 2019-10-29), Central South University: OCT image denoising method based on dense connection and generative adversarial network
- CN111462160A (priority 2019-01-18, published 2020-07-28), Beijing Jingdong Shangke Information Technology Co., Ltd.: Image processing method, device and storage medium
- CN112330638A (priority 2020-11-09, published 2021-02-05), Soochow University: Horizontal registration and image enhancement method for retina OCT image
- CN112700390A (priority 2021-01-14, published 2021-04-23), Shantou University: Cataract OCT image repairing method and system based on machine learning
Family Cites Families (1)
- EP3365870B1 (priority 2015-10-19, published 2020-08-26), The Charles Stark Draper Laboratory, Inc.: System and method for the segmentation of optical coherence tomography slices
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109166124B (en) | A Quantitative Method for Retinal Vascular Morphology Based on Connected Regions | |
Balakrishna et al. | Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 Encoder | |
JP7531946B2 (en) | Method and device for determining fetal slices based on ultrasonic video images | |
JP6842481B2 (en) | 3D quantitative analysis of the retinal layer using deep learning | |
CN111145206A (en) | Liver image segmentation quality evaluation method and device and computer equipment | |
CN109829894A (en) | Parted pattern training method, OCT image dividing method, device, equipment and medium | |
CN110807427B (en) | Sight tracking method and device, computer equipment and storage medium | |
CN111862044A (en) | Ultrasound image processing method, apparatus, computer equipment and storage medium | |
CN107578413B (en) | Method, apparatus, device, and readable storage medium for retinal image layering |
CN110598652B (en) | Fundus data prediction method and device | |
KR20190105180A (en) | Apparatus for Lesion Diagnosis Based on Convolutional Neural Network and Method thereof | |
CN111696100A (en) | Method and device for determining smoking degree based on fundus image | |
Baid et al. | Detection of pathological myopia and optic disc segmentation with deep convolutional neural networks | |
US11393085B2 (en) | Image analysis using machine learning and human computation | |
CN113724203B (en) | Model training method and device applied to target feature segmentation in OCT image | |
CN114092464B (en) | OCT image processing method and device | |
CN115954101A (en) | Health degree management system and management method based on AI tongue diagnosis image processing | |
CN113658165A (en) | Cup-to-disc ratio determining method, device, equipment and storage medium |
CN113256670A (en) | Image processing method and device, and network model training method and device | |
CN115730269A (en) | Multimodal neurobiological signal processing method, device, server and storage medium | |
CN118398208A (en) | Atrial fibrillation risk assessment method based on machine learning | |
CN112750110A (en) | Evaluation system for evaluating lung lesion based on neural network and related products | |
CN115148365B (en) | Methods and systems for predicting prognosis of CNS germ cell tumors | |
CN116894817A (en) | A tumor progression assessment method based on two-stage multi-task learning | |
EP4252179A1 (en) | Quality maps for optical coherence tomography angiography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: Processing method and device for OCT images
Granted publication date: 2024-06-07
Pledgee: Shenyang Aimu Trading Co., Ltd.
Pledgor: GUANGDONG WEIREN MEDICAL TECHNOLOGY Co., Ltd.; Weiren Medical (Foshan) Co., Ltd.; Weizhi Medical Technology (Foshan) Co., Ltd.
Registration number: Y2024980047105