WO2023240674A1 - Fundus image quality optimization method based on deep learning - Google Patents

Fundus image quality optimization method based on deep learning

Info

Publication number
WO2023240674A1
WO2023240674A1 (PCT/CN2022/100938, CN2022100938W)
Authority
WO
WIPO (PCT)
Prior art keywords
fundus image
convolution
center
fundus
circular array
Prior art date
Application number
PCT/CN2022/100938
Other languages
French (fr)
Chinese (zh)
Inventor
高荣玉
张杰
Original Assignee
潍坊眼科医院有限责任公司 (Weifang Eye Hospital Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 潍坊眼科医院有限责任公司 (Weifang Eye Hospital Co., Ltd.)
Publication of WO2023240674A1 publication Critical patent/WO2023240674A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations
    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Definitions

  • the present invention relates to the technical field of fundus image analysis, and in particular to a fundus image quality optimization method based on deep learning.
  • Fundus images are an important reference for examining diseases of the vitreous body, retina, choroid and optic nerve. Many systemic diseases such as hypertension and diabetes can cause fundus lesions, so fundus images are an important diagnostic data;
  • the present invention provides a fundus image quality optimization method based on deep learning that can quickly identify fundus images, accurately locate fundus disease lesions, and optimize the processing process of fundus images.
  • a fundus image quality optimization method based on deep learning of the present invention includes the following steps:
  • Step 1 Obtain the fundus image and perform sharpening and grayscale processing on the fundus image
  • Step 2 Determine the center of the circle on the grayscale-processed image in step 1, and segment concentric rings based on the determined center to obtain several groups of concentric and equidistant rings; and divide each group of concentric rings evenly along the radial direction into several groups of pixel blocks;
  • Step 3 Calculate the average grayscale value of each group of pixel blocks in step 2, and perform grayscale assignment on this pixel block to obtain a circular array;
  • Step 4 Perform convolution calculation on the circular array in step 3, and obtain the corresponding lesion area image based on the convolution calculation result.
  • in step 2 the circle center of the grayscale-processed fundus image is determined through MATLAB.
  • the circle center positioning procedure includes the following steps:
  • center_x = min(x)+(max(x)-min(x))/2;
  • center_y = min(y)+(max(y)-min(y))/2;
  • center = [center_x,center_y]; % this is the center coordinate of the fundus image
  • fundus image formats include BMP, GIF, HDF, JPEG, PCX, PNG, TIFF and XWD.
  • in step 3 the average gray value of each group of pixel blocks is calculated through MATLAB.
  • the calculation program includes the following steps:
  • I2 = rgb2gray(II2);
  • aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
  • aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
  • p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block.
  • step 4 includes the following steps:
  • the preliminary screening convolution kernel adopts a sector-shaped structure with the same radius as the circular array.
  • the center of the preliminary screening convolution kernel coincides with the center of the circular array and the kernel rotates clockwise; each rotation angle is the arc angle corresponding to one pixel block, and each rotation yields a set of convolution results, until the circular array has been traversed; the resulting single-ring array is the primary convolution result.
  • the positioning convolution kernel adopts a partial ring structure with the same radius as the circular array.
  • the center of the positioning convolution kernel coincides with the center of the circle where the abnormal block is located and the kernel rotates clockwise; the pixel blocks within the same annulus of the abnormal block are convolved one by one until all annuli of the same abnormal block have been calculated, which yields the second-level convolution result.
  • the annulus width in step 2 can be adjusted according to different fundus lesions. The smaller the lesion range, the narrower the annulus width.
  • a fundus image template is preset based on the starting position relative to the optic papilla during convolution calculation.
  • the angle of the fundus image sample is adjusted based on the fundus image template until the sample optic papilla coincides with the template optic papilla.
  • the beneficial effects of the present invention are: because the edge of a fundus image is circular, the traditional matrix convolution calculation is redesigned as a circular array, and a sector-shaped convolution kernel and a partial annular convolution kernel are used for the convolution calculation, which locates fundus lesions more accurately; at the same time, the lateral and vertical movements of the convolution kernel in matrix convolution are replaced by rotational movement, which improves convolution efficiency and therefore the speed of locating and extracting the lesion area of the fundus image; the second-order convolution calculation is performed with the preliminary screening convolution kernel and the positioning convolution kernel.
  • this method can quickly identify fundus images, accurately locate fundus disease lesions, and optimize the processing of fundus images.
  • Figure 1 is the logic flow chart of the first-level convolution operation;
  • Figure 2 is the logic flow chart of the second-level convolution operation.
  • the fundus image quality optimization method based on deep learning of the present invention includes the following steps:
  • Step 1 Obtain the fundus image and perform sharpening and grayscale processing on the fundus image
  • Step 2 Use MATLAB to confirm the center of the circle on the grayscale processed fundus image.
  • the circle center positioning procedure includes the following steps:
  • center_x = min(x)+(max(x)-min(x))/2;
  • center_y = min(y)+(max(y)-min(y))/2;
  • center = [center_x,center_y]; % this is the center coordinate of the fundus image
  • the fundus image format is BMP, GIF, HDF, JPEG, PCX, PNG, TIFF or XWD; concentric rings are segmented based on the determined center to obtain several groups of concentric and equidistant rings, and each group of rings is evenly divided along the radial direction into several groups of pixel blocks, as shown in the fundus image in Figure 1;
  • Step 3 Calculate the average gray value of each group of pixel blocks through MATLAB.
  • the calculation procedure includes the following steps:
  • I2 = rgb2gray(II2);
  • aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
  • aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
  • p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block;
  • Step 4 Perform the convolution calculation on the circular array from step 3 through the following steps:
  • the preliminary screening convolution kernel is used to traverse and compute the circular array from S1.
  • the preliminary screening convolution kernel adopts a sector-shaped structure with the same radius as the circular array.
  • during the traversal calculation, the center of the preliminary screening convolution kernel coincides with the center of the circular array and the kernel rotates clockwise; each rotation angle is the arc angle corresponding to one pixel block, and a set of convolution results is obtained with each rotation until the circular array has been traversed; the resulting single-ring array is the primary convolution result;
  • during the traversal calculation, the center of the positioning convolution kernel coincides with the center of the circle where the abnormal block is located and the kernel rotates clockwise; the pixel blocks within the same annulus of the abnormal block are convolved one by one until all annuli of the same abnormal block have been calculated, which yields the second-level convolution result;
  • because the edge of a fundus image is circular, the traditional matrix convolution calculation is redesigned as a circular array, and a sector-shaped convolution kernel and a partial annular convolution kernel are used to perform the convolution calculation;
  • the lateral and vertical movements of the convolution kernel in matrix convolution are replaced by rotational movement, which improves convolution efficiency and therefore the speed of locating and extracting the lesion area of the fundus image; the second-order convolution calculation is performed with the preliminary screening convolution kernel and the positioning convolution kernel.
  • the preliminary screening convolution kernel and the positioning convolution kernel in this method are obtained through deep learning on a large number of normal fundus images; with these arrangements, the method can quickly identify fundus images, accurately locate fundus disease lesions, and optimize the processing of fundus images.
  • a fundus image quality optimization method based on deep learning of the present invention includes the following steps:
  • Step 1 Obtain the fundus image and perform sharpening and grayscale processing on the fundus image
  • Step 2 Preset a fundus image template according to the starting position of the convolution calculation relative to the optic papilla; use the fundus image template to adjust the angle of the fundus image obtained in step 1 until the sample optic papilla coincides with the template optic papilla, and then determine the center of the circle of the grayscale-processed fundus image through MATLAB.
  • the circle center positioning procedure includes the following steps:
  • center_x = min(x)+(max(x)-min(x))/2;
  • center_y = min(y)+(max(y)-min(y))/2;
  • center = [center_x,center_y]; % this is the center coordinate of the fundus image
  • the fundus image format is BMP, GIF, HDF, JPEG, PCX, PNG, TIFF or XWD; concentric rings are segmented based on the determined center to obtain several groups of concentric and equidistant rings, and each group of rings is evenly divided along the radial direction into several groups of pixel blocks;
  • the width of the concentric, equidistant rings and the arc angle of the pixel blocks can be adjusted according to different fundus lesions: the smaller the lesion range, the narrower the ring and the smaller the arc angle of the pixel block; conversely, the larger the lesion range, the wider the ring and the larger the arc angle of the pixel block;
  • Step 3 Calculate the average gray value of each group of pixel blocks through MATLAB.
  • the calculation procedure includes the following steps:
  • I2 = rgb2gray(II2);
  • aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
  • aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
  • p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block;
  • Step 4 Perform the convolution calculation on the circular array from step 3 through the following steps:
  • the preliminary screening convolution kernel is used to traverse and compute the circular array from S1.
  • the preliminary screening convolution kernel adopts a sector-shaped structure with the same radius as the circular array.
  • during the traversal calculation, the center of the preliminary screening convolution kernel coincides with the center of the circular array and the kernel rotates clockwise; each rotation angle is the arc angle corresponding to one pixel block, and a set of convolution results is obtained with each rotation until the circular array has been traversed; the resulting single-ring array is the primary convolution result;
  • during the traversal calculation, the center of the positioning convolution kernel coincides with the center of the circle where the abnormal block is located and the kernel rotates clockwise; the pixel blocks within the same annulus of the abnormal block are convolved one by one until all annuli of the same abnormal block have been calculated, which yields the second-level convolution result;
  • making the width of the concentric, equidistant rings and the arc angle of the pixel blocks adjustable allows this method to be applied to image positioning for different fundus lesions while maintaining the positioning speed, and improves the flexibility of the method.
  • preliminary angle adjustment of the fundus image can reduce errors in the calculation process and improve the accuracy of the method.

Abstract

The present invention relates to the technical field of fundus image analysis, and in particular, to a fundus image quality optimization method based on deep learning, which can quickly recognize a fundus image, accurately locate the lesion position of a fundus disease, and optimize the processing of the fundus image. The method comprises the following steps: step 1, acquiring the fundus image, and performing sharpening and grayscale processing on the fundus image; step 2, performing circle center determination on the image subjected to the grayscale processing in step 1, and performing concentric ring band segmentation according to the determined circle center, so as to obtain a plurality of groups of concentric equidistant ring bands; and uniformly segmenting each group of concentric ring bands in the radial direction into a plurality of groups of pixel blocks; step 3, calculating an average grayscale value of each group of pixel blocks in step 2, and performing grayscale value assignment on the pixel blocks to obtain a circular numerical matrix; and step 4, performing convolution calculation on the circular numerical matrix in step 3, and obtaining a corresponding lesion area image according to a convolution calculation result.

Description

A fundus image quality optimization method based on deep learning
Technical field
The present invention relates to the technical field of fundus image analysis, and in particular to a fundus image quality optimization method based on deep learning.
Background
Fundus images are an important reference for examining diseases of the vitreous body, retina, choroid and optic nerve. Many systemic diseases, such as hypertension and diabetes, can cause fundus lesions, so fundus images are important diagnostic data.
Existing fundus image processing methods use traditional matrix convolution. Because of the particular shape of a fundus image, both the efficiency and the accuracy of locating lesions are relatively low when matrix convolution is used for recognition.
Summary of the invention
To solve the above technical problems, the present invention provides a fundus image quality optimization method based on deep learning that can quickly identify fundus images, accurately locate fundus disease lesions, and streamline the processing of fundus images.
The fundus image quality optimization method based on deep learning of the present invention includes the following steps:
Step 1. Obtain the fundus image, and perform sharpening and grayscale processing on it.
Step 2. Determine the center of the circle of the grayscale-processed image from step 1, and segment concentric rings around the determined center to obtain several groups of concentric, equidistant rings; then divide each group of concentric rings evenly along the radial direction into several groups of pixel blocks.
Step 3. Calculate the average gray value of each group of pixel blocks from step 2, and assign that gray value to the pixel block, obtaining a circular array.
Step 4. Perform a convolution calculation on the circular array from step 3, and obtain the corresponding lesion area image from the convolution result.
Further, in step 2 the center of the circle of the grayscale-processed fundus image is determined through MATLAB; the center-locating procedure includes the following steps:
B = imread('fundus image'); % read the original image
A = im2bw(B); % binarization
[x,y] = find(A==0); % coordinate set of the edge pixels of the fundus image
center_x = min(x)+(max(x)-min(x))/2;
center_y = min(y)+(max(y)-min(y))/2;
center = [center_x,center_y]; % center coordinates of the fundus image
The fundus image format may be BMP, GIF, HDF, JPEG, PCX, PNG, TIFF or XWD.
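The patent gives no listing for the ring-and-sector segmentation itself, so the following MATLAB lines are a minimal sketch of one way it could be done with the B, x and center variables computed above; numRings, numSectors and the radius estimate are illustrative assumptions, not values taken from the patent.
numRings = 20; numSectors = 36; % illustrative choices, not from the patent
G = im2double(rgb2gray(B)); % grayscale copy of the fundus image B (assumed RGB, as above)
[r, c] = ndgrid(1:size(G,1), 1:size(G,2)); % row and column index of every pixel
radius = hypot(r - center(1), c - center(2)); % distance of every pixel from the center
theta = mod(atan2(c - center(2), r - center(1)), 2*pi); % angular position of every pixel
R = (max(x) - min(x))/2; % fundus radius estimated from the same edge-pixel set
ringIdx = min(max(ceil(radius*numRings/R), 1), numRings); % ring label, 1..numRings
sectorIdx = min(floor(theta*numSectors/(2*pi)) + 1, numSectors); % sector label, 1..numSectors
inside = radius <= R; % pixels that belong to the fundus disc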
Further, in step 3 the average gray value of each group of pixel blocks is calculated through MATLAB; the calculation procedure includes the following steps:
II1 = imread('pixel block'); % read image
II2 = imread('pixel block'); % read image
I1 = rgb2gray(II1);
I2 = rgb2gray(II2);
startX = 350; endX = 400; % set the start and end abscissa of the pixel block
startY = 300; endY = 350; % set the start and end ordinate of the pixel block
aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block
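Building on the labelling sketch above, the following MATLAB fragment (an assumption, not the patent's own listing) averages every (ring, sector) block and assigns the value back, which plays the role of the circular array described in step 3.
circArray = zeros(numRings, numSectors); % the circular array of step 3
Gblocks = G; % copy of the grayscale image used for the grayscale assignment
for k = 1:numRings
    for m = 1:numSectors
        blockMask = inside & (ringIdx == k) & (sectorIdx == m); % pixels of this block
        if any(blockMask(:))
            circArray(k, m) = mean(G(blockMask)); % average gray value of the block
            Gblocks(blockMask) = circArray(k, m); % assign the average back to the block
        end
    end
end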
Further, the convolution calculation of step 4 includes the following steps:
S1. Input the circular array produced by the grayscale assignment in step 3.
S2. Use the preliminary screening convolution kernel to traverse and compute the circular array from S1, obtaining the primary convolution result.
S3. Map the primary convolution result from S2 onto the pre-stored primary convolution result of a normal fundus image and take the difference element by element; extract the circular-array pixel blocks whose absolute difference exceeds a preset threshold, i.e. the abnormal blocks in the fundus image (a MATLAB sketch of this difference-and-threshold step follows this list).
S4. Use the positioning convolution kernel to traverse and compute the abnormal blocks extracted in S3, obtaining the second-level convolution result.
S5. Map the lesion convolution result from S4 onto the pre-stored second-level convolution result of a normal fundus image and take the difference element by element; extract the circular-array region whose absolute difference exceeds a preset threshold, which yields the lesion area image in the fundus image.
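As referenced in S3, the difference-and-threshold step might look like the following MATLAB sketch; primaryResult, normalPrimary and the threshold value are assumed placeholders (the primary convolution result and the stored result for a normal fundus), not quantities defined in the patent.
threshold = 0.05; % illustrative threshold
diffAbs = abs(primaryResult - normalPrimary); % element-wise difference, both 1 x numSectors
abnormalSectors = find(diffAbs > threshold); % sector indices flagged as abnormal
abnormalBlocks = false(numRings, numSectors); % block mask handed to the second stage
abnormalBlocks(:, abnormalSectors) = true;
The same comparison applies in S5, with the second-level results of the sample and of the normal fundus in place of the primary ones.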
Further, the preliminary screening convolution kernel adopts a sector-shaped structure with the same radius as the circular array. During the traversal calculation, the center of the preliminary screening convolution kernel coincides with the center of the circular array and the kernel rotates clockwise; each rotation angle equals the arc angle of one pixel block, and each rotation yields one set of convolution results, until the circular array has been traversed. The resulting single-ring array is the primary convolution result.
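A minimal MATLAB sketch of this rotating sector-kernel traversal is given below; the kernel weights are placeholders (the patent obtains its kernels by deep learning on normal fundus images), and circArray is the numRings x numSectors block matrix from the earlier sketch.
sectorKernel = ones(numRings, 1)/numRings; % placeholder weights for the learned kernel
primaryResult = zeros(1, numSectors); % the single-ring array
for m = 1:numSectors % one loop step = one clockwise rotation by one pixel-block arc
    primaryResult(m) = sum(sectorKernel .* circArray(:, m)); % convolve one full-radius sector
end
Replacing the row-and-column sliding of an ordinary matrix convolution with this single angular loop is what the patent describes as rotational movement of the kernel, and the resulting primaryResult vector is the kind of single-ring array compared against the stored normal result in the threshold sketch above.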
Further, the positioning convolution kernel adopts a partial annulus structure with the same radius as the circular array. During the traversal calculation, the center of the positioning convolution kernel coincides with the center of the circle where the abnormal block is located and the kernel rotates clockwise; the pixel blocks within the same annulus of the abnormal block are convolved one by one until all annuli of that abnormal block have been calculated, which yields the second-level convolution result.
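The corresponding traversal with the partial-annulus positioning kernel could be sketched as follows; arcLen, the kernel weights and the abnormalBlocks mask are illustrative assumptions rather than values from the patent.
arcLen = 3; % illustrative arc length of the partial-annulus kernel, in pixel blocks
arcKernel = ones(1, arcLen)/arcLen; % placeholder weights for the learned kernel
secondary = nan(numRings, numSectors); % second-level convolution result
for k = 1:numRings
    for m = find(abnormalBlocks(k, :)) % only blocks flagged as abnormal
        cols = mod((m:m+arcLen-1) - 1, numSectors) + 1; % neighbouring blocks, wrapping around the ring
        secondary(k, m) = sum(arcKernel .* circArray(k, cols)); % convolve within this annulus
    end
end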
Further, the annulus width in step 2 can be adjusted for different fundus lesions: the smaller the lesion range, the narrower the annulus.
Further, a fundus image template is preset according to the starting position of the convolution calculation relative to the optic papilla; in step 2 the angle of the fundus image sample is adjusted against this template until the sample optic papilla coincides with the template optic papilla.
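One way to realise this template alignment in MATLAB is sketched below; papillaSample and papillaTemplate are assumed [row, column] papilla coordinates located beforehand, and the rotation is performed about the fundus centre found earlier, so only the papilla direction (not its distance from the centre) is matched.
angSample = atan2(papillaSample(1)-center(1), papillaSample(2)-center(2)); % papilla direction in the sample
angTemplate = atan2(papillaTemplate(1)-center(1), papillaTemplate(2)-center(2)); % papilla direction in the template
phi = angTemplate - angSample; % rotation that aligns the two directions
cx = center(2); cy = center(1); % x = column, y = row for the affine transform
T = affine2d([cos(phi) sin(phi) 0; -sin(phi) cos(phi) 0; ...
    (1-cos(phi))*cx + sin(phi)*cy, -sin(phi)*cx + (1-cos(phi))*cy, 1]); % rotation about (cx, cy)
Baligned = imwarp(B, T, 'OutputView', imref2d([size(B,1) size(B,2)])); % aligned fundus image, same size as B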
Compared with the prior art, the beneficial effects of the present invention are as follows. Because the edge of a fundus image is circular, the traditional matrix convolution calculation is redesigned as a circular array, and a sector-shaped convolution kernel and a partial annular convolution kernel are used for the convolution, which locates fundus lesions more accurately. At the same time, the lateral and vertical movements of the kernel in matrix convolution are replaced by rotational movement, which raises convolution efficiency and therefore the speed of locating and extracting the lesion area of the fundus image. Performing the second-order convolution with the preliminary screening kernel and the positioning kernel is faster than performing a first-order convolution on the circular array with the positioning kernel alone, achieving a "1+1<1" effect in terms of time. With these arrangements, the method can quickly identify fundus images, accurately locate fundus disease lesions, and streamline fundus image processing.
Description of the drawings
Figure 1 is the logic flow chart of the first-level convolution operation;
Figure 2 is the logic flow chart of the second-level convolution operation.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
Example 1
As shown in Figures 1 and 2, the fundus image quality optimization method based on deep learning of the present invention includes the following steps:
Step 1. Obtain the fundus image and perform sharpening and grayscale processing on it.
Step 2. Determine the center of the circle of the grayscale-processed fundus image through MATLAB; the center-locating procedure includes the following steps:
B = imread('fundus image'); % read the original image
A = im2bw(B); % binarization
[x,y] = find(A==0); % coordinate set of the edge pixels of the fundus image
center_x = min(x)+(max(x)-min(x))/2;
center_y = min(y)+(max(y)-min(y))/2;
center = [center_x,center_y]; % center coordinates of the fundus image
The fundus image format may be BMP, GIF, HDF, JPEG, PCX, PNG, TIFF or XWD. Concentric rings are then segmented around the determined center to obtain several groups of concentric, equidistant rings, and each group of rings is divided evenly along the radial direction into several groups of pixel blocks, as shown in the fundus image in Figure 1.
Step 3. Calculate the average gray value of each group of pixel blocks through MATLAB; the calculation procedure includes the following steps:
II1 = imread('pixel block'); % read image
II2 = imread('pixel block'); % read image
I1 = rgb2gray(II1);
I2 = rgb2gray(II2);
startX = 350; endX = 400; % set the start and end abscissa of the pixel block
startY = 300; endY = 350; % set the start and end ordinate of the pixel block
aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block
The average gray value obtained for each pixel block is assigned back onto the corresponding fundus image in Figure 1, producing the circular array.
Step 4. Perform the convolution calculation on the circular array from step 3 through the following steps:
S1. Input the circular array produced by the grayscale assignment in step 3.
S2. Use the preliminary screening convolution kernel to traverse and compute the circular array from S1. The preliminary screening kernel adopts a sector-shaped structure with the same radius as the circular array; during the traversal its center coincides with the center of the circular array and it rotates clockwise, each rotation angle equal to the arc angle of one pixel block. Each rotation yields one set of convolution results, until the circular array has been traversed; the resulting single-ring array is the primary convolution result.
S3. Map the primary convolution result from S2 onto the pre-stored primary convolution result of a normal fundus image and take the difference element by element; extract the circular-array pixel blocks whose absolute difference exceeds a preset threshold, i.e. the abnormal blocks in the fundus image.
S4. Use the positioning convolution kernel to traverse and compute the abnormal blocks extracted in S3. The positioning kernel adopts a partial annulus structure with the same radius as the circular array; during the traversal its center coincides with the center of the circle where the abnormal block is located and it rotates clockwise, convolving the pixel blocks within the same annulus of the abnormal block one by one until all annuli of that abnormal block have been calculated, which yields the second-level convolution result.
S5. Map the lesion convolution result from S4 onto the pre-stored second-level convolution result of a normal fundus image and take the difference element by element; extract the circular-array region whose absolute difference exceeds a preset threshold, which yields the lesion area image in the fundus image.
In this embodiment, because the edge of a fundus image is circular, the traditional matrix convolution calculation is redesigned as a circular array, and a sector-shaped kernel and a partial annular kernel are used for the convolution, which locates fundus lesions more accurately. The lateral and vertical movements of the kernel in matrix convolution are replaced by rotational movement, which raises convolution efficiency and therefore the speed of locating and extracting the lesion area. The second-order convolution with the preliminary screening kernel and the positioning kernel is faster than a first-order convolution on the circular array with the positioning kernel alone, achieving a "1+1<1" effect in terms of time. In this method the preliminary screening kernel and the positioning kernel are obtained by deep learning on a large number of normal fundus images. With these arrangements, the method can quickly identify fundus images, accurately locate fundus disease lesions, and streamline fundus image processing.
Example 2
The fundus image quality optimization method based on deep learning of the present invention includes the following steps:
Step 1. Obtain the fundus image and perform sharpening and grayscale processing on it.
Step 2. Preset a fundus image template according to the starting position of the convolution calculation relative to the optic papilla, and use the template to adjust the angle of the fundus image obtained in step 1 until the sample optic papilla coincides with the template optic papilla; then determine the center of the circle of the grayscale-processed fundus image through MATLAB, the center-locating procedure including the following steps:
B = imread('fundus image'); % read the original image
A = im2bw(B); % binarization
[x,y] = find(A==0); % coordinate set of the edge pixels of the fundus image
center_x = min(x)+(max(x)-min(x))/2;
center_y = min(y)+(max(y)-min(y))/2;
center = [center_x,center_y]; % center coordinates of the fundus image
The fundus image format may be BMP, GIF, HDF, JPEG, PCX, PNG, TIFF or XWD. Concentric rings are then segmented around the determined center to obtain several groups of concentric, equidistant rings, and each group of rings is divided evenly along the radial direction into several groups of pixel blocks. The width of the concentric, equidistant rings and the arc angle of the pixel blocks can be adjusted for different fundus lesions: the smaller the lesion range, the narrower the ring and the smaller the arc angle of the pixel block; conversely, the larger the lesion range, the wider the ring and the larger the arc angle of the pixel block.
Step 3. Calculate the average gray value of each group of pixel blocks through MATLAB; the calculation procedure includes the following steps:
II1 = imread('pixel block'); % read image
II2 = imread('pixel block'); % read image
I1 = rgb2gray(II1);
I2 = rgb2gray(II2);
startX = 350; endX = 400; % set the start and end abscissa of the pixel block
startY = 300; endY = 350; % set the start and end ordinate of the pixel block
aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block
The average gray value obtained for each pixel block is assigned back onto the corresponding fundus image in Figure 1, producing the circular array.
Step 4. Perform the convolution calculation on the circular array from step 3 through the following steps:
S1. Input the circular array produced by the grayscale assignment in step 3.
S2. Use the preliminary screening convolution kernel to traverse and compute the circular array from S1. The preliminary screening kernel adopts a sector-shaped structure with the same radius as the circular array; during the traversal its center coincides with the center of the circular array and it rotates clockwise, each rotation angle equal to the arc angle of one pixel block. Each rotation yields one set of convolution results, until the circular array has been traversed; the resulting single-ring array is the primary convolution result.
S3. Map the primary convolution result from S2 onto the pre-stored primary convolution result of a normal fundus image and take the difference element by element; extract the circular-array pixel blocks whose absolute difference exceeds a preset threshold, i.e. the abnormal blocks in the fundus image.
S4. Use the positioning convolution kernel to traverse and compute the abnormal blocks extracted in S3. The positioning kernel adopts a partial annulus structure with the same radius as the circular array; during the traversal its center coincides with the center of the circle where the abnormal block is located and it rotates clockwise, convolving the pixel blocks within the same annulus of the abnormal block one by one until all annuli of that abnormal block have been calculated, which yields the second-level convolution result.
S5. Map the lesion convolution result from S4 onto the pre-stored second-level convolution result of a normal fundus image and take the difference element by element; extract the circular-array region whose absolute difference exceeds a preset threshold, which yields the lesion area image in the fundus image.
In this embodiment, making the width of the concentric, equidistant rings and the arc angle of the pixel blocks adjustable allows the method to be applied to image positioning for different fundus lesions while maintaining the positioning speed, improving the method's flexibility. At the same time, the preliminary angle adjustment of the fundus image reduces errors during the calculation and improves the accuracy of the method.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the technical principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (8)

  1. A fundus image quality optimization method based on deep learning, characterized by comprising the following steps:
    Step 1: obtaining a fundus image, and performing sharpening and grayscale processing on the fundus image;
    Step 2: determining the center of the circle of the grayscale-processed image from step 1, and segmenting concentric rings based on the determined center to obtain several groups of concentric, equidistant rings; and dividing each group of concentric rings evenly along the radial direction into several groups of pixel blocks;
    Step 3: calculating the average gray value of each group of pixel blocks from step 2, and assigning that gray value to the pixel block to obtain a circular array;
    Step 4: performing a convolution calculation on the circular array from step 3, and obtaining the corresponding lesion area image from the convolution calculation result.
  2. The fundus image quality optimization method based on deep learning according to claim 1, characterized in that in step 2 the center of the circle of the grayscale-processed fundus image is determined through MATLAB, the center-locating procedure comprising the following steps:
    B = imread('fundus image'); % read the original image
    A = im2bw(B); % binarization
    [x,y] = find(A==0); % coordinate set of the edge pixels of the fundus image
    center_x = min(x)+(max(x)-min(x))/2;
    center_y = min(y)+(max(y)-min(y))/2;
    center = [center_x,center_y]; % center coordinates of the fundus image
    wherein the fundus image format is BMP, GIF, HDF, JPEG, PCX, PNG, TIFF or XWD.
  3. The fundus image quality optimization method based on deep learning according to claim 1, characterized in that in step 3 the average gray value of each group of pixel blocks is calculated through MATLAB, the calculation procedure comprising the following steps:
    II1 = imread('pixel block'); % read image
    II2 = imread('pixel block'); % read image
    I1 = rgb2gray(II1);
    I2 = rgb2gray(II2);
    startX = 350; endX = 400; % set the start and end abscissa of the pixel block
    startY = 300; endY = 350; % set the start and end ordinate of the pixel block
    aver1 = mean(mean(I1(startY:endY,startX:endX))) % average gray value of the selected pixel block
    aver2 = mean(mean(I2(startY:endY,startX:endX))) % average gray value of the pixel block selected at 90 degrees
    p1 = (aver1-aver2)/(aver1+aver2) % this is the average gray value of the pixel block
  4. The fundus image quality optimization method based on deep learning according to claim 1, characterized in that the convolution calculation of step 4 comprises the following steps:
    S1: inputting the circular array produced by the grayscale assignment in step 3;
    S2: using the preliminary screening convolution kernel to traverse and compute the circular array from S1 to obtain the primary convolution result;
    S3: mapping the primary convolution result from S2 onto the pre-stored primary convolution result of a normal fundus image and taking the difference element by element, and extracting the circular-array pixel blocks whose absolute difference exceeds a preset threshold, i.e. the abnormal blocks in the fundus image;
    S4: using the positioning convolution kernel to traverse and compute the abnormal blocks extracted in S3 to obtain the second-level convolution result;
    S5: mapping the lesion convolution result from S4 onto the pre-stored second-level convolution result of a normal fundus image and taking the difference element by element, and extracting the circular-array region whose absolute difference exceeds a preset threshold, thereby obtaining the lesion area image in the fundus image.
  5. The fundus image quality optimization method based on deep learning according to claim 4, characterized in that the preliminary screening convolution kernel adopts a sector-shaped structure with the same radius as the circular array; during the traversal calculation the center of the preliminary screening convolution kernel coincides with the center of the circular array and the kernel rotates clockwise, each rotation angle being the arc angle corresponding to one pixel block; a set of convolution results is obtained with each rotation until the circular array has been traversed, and the resulting single-ring array is the primary convolution result.
  6. The fundus image quality optimization method based on deep learning according to claim 5, characterized in that the positioning convolution kernel adopts a partial annulus structure with the same radius as the circular array; during the traversal calculation the center of the positioning convolution kernel coincides with the center of the circle where the abnormal block is located and the kernel rotates clockwise, convolving the pixel blocks within the same annulus of the abnormal block one by one until all annuli of the same abnormal block have been calculated, which yields the second-level convolution result.
  7. The fundus image quality optimization method based on deep learning according to claim 1, characterized in that the annulus width in step 2 can be adjusted for different fundus lesions: the smaller the lesion range, the narrower the annulus width.
  8. The fundus image quality optimization method based on deep learning according to claim 1, characterized in that a fundus image template is preset according to the starting position of the convolution calculation relative to the optic papilla, and in step 2 the angle of the fundus image sample is adjusted against the fundus image template until the sample optic papilla coincides with the template optic papilla.
PCT/CN2022/100938 2022-06-15 2022-06-24 Fundus image quality optimization method based on deep learning WO2023240674A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210670669.6 2022-06-15
CN202210670669.6A CN115018799B (en) 2022-06-15 2022-06-15 Fundus image quality optimization method based on deep learning

Publications (1)

Publication Number Publication Date
WO2023240674A1 true WO2023240674A1 (en) 2023-12-21

Family

ID=83075087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100938 WO2023240674A1 (en) 2022-06-15 2022-06-24 Fundus image quality optimization method based on deep learning

Country Status (2)

Country Link
CN (1) CN115018799B (en)
WO (1) WO2023240674A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009124679A1 (en) * 2008-04-09 2009-10-15 Carl Zeiss Meditec Ag Method for the automatised detection and segmentation of papilla in fundus images
US20160364611A1 (en) * 2015-06-15 2016-12-15 Morpho Method for Identifying and/or Authenticating an Individual by Iris Recognition
CN106408564A (en) * 2016-10-10 2017-02-15 北京新皓然软件技术有限责任公司 Depth-learning-based eye-fundus image processing method, device and system
JP2018121885A (en) * 2017-01-31 2018-08-09 株式会社ニデック Image processing device, image processing system, and image processing program
CN111127425A (en) * 2019-12-23 2020-05-08 北京至真互联网技术有限公司 Target detection positioning method and device based on retina fundus image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018215855A1 (en) * 2017-05-23 2018-11-29 Indian Institute Of Science Automated fundus image processing techniques for glaucoma prescreening
WO2019230643A1 (en) * 2018-05-31 2019-12-05 キヤノン株式会社 Information processing device, information processing method, and program
CN114240823A (en) * 2021-10-29 2022-03-25 深圳莫廷医疗科技有限公司 Real-time tear film break-up detection method, computer-readable storage medium, and apparatus


Also Published As

Publication number Publication date
CN115018799A (en) 2022-09-06
CN115018799B (en) 2022-11-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946353

Country of ref document: EP

Kind code of ref document: A1