WO2021169128A1 - Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium - Google Patents

Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium

Info

Publication number
WO2021169128A1
WO2021169128A1 (PCT/CN2020/099538, CN2020099538W)
Authority
WO
WIPO (PCT)
Prior art keywords
blood vessel
feature map
segmentation
arteriovenous
image
Prior art date
Application number
PCT/CN2020/099538
Other languages
French (fr)
Chinese (zh)
Inventor
柳杨
王瑞
王立龙
吕彬
吕传峰
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021169128A1

Classifications

    • G06T 7/0012: Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • A61B 3/12: Objective-type instruments for examining the eye fundus, e.g. ophthalmoscopes (under A61B 3/00 Apparatus for testing the eyes; A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions)
    • G06N 3/045: Combinations of networks (under G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume (under G06T 7/60 Analysis of geometric attributes)
    • G06T 2207/20104: Interactive definition of region of interest [ROI] (under G06T 2207/20092 Interactive image processing based on input by user)
    • G06T 2207/30041: Eye; Retina; Ophthalmic (under G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular (under G06T 2207/30004 Biomedical image processing)

Abstract

A method and apparatus for recognizing and quantifying fundus retinal vessels, and a device and a storage medium, applicable to the field of precision medicine. The method comprises: inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps; performing optic disc segmentation based on the target feature maps; segmenting the original fundus image to obtain an artery-vein recognition result; locating a region of interest based on the optic disc segmentation result; extracting a vessel centerline from the artery-vein recognition result, detecting key points on the vessel centerline, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the artery/vein category information of each vessel segment; and obtaining, based on the extracted vessel centerline, the vessel diameter of each vessel segment after category correction, and then quantifying the arteries and veins in the region of interest. The method helps improve the recognition accuracy of fundus retinal arteries and veins, thereby improving quantification accuracy.

Description

Method, Apparatus, Device, and Storage Medium for Recognizing and Quantifying Fundus Retinal Blood Vessels
This application claims priority to Chinese patent application No. 202010134390.7, filed with the Chinese Patent Office on February 29, 2020 and entitled "Method, Apparatus, Device, and Storage Medium for Recognizing and Quantifying Fundus Retinal Blood Vessels", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer vision, and in particular to a method, apparatus, device, and storage medium for recognizing and quantifying fundus retinal blood vessels.
Background
Fundus retinal arteries and veins have long been a focus of medical research, in particular the vessels within 1 pd to 1.5 pd (papillary diameter, the diameter of the optic disc) of the optic disc center: changes in the artery-to-vein diameter ratio or in vessel morphology serve as a basis for the early diagnosis of many systemic and hematological diseases, such as cardiovascular disease, diabetes, and hypertension. Computing the diameter ratio of retinal arteries and veins requires accurate classification of the vessels. In traditional fundus diagnosis, a doctor examines the fundus image and reaches a conclusion based on personal medical experience; this approach is time-consuming and labor-intensive, and the recognition and classification of arteries and veins is strongly affected by subjectivity. With the development of computer image processing, color fundus photographs are now widely used for retinal vessel extraction. However, color fundus photographs exhibit uneven brightness, complex interleaving of vessel and background colors, and small differences between arteries and veins, all of which complicate artery-vein recognition and classification. Existing research on automatic artery-vein recognition mainly achieves local recognition based on color or partial vessel structure, but the inventors realized that its recognition accuracy remains limited, which in turn leads to low quantification accuracy.
Summary
The embodiments of this application provide a method, apparatus, device, and storage medium for recognizing and quantifying fundus retinal blood vessels, which help improve the recognition accuracy of fundus retinal arteries and veins and thereby improve quantification accuracy.
In a first aspect, the embodiments of this application provide a method for recognizing and quantifying fundus retinal blood vessels, the method including:
inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales;
performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
segmenting the original fundus image using a pre-trained cascaded segmentation network model to obtain an artery-vein recognition result;
locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest localization result;
extracting a vessel centerline from the artery-vein recognition result, detecting key points on the vessel centerline using a neighborhood-connectivity criterion, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the artery/vein category information of each vessel segment to obtain the vessel segments with corrected artery/vein category information; and
based on the vessel centerlines of the vessel segments with corrected artery/vein category information, obtaining the vessel diameter of each such segment by boundary detection, and calculating the diameter ratio of arteries to veins within the region of interest from the vessel diameters.
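The neighborhood-connectivity step above can be illustrated with a minimal sketch. Assumptions not fixed by the text: the centerline is a 1-pixel-wide skeleton stored as a set of (row, col) pixels, 4-connectivity is used for simplicity (8-connectivity is also common but needs extra care near junctions), and a pixel with three or more connected skeleton neighbors is treated as a key point (branch/crossing); removing those pixels splits the skeleton into independent segments:

```python
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-neighborhood offsets


def find_key_points(skel):
    """Pixels of a 1-px centerline skeleton with >= 3 connected neighbors
    (branch or crossing points)."""
    return {(r, c) for (r, c) in skel
            if sum((r + dr, c + dc) in skel for dr, dc in N4) >= 3}


def split_segments(skel, key_points):
    """Remove key points, then gather connected components as independent
    vessel segments."""
    remaining = set(skel) - key_points
    segments, seen = [], set()
    for start in remaining:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # flood fill one connected component
            r, c = stack.pop()
            if (r, c) in comp:
                continue
            comp.add((r, c))
            stack.extend((r + dr, c + dc) for dr, dc in N4
                         if (r + dr, c + dc) in remaining)
        seen |= comp
        segments.append(comp)
    return segments
```

On a plus-shaped skeleton, the center pixel has four neighbors and is detected as the only key point; removing it yields four independent arm segments, each of which can then carry a single artery/vein label for correction.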
In a second aspect, the embodiments of this application provide an apparatus for recognizing and quantifying fundus retinal blood vessels, the apparatus including:
a feature extraction module, configured to input an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales;
an optic disc segmentation module, configured to perform optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
a vessel recognition module, configured to segment the original fundus image using a pre-trained cascaded segmentation network model to obtain an artery-vein recognition result;
a region localization module, configured to locate a region of interest based on the optic disc segmentation result to obtain a region-of-interest localization result;
a centerline extraction module, configured to extract a vessel centerline from the artery-vein recognition result, detect key points on the vessel centerline using a neighborhood-connectivity criterion, remove the key points to obtain a plurality of mutually independent vessel segments, and correct the artery/vein category information of each vessel segment to obtain the vessel segments with corrected artery/vein category information; and
a diameter ratio calculation module, configured to obtain, based on the vessel centerlines of the vessel segments with corrected artery/vein category information, the vessel diameter of each such segment by boundary detection, and to calculate the diameter ratio of arteries to veins within the region of interest from the vessel diameters.
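The boundary-detection idea used by the diameter ratio calculation module can be sketched as follows: from a centerline point, march outward along the local normal direction in both senses until leaving the binary vessel mask; the diameter is the sum of the two distances. The marching step size, the normal estimation, any sub-pixel refinement, and the use of simple means in the ratio are all illustrative choices not fixed by the text:

```python
def vessel_diameter(mask, center, normal, step=0.25, max_steps=400):
    """Probe the vessel boundary from a centerline point along +/- normal.

    mask:   set of (row, col) integer pixels belonging to the vessel.
    center: (row, col) centerline point (floats allowed).
    normal: unit vector perpendicular to the local vessel direction.
    """
    def distance_to_edge(sign):
        t = 0.0
        while t < max_steps * step:
            r = center[0] + sign * t * normal[0]
            c = center[1] + sign * t * normal[1]
            if (round(r), round(c)) not in mask:  # stepped outside the vessel
                return t
            t += step
        return t
    return distance_to_edge(+1.0) + distance_to_edge(-1.0)


def artery_vein_ratio(artery_diams, vein_diams):
    """Diameter ratio of arteries to veins in the region of interest
    (simple mean of per-segment diameters; an illustrative aggregation)."""
    return (sum(artery_diams) / len(artery_diams)) / (sum(vein_diams) / len(vein_diams))
```

For a straight horizontal vessel band five pixels thick, probing vertically from a mid-band centerline point recovers a diameter close to 5, and a mean artery diameter of 4 against a mean vein diameter of 5 gives a ratio of 0.8.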
In a third aspect, the embodiments of this application provide an electronic device, the electronic device including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements:
inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales;
performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
segmenting the original fundus image using a pre-trained cascaded segmentation network model to obtain an artery-vein recognition result;
locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest localization result;
extracting a vessel centerline from the artery-vein recognition result, detecting key points on the vessel centerline using a neighborhood-connectivity criterion, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the artery/vein category information of each vessel segment to obtain the vessel segments with corrected artery/vein category information; and
based on the vessel centerlines of the vessel segments with corrected artery/vein category information, obtaining the vessel diameter of each such segment by boundary detection, and calculating the diameter ratio of arteries to veins within the region of interest from the vessel diameters.
In a fourth aspect, the embodiments of this application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
In the embodiments of this application, feature extraction is performed on the original fundus image to obtain target feature maps at multiple scales; optic disc segmentation is performed based on these target feature maps to obtain an optic disc segmentation result; the original fundus image is segmented with a pre-trained cascaded segmentation network model to obtain an artery-vein recognition result; a region of interest is located based on the optic disc segmentation result; a vessel centerline is extracted from the artery-vein recognition result; key points on the vessel centerline are detected using a neighborhood-connectivity criterion and removed to obtain a plurality of mutually independent vessel segments, whose artery/vein category information is then corrected; based on the extracted vessel centerlines, the vessel diameter of each category-corrected segment is obtained by boundary detection; and the diameter ratio of arteries to veins within the region of interest is calculated from these diameters. This improves the recognition accuracy of fundus retinal arteries and veins, thereby improving quantification accuracy.
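The processing chain recapped above can be written as a thin orchestration function. Every callable here is a hypothetical stub standing in for the corresponding trained model or algorithm described in the embodiments, so the sketch shows only the data flow between stages:

```python
def quantify_fundus_image(image, unet, disc_segmenter, av_segmenter,
                          locate_roi, process_centerline, measure_avr):
    """End-to-end flow of the described method; each callable is a pluggable stage."""
    feature_maps = unet(image)             # multi-scale target feature maps
    disc = disc_segmenter(feature_maps)    # optic disc segmentation result
    av_map = av_segmenter(image)           # artery/vein recognition result
    roi = locate_roi(disc)                 # region of interest around the disc
    segments = process_centerline(av_map)  # centerline -> key points -> segments
    return measure_avr(segments, roi)      # diameters -> artery-to-vein ratio
```

Keeping the stages as separate callables mirrors the module decomposition of the second aspect (feature extraction, disc segmentation, vessel recognition, region localization, centerline extraction, diameter ratio calculation).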
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is an application architecture diagram provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a method for recognizing and quantifying fundus retinal blood vessels provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of an optic disc segmentation network provided by an embodiment of this application;
FIG. 4-a is a schematic flowchart of downsampling provided by an embodiment of this application;
FIG. 4-b is an example diagram of the encoder part of a U-shaped convolutional neural network model provided by an embodiment of this application;
FIG. 5-a is a schematic flowchart of upsampling provided by an embodiment of this application;
FIG. 5-b is an example diagram of the decoder part of a U-shaped convolutional neural network model provided by an embodiment of this application;
FIG. 6 is an example diagram of optic disc segmentation provided by an embodiment of this application;
FIG. 7 is an example diagram of a cascaded segmentation network model provided by an embodiment of this application;
FIG. 8 is an example diagram of the expected output region A and the actual output region B of a U-shaped segmentation network provided by an embodiment of this application;
FIG. 9 is a schematic flowchart of artery-vein recognition based on an original fundus image provided by an embodiment of this application;
FIG. 10 is an example diagram of cutting fundus image patches provided by an embodiment of this application;
FIG. 11-a is an example diagram of a region-of-interest localization result provided by an embodiment of this application;
FIG. 11-b is an example diagram of a vessel centerline provided by an embodiment of this application;
FIG. 11-c is an example diagram of key point detection on a vessel centerline provided by an embodiment of this application;
FIG. 12 is a schematic structural diagram of an apparatus for recognizing and quantifying fundus retinal blood vessels provided by an embodiment of this application;
FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
The terms "including" and "having" and any variations thereof in the specification, claims, and drawings of this application are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device. In addition, the terms "first", "second", "third", and so on are used to distinguish different objects rather than to describe a specific order.
The network system architecture to which the solutions of the embodiments of this application may be applied is first introduced with reference to the relevant drawings. Referring to FIG. 1, FIG. 1 is an application architecture diagram provided by an embodiment of this application. As shown in FIG. 1, the architecture includes an electronic device, an image acquisition device, and a database, where the image acquisition device and the database each connect to and communicate with the electronic device over a network. The electronic device includes a processor and may be any device capable of implementing the method for recognizing and quantifying fundus retinal blood vessels provided by the embodiments of this application, for example a supercomputer in a medical laboratory, a computer in a hospital examination room, or a server. The image acquisition device may be any device capable of capturing fundus images, for example a color fundus camera. In one application scenario, in a hospital ophthalmic examination room, after the image acquisition device captures a fundus image of an examinee, it transmits the fundus image to the electronic device over the network; the electronic device executes the method for recognizing and quantifying fundus retinal blood vessels provided by this application to recognize and quantify the fundus image, and outputs the quantification result. The database may be a local database or an external database. A local database is a database owned by an enterprise, hospital, or laboratory itself; an external database is a publicly available fundus image database commonly used at home and abroad, for example the STARE (structured analysis of the retina) database or the DRIVE (digital retinal images for vessel extraction) database. In another application scenario, when laboratory staff need to test the method for recognizing and quantifying fundus retinal blood vessels provided by this application, they can obtain fundus images for testing from the database over the network, and the electronic device performs the test operations. Of course, when the above database is a local database, the image acquisition device may establish a connection with the database and store the captured fundus images in the database.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a method for recognizing and quantifying fundus retinal blood vessels provided by an embodiment of this application. As shown in FIG. 2, the method includes steps S21 to S27:
S21: obtain an original fundus image.
In the specific embodiments of this application, the original fundus image may be a fundus image captured in real time by an image acquisition device at any location, for example fundus images of experimental subjects collected in a medical laboratory or fundus images of patients collected in a hospital examination room. Of course, the original fundus image may also be a fundus image from an open-source database such as DRIVE; this is not specifically limited.
S22: input the original fundus image into the pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales.
In the specific embodiments of this application, FIG. 3 is a schematic structural diagram of an optic disc segmentation network, in which the main part of the network is the U-shaped convolutional neural network model on the left (from the input of the original fundus image to the output of the target feature maps). In addition to the basic input and output layers, the U-shaped convolutional neural network model includes multiple hidden layers arranged in a symmetric structure, which form the encoder part (downsampling path) and the decoder part (upsampling path) of the model.
In an embodiment, inputting the original fundus image into the pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales includes:
A: inputting the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map.
As shown in FIG. 4-a, step A includes:
S41: performing convolution on the original fundus image to extract key features, obtaining a feature map of the same size as the original fundus image;
S42: performing a max pooling operation on the feature map obtained by convolution to reduce the feature map size layer by layer, and obtaining the high-dimensional feature map through alternating convolutional and pooling layers.
In the specific embodiments of this application, key features are features with strong representational power, for example features with large pixel values. As shown in FIG. 4-b, the original fundus image is input into the encoder part of the U-shaped convolutional neural network model and first passes through a convolutional layer (conv), producing feature map C1 in FIG. 3, which after this initial convolution has the same size as the original fundus image. Feature map C1 then passes through a max pooling layer (Max-pooling) for a downsampling max pooling operation that reduces its size to give feature map C2; C2 in turn passes through another convolutional layer and max pooling layer to give feature map C3. Alternating convolutional and max pooling layers in this way yields a low-resolution, high-dimensional feature map, forming a feature pyramid from low to high dimensions. The convolutions in the encoder part may use a 3*3 kernel with a stride of 2, and each max pooling operation shrinks the feature map by a preset factor. For example, with 2x downsampling, if feature map C1 is 48*48, then after one max pooling operation feature map C2 is 24*24, and likewise feature map C3 is 12*12.
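The size bookkeeping in this example (48 to 24 to 12 under 2x downsampling) can be checked with a minimal max-pooling sketch. Pure Python on nested lists is used for illustration; a real implementation would of course use a tensor library:

```python
def max_pool2x2(fmap):
    """2x2 max pooling with stride 2 on a 2-D list; halves each spatial dimension."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, w - 1, 2)]
            for r in range(0, h - 1, 2)]


# Trace the feature-map sizes from the example: C1 (48x48) -> C2 (24x24) -> C3 (12x12).
fmap = [[float(r * 48 + c) for c in range(48)] for r in range(48)]
sizes = [len(fmap)]
for _ in range(2):
    fmap = max_pool2x2(fmap)
    sizes.append(len(fmap))
```

Because max pooling keeps the largest response in each window, strongly activated pixels (the "key features" with large values) survive the downsampling while the spatial resolution halves at every stage.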
B: inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for upsampling, and outputting target feature maps at multiple scales.
As shown in FIG. 5-a, step B includes:
S51: performing an upsampling operation on the high-dimensional feature map to enlarge its size layer by layer;
S52: merging, through skip connection layers, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain an initial feature map for each network layer, where the initial feature maps of the network layers have different scales;
S53: outputting the initial feature map of each network layer through that layer's output branch to obtain target feature maps at multiple scales, where an attention mechanism is added to each output branch.
本申请具体实施例中，如图5-b所示，U型卷积神经网络模型的解码器部分对编码器部分下采样得到的高维特征图进行上采样操作，同样以预设倍数放大高维特征图的尺寸，如图3中的特征图P5经过一次上采样操作其被放大为特征图P4的尺寸，若特征图P5的原有尺寸为12*12，则特征图P4的尺寸为24*24，具体可通过常用的插值方法进行上采样，例如：最近邻插值、双线性插值、均值插值等，具体不作限定。在每次得到尺寸放大的高维特征图后，将同一网络层在编码阶段提取的低维特征图与对应高维特征图进行合并，例如图3中的特征图C2与特征图P2属于同一网络层在编码阶段和解码阶段对称提取的特征图，通过跳跃连接层(skip-connection)将C2与P2进行合并便得到该网络层的初始特征图，由于会有多个网络层的合并操作，因此，将得到多个尺度的初始特征图。此时，我们通过添加有注意力机制的输出分支对每个网络层的初始特征图进行输出，便得到多个尺度的目标特征图，其中，该注意力机制为SE模块，SE模块通过Squeeze操作、Excitation操作、Reweight操作完成在通道维度上对多个尺度的初始特征图的重标定，使输出的目标特征图的效果更好，可以很好地解决视盘边界不明确、视盘区域大小不一的问题。In this embodiment of the present application, as shown in Figure 5-b, the decoder part of the U-shaped convolutional neural network model performs up-sampling operations on the high-dimensional feature maps produced by the encoder's down-sampling, likewise enlarging the size of the high-dimensional feature maps by a preset multiple. For example, feature map P5 in Figure 3 is enlarged to the size of feature map P4 after one up-sampling operation; if the original size of P5 is 12*12, the size of P4 is 24*24. Up-sampling may be performed with common interpolation methods such as nearest-neighbor interpolation, bilinear interpolation, or mean interpolation, without specific limitation. Each time an enlarged high-dimensional feature map is obtained, the low-dimensional feature map extracted by the same network layer in the encoding stage is merged with the corresponding high-dimensional feature map. For example, feature maps C2 and P2 in Figure 3 are the feature maps extracted symmetrically by the same network layer in the encoding and decoding stages; merging C2 and P2 through a skip connection yields the initial feature map of that network layer. Since this merging occurs at multiple network layers, initial feature maps of multiple scales are obtained. The initial feature map of each network layer is then output through an output branch to which an attention mechanism is added, yielding target feature maps of multiple scales. Here the attention mechanism is an SE module, which recalibrates the multi-scale initial feature maps along the channel dimension through Squeeze, Excitation, and Reweight operations, so that the output target feature maps are more effective, which well addresses the problems of unclear optic disc boundaries and varying optic disc region sizes.
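The Squeeze, Excitation, and Reweight operations described above can be sketched in a few lines of pure Python. This is a minimal illustration only: the fully connected weights `w1`/`w2` are hypothetical stand-ins for the SE module's trained parameters, and a real implementation would operate on tensors in a deep learning framework.

```python
import math

def se_recalibrate(feature_map, w1, w2):
    """SE channel recalibration of a [C][H][W] feature map (pure-Python sketch).

    w1: C x M weights of the reduction FC; w2: M x C weights of the
    expansion FC. Both are hypothetical, not the patent's trained values.
    """
    C, M = len(feature_map), len(w2)
    # Squeeze: global average pooling, one scalar per channel
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]
    # Excitation: FC -> ReLU -> FC -> sigmoid gives a weight per channel
    mid = [max(0.0, sum(z[c] * w1[c][m] for c in range(C))) for m in range(M)]
    s = [1.0 / (1.0 + math.exp(-sum(mid[m] * w2[m][c] for m in range(M))))
         for c in range(C)]
    # Reweight: rescale each channel of the skip-merged initial feature map
    return [[[v * s[c] for v in row] for row in feature_map[c]] for c in range(C)]
```

With zero expansion weights the sigmoid outputs 0.5 for every channel, so each channel is simply halved, which makes the recalibration easy to check by hand.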
S23,基于所述目标特征图进行视盘分割，获得视盘分割结果。S23: Perform optic disc segmentation based on the target feature map to obtain an optic disc segmentation result.
本申请具体实施例中，采用两阶段的视盘分割网络进行视盘分割，第一阶段采用U型卷积神经网络模型进行特征提取，第二阶段在第一阶段提取出的特征的基础上进行视盘分割。具体的，所述基于输出的所述目标特征图进行视盘分割，获得视盘分割结果，包括：将输出的所述目标特征图进行融合以得到待分割图像，对所述待分割图像进行候选框回归处理，以从所述待分割图像中定位视盘位置，并输出视盘的边界框信息，根据视盘的边界框信息裁剪出标定的视盘区域图像块输入预训练的U型分割网络中，经过特征提取和上采样操作输出视盘分割结果。In this embodiment of the present application, a two-stage optic disc segmentation network is used for optic disc segmentation: the first stage uses the U-shaped convolutional neural network model for feature extraction, and the second stage performs optic disc segmentation on the basis of the features extracted in the first stage. Specifically, performing optic disc segmentation based on the output target feature maps to obtain an optic disc segmentation result includes: fusing the output target feature maps to obtain an image to be segmented; performing candidate-box regression processing on the image to be segmented so as to locate the optic disc position within it and output the bounding-box information of the optic disc; and cropping out the delineated optic disc region image block according to that bounding-box information and feeding it into a pre-trained U-shaped segmentation network, which outputs the optic disc segmentation result after feature extraction and up-sampling operations.
如图3所示，在U型卷积神经网络模型输出多个尺度的目标特征图后，对其进行融合得到分辨率较佳的待分割图像，待分割图像经过候选框回归模块ROI Align进行特征提取，根据提取出的特征图遍历每一个候选区域，且不对浮点数边界做量化，再如图6所示，将候选区域分割成n*n个矩形单元，且不对每个单元边界做量化，按照固定规则在每个矩形单元中确定四个坐标位置，采用双线性内插的方法计算出这四个位置的值，进行最大池化操作，得到图6右侧的特征图，然后经过Flatten模块进行压平处理，输出视盘的边界框box，根据视盘边界框box裁剪出视盘区域图像块，将裁剪出的视盘区域图像块输入U型分割网络中进行视盘分割，得到最终的视盘分割图像。两阶段的视盘分割模型设计有助于排除拍摄环境或拍摄技术不佳造成的眼底高亮噪声对视盘分割产生的干扰，获取精度更高的视盘分割结果。As shown in Figure 3, after the U-shaped convolutional neural network model outputs target feature maps of multiple scales, they are fused to obtain an image to be segmented with better resolution. The image to be segmented passes through the candidate-box regression module ROI Align for feature extraction: each candidate region is traversed on the extracted feature map without quantizing the floating-point boundaries; then, as shown in Figure 6, each candidate region is divided into n*n rectangular cells, again without quantizing the cell boundaries. Four coordinate positions are determined in each rectangular cell according to a fixed rule, the values at these four positions are computed by bilinear interpolation, and a max-pooling operation is performed to obtain the feature map on the right side of Figure 6. After flattening by the Flatten module, the bounding box of the optic disc is output, the optic disc region image block is cropped out according to this bounding box, and the cropped image block is fed into the U-shaped segmentation network for optic disc segmentation, producing the final optic disc segmentation image. The two-stage optic disc segmentation design helps exclude the interference of fundus highlight noise, caused by the shooting environment or poor shooting technique, with optic disc segmentation, yielding a more accurate optic disc segmentation result.
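The ROI Align sampling just described can be sketched as follows. The placement of the four sample points inside each cell (here at the 0.25/0.75 fractions of the cell) is an assumption for illustration; the patent only states that four positions are chosen in each cell by a fixed rule.

```python
def bilinear(img, y, x):
    """Sample img (H x W list of lists) at fractional (y, x) without quantizing."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    return (img[y0][x0] * (1 - dy) * (1 - dx) + img[y0][x1] * (1 - dy) * dx
            + img[y1][x0] * dy * (1 - dx) + img[y1][x1] * dy * dx)

def roi_align(img, box, n):
    """Split box (y_lo, x_lo, y_hi, x_hi) into n*n cells; in each cell sample
    four fixed points bilinearly and max-pool them, as in Figure 6."""
    y_lo, x_lo, y_hi, x_hi = box
    bh, bw = (y_hi - y_lo) / n, (x_hi - x_lo) / n
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            samples = [bilinear(img, y_lo + (i + fy) * bh, x_lo + (j + fx) * bw)
                       for fy in (0.25, 0.75) for fx in (0.25, 0.75)]
            row.append(max(samples))
        out.append(row)
    return out
```

Because no coordinate is rounded to an integer before sampling, the candidate-box boundaries stay in floating point, which is exactly the property that distinguishes ROI Align from quantizing pooling schemes.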
S24,采用预训练的级联分割网络模型对所述原始眼底图像进行分割，获得动静脉血管识别结果。S24: Use a pre-trained cascaded segmentation network model to segment the original fundus image to obtain an arteriovenous vessel recognition result.
本申请具体实施例中，通过级联分割网络模型进行动静脉血管分割和识别，为了使动静脉血管识别精度更高，如图7所示，此处采用三次U型分割网络级联的方式，分别为第一U型分割网络、第二U型分割网络和第三U型分割网络，每一U型分割网络的结构如图7下侧的放大图所示，左侧为特征提取部分，右侧为上采样部分，左侧采用2*2的最大池化，右侧采用2*2的逆卷积，且两侧均采用了3*3的卷积核进行特征提取，整个U型分割网络没有全连接层，只对必要的卷积层进行了连接。在整个级联分割网络模型的训练过程中，需要考虑每一U型分割网络的损失函数DiceLoss，首先计算Dice值，采用如下公式：In this embodiment of the present application, arteriovenous vessel segmentation and identification are performed through the cascaded segmentation network model. To achieve higher arteriovenous recognition accuracy, as shown in Figure 7, a cascade of three U-shaped segmentation networks is used here: a first, a second, and a third U-shaped segmentation network. The structure of each U-shaped segmentation network is shown in the enlarged view at the bottom of Figure 7: the left side is the feature extraction part and the right side is the up-sampling part; the left side uses 2*2 max pooling, the right side uses 2*2 deconvolution, and both sides use 3*3 convolution kernels for feature extraction. The entire U-shaped segmentation network has no fully connected layers; only the necessary convolutional layers are connected. During training of the whole cascaded segmentation network model, the loss function DiceLoss of each U-shaped segmentation network must be considered. First, the Dice value is computed with the following formula:
Dice = (2|A∩B| + smooth) / (|A| + |B| + smooth)
其中，如图8所示，Dice表示每一U型分割网络期望输出区域A与实际输出区域B之间的重叠度，smooth为平滑系数，默认设为1；根据Dice值计算每一U型分割网络的损失函数，采用公式：L_k=DiceLoss=1-Dice，L_k表示每一U型分割网络的损失函数值，最后，计算每一U型分割网络的损失函数L_k的加权平均和，以得到整个预设级联分割网络模型的损失值，采用如下公式：Here, as shown in Figure 8, Dice denotes the degree of overlap between the expected output region A and the actual output region B of each U-shaped segmentation network, and smooth is a smoothing coefficient set to 1 by default. The loss function of each U-shaped segmentation network is computed from the Dice value as L_k = DiceLoss = 1 - Dice, where L_k denotes the loss value of each U-shaped segmentation network. Finally, the weighted average of the loss functions L_k of the U-shaped segmentation networks is computed to obtain the loss value of the whole preset cascaded segmentation network model, using the following formula:
Loss = (1/K) · Σ_{k=1..K} L_k
其中，Loss表示整个预设级联分割网络模型的损失值，L_k表示每一U型分割网络的损失函数，K表示预设级联分割网络模型中U型分割网络的数量，这里K为3个。此处计算整个预设级联分割网络模型的损失值是为了指导预设级联分割网络模型进行优化训练，获取更准确的动静脉血管识别结果，Loss值越小表示级联分割网络模型的动静脉血管识别结果更精确。Here, Loss denotes the loss value of the whole preset cascaded segmentation network model, L_k denotes the loss function of each U-shaped segmentation network, and K denotes the number of U-shaped segmentation networks in the preset cascaded segmentation network model, which is 3 here. The loss value of the whole model is computed to guide the optimization training of the preset cascaded segmentation network model and obtain more accurate arteriovenous vessel recognition results: the smaller the Loss value, the more accurate the arteriovenous vessel recognition results of the cascaded segmentation network model.
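The per-stage DiceLoss and the model-level loss above can be written directly. This sketch treats the "weighted average" as a uniform average over the K = 3 stages, which is what the formula as stated reduces to when the weights are equal.

```python
def dice_loss(pred, target, smooth=1.0):
    """DiceLoss = 1 - (2|A∩B| + smooth) / (|A| + |B| + smooth) on flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + smooth) / (sum(pred) + sum(target) + smooth)

def cascade_loss(losses):
    """Average of the per-stage DiceLoss values L_k over the K cascaded U-Nets."""
    return sum(losses) / len(losses)
```

With the smoothing term, a perfect prediction still yields exactly zero loss, while disjoint masks approach a loss of 1 as the masks grow.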
在一实施例中,如图9所示,所述采用预训练的级联分割网络模型对所述原始眼底图像进行分割,获得动静脉血管识别结果,包括步骤S91-S93:In one embodiment, as shown in FIG. 9, the segmentation of the original fundus image using the pre-trained cascaded segmentation network model to obtain the arteriovenous vessel recognition result includes steps S91-S93:
S91,提取所述原始眼底图像的绿色通道图像,对该绿色通道图像做直方图均衡化处理,得到对比度增强的绿色通道图像;S91, extracting the green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a green channel image with enhanced contrast;
S92,将该对比度增强的绿色通道图像切成多个眼底图像块;S92: Cut the contrast-enhanced green channel image into multiple fundus image blocks;
S93,将该多个眼底图像块输入预设级联分割网络模型进行分割,获取动静脉血管识别结果。S93: Input the multiple fundus image blocks into a preset cascade segmentation network model for segmentation, and obtain an arteriovenous blood vessel recognition result.
针对获取到的RGB格式的原始眼底图像，首先选取血管结构明显的绿色通道图像做直方图均衡化处理，增强对比度，之后如图10所示，采用切patch操作将对比度增强的绿色通道图切成Patch1、Patch2两个眼底图像块，然后将Patch1、Patch2两个眼底图像块输入预设级联分割网络模型进行分割，当然，实际操作中切出的眼底图像块数量要大得多，例如还有Patch3、Patch4、Patch5等等，每个切出的眼底图像块可以有重叠的部分。将切好的眼底图像块输入第一U型分割网络进行特征提取和上采样得到第一输出结果，将第一输出结果作为第二U型分割网络的输入，进行特征提取和上采样得到第二输出结果，将第二输出结果作为第三U型分割网络的输入，进行特征提取和上采样得到动静脉血管识别结果，该动静脉血管识别结果包括有动静脉血管识别的类别信息，例如：动脉血管、静脉血管标签等。For the acquired original fundus image in RGB format, the green channel image, in which the vascular structure is most evident, is first selected for histogram equalization to enhance contrast. Then, as shown in Figure 10, a patch-cutting operation cuts the contrast-enhanced green channel image into two fundus image blocks, Patch1 and Patch2, which are input into the preset cascaded segmentation network model for segmentation. Of course, in practice the number of fundus image blocks cut out is much larger (Patch3, Patch4, Patch5, and so on), and the cut-out blocks may overlap. The cut fundus image blocks are input into the first U-shaped segmentation network for feature extraction and up-sampling to obtain a first output result; the first output result serves as the input of the second U-shaped segmentation network, whose feature extraction and up-sampling yield a second output result; the second output result serves as the input of the third U-shaped segmentation network, whose feature extraction and up-sampling yield the arteriovenous vessel recognition result. This result includes arteriovenous category information, for example artery and vein labels.
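The preprocessing of steps S91-S92 can be sketched as follows. The sliding-window layout in `cut_patches` is an assumption: the patent states only that the image is cut into possibly overlapping blocks, not how the stride is chosen.

```python
def green_channel(rgb):
    """Keep only the G value of each (R, G, B) pixel."""
    return [[px[1] for px in row] for row in rgb]

def equalize(gray_flat, levels=256):
    """Histogram equalization over a flat list of integer intensities."""
    hist = [0] * levels
    for v in gray_flat:
        hist[v] += 1
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc)
    total = len(gray_flat)
    cdf_min = next(c for c in cdf if c > 0)       # first non-empty bin
    return [round((cdf[v] - cdf_min) / max(total - cdf_min, 1) * (levels - 1))
            for v in gray_flat]

def cut_patches(img, size, stride):
    """Slide a size x size window; with stride < size the patches overlap."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]
```

With stride equal to the patch size the blocks tile the image; halving the stride produces the overlapping patches mentioned above.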
S25,基于视盘分割结果进行感兴趣区域定位，获得感兴趣区域定位结果。S25: Locate the region of interest based on the optic disc segmentation result to obtain a region-of-interest localization result.
本申请具体实施例中，感兴趣区域(region of interest，ROI)，即机器视觉、图像处理中，从被处理的图像以方框、圆、椭圆、不规则多边形等方式勾勒出需要处理的区域，此处特指距离视盘中心1pd-1.5pd(Papillary Diameter即视盘直径)范围内的眼底区域。在视盘分割结果的基础上，对视盘边界进行椭圆拟合，具体可采用最小二乘法进行椭圆拟合，然后确定视盘的中心，以视盘中心为圆心，将1pd-1.5pd范围内的眼底区域定位为感兴趣区域，得到感兴趣区域定位结果，作为后续管径测量的候选区域，如图11-a所示。In this embodiment of the present application, a region of interest (ROI) is, in machine vision and image processing, an area outlined for processing from the image being processed with a box, circle, ellipse, irregular polygon, or the like; here it specifically refers to the fundus area within 1pd-1.5pd (pd: papillary diameter, i.e., optic disc diameter) of the optic disc center. On the basis of the optic disc segmentation result, an ellipse is fitted to the optic disc boundary, for example by the least squares method, and the center of the optic disc is then determined. Taking the optic disc center as the origin, the fundus area within the 1pd-1.5pd range is located as the region of interest, and the region-of-interest localization result serves as the candidate region for subsequent vessel diameter measurement, as shown in Figure 11-a.
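Once the fit yields the disc centre and a papillary diameter pd, the 1pd-1.5pd annulus can be marked as a binary mask. This is a simplified circular sketch: a real pipeline would take the centre and pd from the least-squares ellipse fit of the disc boundary rather than from given arguments.

```python
def roi_mask(h, w, cx, cy, pd):
    """Mark pixels whose distance from the disc centre (cx, cy) lies in [1pd, 1.5pd]."""
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if pd <= d <= 1.5 * pd:
                mask[y][x] = 1
    return mask
```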
S26,根据动静脉血管识别结果提取血管中心线，采用邻域连通性判定方法检测血管中心线中的关键点，去除所述关键点以得到多个相互独立的血管段，并对各血管段上的动静脉类别信息进行修正，得到动静脉类别信息修正后的各血管段。S26: Extract the vessel centerline according to the arteriovenous vessel recognition result, detect the key points in the vessel centerline with a neighborhood connectivity determination method, remove the key points to obtain a plurality of mutually independent vessel segments, and correct the arteriovenous category information on each vessel segment to obtain vessel segments with corrected arteriovenous category information.
本申请具体实施例中，首先对动静脉血管识别结果进行二值化生成眼底血管二值化图，将该眼底血管二值化图输入U型分割网络中进行血管中心线提取，输出细化后的血管中心线图，如图11-b所示。然后采用邻域连通性判定方法检测血管中心线中的各关键点，例如：分支点和交叉点，邻域连通性判定方法可以是基于8邻域连通的判定方法，用N8(p)表示像素p的8邻域，对于具有像素值x的像素p和像素q，若q在N8(p)这个集合中，则判定像素p和像素q是8连通的，可判定出如图11-c所示的交叉点、分支点，在检测出交叉点和分支点后，将检测出的交叉点和分支点从血管中心线中去除，以根据血管中心线分离出各血管段，且各血管段相互独立。对各血管段进行连通性判定，在确定各血管段具有连通性后对各血管段上的动静脉类别信息进行统计，之后采用投票表决法对各血管段上的动静脉类别信息进行修正，得到动静脉类别信息修正后的各血管段。例如，一血管段上标有动脉信息标签的像素占多数，则将其它标有静脉信息的像素同样标为动脉信息，以保证同一血管段上的像素点类别信息一致。基于深度学习的血管中心线提取方法相较于传统的形态学细化操作，避免了人工设计复杂的细化规则，同时减少了血管中心线中的假阳性分支，能够获得更精确的中心线提取结果。In this embodiment of the present application, the arteriovenous vessel recognition result is first binarized to generate a fundus vessel binary map, which is input into a U-shaped segmentation network for vessel centerline extraction, outputting a thinned vessel centerline map as shown in Figure 11-b. A neighborhood connectivity determination method is then used to detect the key points in the vessel centerline, such as branch points and crossing points. The method may be based on 8-neighborhood connectivity: with N8(p) denoting the 8-neighborhood of pixel p, for pixels p and q with pixel value x, if q is in the set N8(p), then pixels p and q are determined to be 8-connected, which allows the crossing points and branch points shown in Figure 11-c to be determined. After the crossing and branch points are detected, they are removed from the vessel centerline so that the individual vessel segments are separated along the centerline, each segment being mutually independent. Connectivity is then determined for each vessel segment; once a segment's connectivity is confirmed, the arteriovenous category information on the segment is tallied, and a majority-voting method corrects the arteriovenous category information on each segment, yielding vessel segments with corrected arteriovenous category information. For example, if pixels labeled as artery form the majority on a vessel segment, the other pixels labeled as vein are relabeled as artery, ensuring consistent pixel category information along the same segment. Compared with the traditional morphological thinning operation, the deep-learning-based vessel centerline extraction method avoids manually designing complex thinning rules and reduces false-positive branches in the centerline, achieving a more accurate centerline extraction result.
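The 8-neighborhood key-point test and the majority-vote correction can be sketched as follows. The criterion that a centerline pixel with three or more on-line neighbours is a branch/crossing point is a common skeleton heuristic, assumed here because the patent does not spell out the exact rule.

```python
def n8(p):
    """The 8-neighborhood N8(p) of a pixel p = (y, x)."""
    y, x = p
    return [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)]

def key_points(centerline):
    """Centerline pixels with 3+ neighbours on the line: branch/crossing points."""
    pts = set(centerline)
    return {p for p in pts if sum(q in pts for q in n8(p)) >= 3}

def majority_vote(labels):
    """Relabel every pixel of one segment with the segment's majority A/V label."""
    win = max(set(labels), key=labels.count)
    return [win] * len(labels)
```

Removing `key_points(centerline)` from the centerline set then leaves the mutually independent vessel segments described above.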
S27,基于动静脉类别信息修正后的各血管段的血管中心线，采用边界探测的方法获取动静脉类别信息修正后的各血管段的血管直径，根据所述血管直径计算感兴趣区域内动脉血管和静脉血管的直径比值。S27: On the basis of the vessel centerlines of the vessel segments with corrected arteriovenous category information, obtain the vessel diameter of each such segment by a boundary detection method, and compute the diameter ratio of arteries to veins within the region of interest from these vessel diameters.
本申请具体实施例中，在步骤S26提取出血管中心线的基础上，采用边界探测的方法计算感兴趣区域内动静脉类别信息修正后的各血管段的血管直径，具体的，以血管中心点为圆心，在像素范围40*40大小的矩形区域内进行遍历，寻找距离动静脉类别信息修正后的各血管段中心线距离最近的边界点，以该边界点与血管中心点的距离为半径r，得到动静脉类别信息修正后的各血管段的血管直径2r。In this embodiment of the present application, on the basis of the vessel centerline extracted in step S26, a boundary detection method is used to compute the vessel diameter of each corrected vessel segment within the region of interest. Specifically, taking a vessel center point as the origin, a rectangular area of 40*40 pixels is traversed to find the boundary point closest to the centerline of the corrected vessel segment; taking the distance between that boundary point and the vessel center point as the radius r, the vessel diameter 2r of the corrected vessel segment is obtained.
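The boundary search of S27 can be sketched as follows. Two simplifications are assumed: the nearest boundary point is approximated by the nearest background pixel of the vessel mask, and `window=20` stands in for the 40*40-pixel search area mentioned above.

```python
def vessel_diameter(mask, cy, cx, window=20):
    """Diameter 2r at centerline point (cy, cx): twice the distance to the
    nearest background pixel inside the search window around the point."""
    best = None
    h, w = len(mask), len(mask[0])
    for y in range(max(0, cy - window), min(h, cy + window + 1)):
        for x in range(max(0, cx - window), min(w, cx + window + 1)):
            if mask[y][x] == 0:
                d = ((y - cy) ** 2 + (x - cx) ** 2) ** 0.5
                if best is None or d < best:
                    best = d
    return 2 * best if best is not None else None
```

On a horizontal vessel of thickness 3 the nearest background pixel from the middle row lies 2 pixels away, so the measured diameter is 4 pixels, which matches the 2r rule by hand.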
根据感兴趣区域内各血管段的血管直径，采用医学上的Parr-Hubbard-Knudtson公式分别计算CRAE(central retinal artery equivalent，视网膜中央动脉血管直径等效值)和CRVE(central retinal vein equivalent，视网膜中央静脉血管直径等效值)，然后采用以下公式计算感兴趣区域内动脉血管和静脉血管的直径比值：Based on the vessel diameters of the vessel segments in the region of interest, the medical Parr-Hubbard-Knudtson formulas are used to compute CRAE (central retinal artery equivalent) and CRVE (central retinal vein equivalent), and the diameter ratio of arteries to veins within the region of interest is then computed with the following formula:
AVR = CRAE / CRVE

其中，AVR(Arteriole-to-Venule Ratio)表示感兴趣区域内动脉血管和静脉血管的直径比值，

CRAE = 0.88 · √(A_i² + A_j²)

A_i和A_j分别表示获取到的感兴趣区域最大动脉血管直径和最小动脉血管直径，0.88为固定系数；

CRVE = 0.95 · √(V_i² + V_j²)

V_i和V_j分别表示获取到的感兴趣区域最大静脉血管直径和最小静脉血管直径，0.95为固定系数，基于计算出的AVR值便可进行疾病预测和评估，具有一定的临床指导意义。Here, AVR (arteriole-to-venule ratio) denotes the diameter ratio of arteries to veins within the region of interest; A_i and A_j denote the largest and smallest arterial vessel diameters obtained in the region of interest, with 0.88 a fixed coefficient; and V_i and V_j denote the largest and smallest venous vessel diameters obtained in the region of interest, with 0.95 a fixed coefficient. Disease prediction and assessment can be carried out on the basis of the computed AVR value, which has a degree of clinical guiding significance.
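The CRAE/CRVE/AVR computation reduces to a few lines, using only the largest and smallest artery and vein diameters found in the region of interest, as stated above.

```python
def crae(largest_artery, smallest_artery):
    """Central retinal artery equivalent (Parr-Hubbard-Knudtson, two-vessel form)."""
    return 0.88 * (largest_artery ** 2 + smallest_artery ** 2) ** 0.5

def crve(largest_vein, smallest_vein):
    """Central retinal vein equivalent (fixed coefficient 0.95)."""
    return 0.95 * (largest_vein ** 2 + smallest_vein ** 2) ** 0.5

def avr(artery_diameters, vein_diameters):
    """AVR = CRAE / CRVE from the per-segment diameters measured in the ROI."""
    a = crae(max(artery_diameters), min(artery_diameters))
    v = crve(max(vein_diameters), min(vein_diameters))
    return a / v
```

Note that when the artery and vein diameter sets are identical, the square roots cancel and AVR collapses to 0.88/0.95, a useful sanity check on an implementation.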
可以看出，本申请具体实施例通过对原始眼底图像进行特征提取，得到多个尺度的目标特征图，基于得到的目标特征图进行视盘分割，获得视盘分割结果，采用预训练的级联分割网络模型对原始眼底图像进行分割，获得动静脉血管识别结果，基于视盘分割结果进行感兴趣区域定位，根据动静脉血管识别结果提取血管中心线，采用邻域连通性判定方法检测血管中心线中的关键点，去除该关键点得到多个相互独立的血管段，并对各血管段上的动静脉类别信息进行修正，基于提取出的血管中心线，采用边界探测的方法获取类别信息修正后的各血管段的血管直径，根据获取的血管直径计算感兴趣区域内动脉血管和静脉血管的直径比值，从而提高了眼底视网膜动静脉血管识别精度，进而提高量化精度。It can be seen that, in this embodiment of the present application, feature extraction is performed on the original fundus image to obtain target feature maps of multiple scales; optic disc segmentation is performed based on these target feature maps to obtain an optic disc segmentation result; the pre-trained cascaded segmentation network model segments the original fundus image to obtain an arteriovenous vessel recognition result; the region of interest is located based on the optic disc segmentation result; the vessel centerline is extracted according to the arteriovenous vessel recognition result; key points in the centerline are detected with a neighborhood connectivity determination method and removed to obtain a plurality of mutually independent vessel segments, whose arteriovenous category information is then corrected; and, on the basis of the extracted centerlines, a boundary detection method obtains the vessel diameter of each corrected segment, from which the diameter ratio of arteries to veins within the region of interest is computed. The recognition accuracy of fundus retinal arteriovenous vessels is thereby improved, which in turn improves the quantification accuracy.
本申请还提供一种眼底视网膜血管识别及量化装置,所述眼底视网膜血管识别及量化装置可以是运行于终端中的一个计算机程序(包括程序代码)。该眼底视网膜血管识别及量化装置可以执行图2所示的方法。请参见图12,该装置包括:The present application also provides a device for identifying and quantifying fundus retinal blood vessels. The device for identifying and quantifying fundus retinal blood vessels may be a computer program (including program code) running in a terminal. The device for recognizing and quantifying retinal blood vessels in the fundus can execute the method shown in FIG. 2. Please refer to Figure 12, the device includes:
特征提取模块1201,用于将原始眼底图像输入预训练的U型卷积神经网络模型进行处理,得到多个尺度的目标特征图;The feature extraction module 1201 is used to input the original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
视盘分割模块1202，用于基于所述目标特征图进行视盘分割，获得视盘分割结果；The optic disc segmentation module 1202 is configured to perform optic disc segmentation based on the target feature map to obtain an optic disc segmentation result;
血管识别模块1203,用于采用预训练的级联分割网络模型对所述原始眼底图像进行分割,获得动静脉血管识别结果;The blood vessel recognition module 1203 is configured to use a pre-trained cascade segmentation network model to segment the original fundus image to obtain an arteriovenous blood vessel recognition result;
区域定位模块1204,用于基于视盘分割结果进行感兴趣区域定位,获得感兴趣区域定位结果;The region positioning module 1204 is used for locating the region of interest based on the result of the segmentation of the optic disc, and obtaining the region of interest locating result;
中心线提取模块1205,用于根据动静脉血管识别结果提取血管中心线,采用邻域连通性判定方法检测血管中心线中的关键点,去除所述关键点以得到多个相互独立的血管段, 并对各血管段上的动静脉类别信息进行修正,得到动静脉类别信息修正后的各血管段;The centerline extraction module 1205 is used to extract the centerline of the blood vessel according to the result of the arteriovenous blood vessel identification, use the neighborhood connectivity determination method to detect the key points in the centerline of the blood vessel, and remove the key points to obtain multiple independent blood vessel segments, And revise the arteriovenous type information on each blood vessel segment, and obtain each blood vessel segment after the arteriovenous type information is revised;
直径比计算模块1206，用于基于动静脉类别信息修正后的各血管段的血管中心线，采用边界探测的方法获取动静脉类别信息修正后的各血管段的血管直径，根据所述血管直径计算感兴趣区域内动脉血管和静脉血管的直径比值。The diameter ratio calculation module 1206 is configured to obtain, based on the vessel centerlines of the vessel segments with corrected arteriovenous category information, the vessel diameter of each such segment by a boundary detection method, and to compute the diameter ratio of arteries to veins within the region of interest from these vessel diameters.
可选的,所述特征提取模块1201在所述将原始眼底图像输入预训练的U型卷积神经网络模型进行处理,得到多个尺度的目标特征图方面,具体用于:Optionally, the feature extraction module 1201 is specifically used to input the original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales:
将所述原始眼底图像输入所述U型卷积神经网络模型的编码器部分进行关键特征提取,得到一高维特征图;Input the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map;
将所述高维特征图输入所述U型卷积神经网络模型的解码器部分进行上采样操作,输出多个尺度的目标特征图。The high-dimensional feature map is input into the decoder part of the U-shaped convolutional neural network model to perform an up-sampling operation, and target feature maps of multiple scales are output.
可选的,所述特征提取模块1201在所述将所述原始眼底图像输入所述U型卷积神经网络模型的编码器部分进行关键特征提取,得到一高维特征图方面,具体用于:Optionally, the feature extraction module 1201 performs key feature extraction in the encoder part of the U-shaped convolutional neural network model that inputs the original fundus image to obtain a high-dimensional feature map, which is specifically used for:
对所述原始眼底图像进行卷积处理以提取关键特征,得到与所述原始眼底图像尺寸相同的特征图;Performing convolution processing on the original fundus image to extract key features to obtain a feature map with the same size as the original fundus image;
对经过卷积处理得到的特征图进行最大池化操作,逐层缩小特征图尺寸,经过若干卷积层和池化层的交替处理得到所述高维特征图。A maximum pooling operation is performed on the feature map obtained through convolution processing, the size of the feature map is reduced layer by layer, and the high-dimensional feature map is obtained through alternate processing of several convolutional layers and pooling layers.
可选的，所述特征提取模块1201在所述将该高维特征图输入所述U型卷积神经网络模型的解码器部分进行上采样操作，输出多个尺度的目标特征图方面，具体用于：Optionally, in inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling and outputting target feature maps of multiple scales, the feature extraction module 1201 is specifically configured to:
对所述高维特征图进行上采样操作,逐层放大所述高维特征图尺寸;Perform an up-sampling operation on the high-dimensional feature map, and enlarge the size of the high-dimensional feature map layer by layer;
通过跳跃连接层将编码阶段每个网络层提取的低维特征与解码阶段对称提取的高维特征进行合并，得到每个网络层的初始特征图，所述每个网络层的初始特征图尺度不同；merge, through a skip connection layer, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain the initial feature map of each network layer, the initial feature maps of the network layers having different scales;
通过每个网络层的输出分支对所述每个网络层的初始特征图进行输出得到多个尺度的目标特征图,所述每个网络层的输出分支中加入有注意力机制。The initial feature map of each network layer is output through the output branch of each network layer to obtain target feature maps of multiple scales, and an attention mechanism is added to the output branch of each network layer.
可选的，所述视盘分割模块1202在所述基于所述目标特征图进行视盘分割，获得视盘分割结果方面，具体用于：Optionally, in performing optic disc segmentation based on the target feature map to obtain an optic disc segmentation result, the optic disc segmentation module 1202 is specifically configured to:
将所述目标特征图进行融合以得到待分割图像;Fusing the target feature maps to obtain an image to be segmented;
对所述待分割图像进行候选框回归处理,以从所述待分割图像中定位视盘位置,并输出视盘的边界框信息;Performing candidate frame regression processing on the image to be segmented, so as to locate the position of the optic disc from the image to be segmented, and output bounding box information of the optic disc;
根据视盘的边界框信息裁剪出标定的视盘区域图像块输入预训练的U型分割网络中，经过特征提取和上采样操作输出视盘分割结果。crop out the delineated optic disc region image block according to the bounding-box information of the optic disc, input it into a pre-trained U-shaped segmentation network, and output the optic disc segmentation result after feature extraction and up-sampling operations.
可选的,所述血管识别模块1203在所述采用预训练的级联分割网络模型对所述原始眼底图像进行分割,获得动静脉血管识别结果方面,具体用于:Optionally, the blood vessel identification module 1203 is specifically used for segmenting the original fundus image by using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result:
提取所述原始眼底图像的绿色通道图像,对所述绿色通道图像做直方图均衡化处理,得到对比度增强的绿色通道图像;Extracting the green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a green channel image with enhanced contrast;
将所述对比度增强的绿色通道图像切成多个眼底图像块;Cutting the contrast-enhanced green channel image into multiple fundus image blocks;
将所述多个眼底图像块输入预设级联分割网络模型进行分割,获取动静脉血管识别结果。The multiple fundus image blocks are input into a preset cascade segmentation network model for segmentation, and an arteriovenous blood vessel recognition result is obtained.
可选的,所述直径比计算模块1206在所述采用边界探测的方法获取动静脉类别信息修正后的各血管段的血管直径方面,具体用于:Optionally, the diameter ratio calculation module 1206 is specifically used to obtain the diameter of each blood vessel segment after the correction of the arteriovenous category information using the method of boundary detection:
以血管中心点为圆心，在像素范围40*40大小的矩形区域内进行遍历，寻找距离动静脉类别信息修正后的各血管段中心线距离最近的边界点，以所述距离最近的边界点与血管中心点的距离为半径r，得到动静脉类别信息修正后的各血管段的血管直径2r。take a vessel center point as the origin and traverse a rectangular area of 40*40 pixels to find the boundary point closest to the centerline of the vessel segment with corrected arteriovenous category information; taking the distance between that closest boundary point and the vessel center point as the radius r, the vessel diameter 2r of the corrected vessel segment is obtained.
可选的,直径比计算模块1206采用以下公式计算感兴趣区域内动脉血管和静脉血管的直径比值:Optionally, the diameter ratio calculation module 1206 uses the following formula to calculate the diameter ratio of arterial blood vessels and venous blood vessels in the region of interest:
AVR = CRAE / CRVE

其中，AVR表示感兴趣区域内动脉血管和静脉血管的直径比值，CRAE表示视网膜中央动脉血管直径等效值，

CRAE = 0.88 · √(A_i² + A_j²)

A_i和A_j分别表示获取到的感兴趣区域最大动脉血管直径和最小动脉血管直径，0.88为固定系数；CRVE表示视网膜中央静脉血管直径等效值，

CRVE = 0.95 · √(V_i² + V_j²)

V_i和V_j分别表示获取到的感兴趣区域最大静脉血管直径和最小静脉血管直径，0.95为固定系数。Here, AVR denotes the diameter ratio of arteries to veins within the region of interest, CRAE denotes the central retinal artery equivalent, A_i and A_j denote the largest and smallest arterial vessel diameters obtained in the region of interest with 0.88 a fixed coefficient, CRVE denotes the central retinal vein equivalent, and V_i and V_j denote the largest and smallest venous vessel diameters obtained in the region of interest with 0.95 a fixed coefficient.
需要说明的是，图2所示的眼底视网膜血管识别及量化方法中的各个步骤均可以是由本申请实施例提供的眼底视网膜血管识别及量化装置中的各个单元模块来执行，且能达到相同或相似的有益效果，本申请实施例提供的眼底视网膜血管识别及量化装置能够应用在眼底视网膜血管识别和量化的场景中，具体的，上述眼底视网膜血管识别及量化装置可应用于服务器、计算机或移动终端等设备中。It should be noted that each step of the fundus retinal vessel identification and quantification method shown in FIG. 2 may be executed by the corresponding unit modules of the fundus retinal vessel identification and quantification apparatus provided in the embodiments of the present application, achieving the same or similar beneficial effects. The apparatus can be applied in scenarios of fundus retinal vessel identification and quantification; specifically, it can be applied in devices such as servers, computers, or mobile terminals.
根据本申请的一个实施例，图12所示的眼底视网膜血管识别及量化装置的各个模块可以分别或全部合并为一个或若干个另外的单元来构成，或者其中的某个(些)模块还可以再拆分为功能上更小的多个单元来构成，这可以实现同样的操作，而不影响本申请的实施例的技术效果的实现。上述单元是基于逻辑功能划分的，在实际应用中，一个单元的功能也可以由多个单元来实现，或者多个单元的功能由一个单元实现。在本申请的其它实施例中，眼底视网膜血管识别及量化装置也可以包括其它单元，在实际应用中，这些功能也可以由其它单元协助实现，并且可以由多个单元协作实现。According to an embodiment of the present application, the modules of the fundus retinal vessel identification and quantification apparatus shown in FIG. 12 may be combined, separately or entirely, into one or several additional units, or one or more of the modules may be further split into multiple functionally smaller units; either arrangement achieves the same operations without affecting the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the fundus retinal vessel identification and quantification apparatus may also include other units; in practical applications, these functions may be implemented with the assistance of other units, and may be implemented by multiple units in cooperation.
根据本申请的另一个实施例，可以通过在包括中央处理单元(CPU)、随机存取存储介质(RAM)、只读存储介质(ROM)等处理元件和存储元件的例如计算机的通用计算设备上运行能够执行如图2中所示的相应方法所涉及的各步骤的计算机程序(包括程序代码)，来构造如图12中所示的眼底视网膜血管识别及量化装置设备，以及来实现本申请实施例的眼底视网膜血管识别及量化方法。所述计算机程序可以记载于例如计算机可读记录介质上，并通过计算机可读记录介质装载于上述计算设备中，并在其中运行。According to another embodiment of the present application, a computer program (including program code) capable of executing the steps of the corresponding method shown in FIG. 2 may be run on a general-purpose computing device, such as a computer, that includes processing elements and storage elements such as a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM), so as to construct the fundus retinal vessel identification and quantification apparatus shown in FIG. 12 and to implement the fundus retinal vessel identification and quantification method of the embodiments of the present application. The computer program may be recorded on, for example, a computer-readable recording medium, loaded into the above computing device through the computer-readable recording medium, and run therein.
Based on the foregoing description, please refer to FIG. 13, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. As shown in FIG. 13, the electronic device includes: a memory 1301 for storing one or more computer programs; a processor 1302 for calling the computer programs stored in the memory 1301 to execute the steps in the above embodiments of the fundus retinal blood vessel identification and quantification method; and one or more communication interfaces 1303 for input and output. It can be understood that the parts of the electronic device communicate with one another via a bus connection. The processor 1302 is specifically configured to call a computer program to execute the following steps:
inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
segmenting the original fundus image by using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result;
locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest locating result;
extracting vessel centerlines according to the arteriovenous vessel recognition result, detecting key points in the vessel centerlines by a neighborhood-connectivity determination method, removing the key points to obtain multiple mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information; and
based on the vessel centerline of each vessel segment with corrected arteriovenous category information, obtaining the vessel diameter of each such vessel segment by a boundary-detection method, and calculating a diameter ratio of arterial vessels to venous vessels in the region of interest according to the vessel diameters.
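The key-point removal step described above can be sketched as follows, assuming the centerline is already available as a binary NumPy skeleton. Pixels with three or more neighbors in the 8-neighborhood are treated as bifurcation or crossing key points; removing them splits the skeleton into mutually independent segments. All function and variable names are illustrative, not from the patent:

```python
import numpy as np

def split_centerline(skeleton):
    """Remove bifurcation/crossing key points (>= 3 neighbors in the
    8-neighborhood) and label the remaining independent vessel segments."""
    sk = skeleton.astype(bool)
    padded = np.pad(sk, 1)
    # count the 8-neighbors of every centerline pixel
    neighbors = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))[1:-1, 1:-1]
    keypoints = sk & (neighbors >= 3)
    remaining = sk & ~keypoints
    # label connected components (independent segments) by flood fill
    labels = np.zeros(sk.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(remaining)):
        if labels[y, x]:
            continue
        count += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if labels[cy, cx]:
                continue
            labels[cy, cx] = count
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < sk.shape[0] and 0 <= nx < sk.shape[1]
                            and remaining[ny, nx] and not labels[ny, nx]):
                        stack.append((ny, nx))
    return keypoints, labels, count
```

On a cross-shaped skeleton this flags the crossing region and yields four independent branches; a production version would typically also merge very short spurs before correcting the arteriovenous category of each segment.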
Optionally, when the processor 1302 inputs the original fundus image into the pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales, the processing includes:
inputting the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map; and
inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling, and outputting target feature maps of multiple scales.
Optionally, when the processor 1302 inputs the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map, the processing includes:
performing convolution on the original fundus image to extract key features, obtaining a feature map of the same size as the original fundus image; and
performing max pooling on the feature map obtained by convolution to reduce the feature map size layer by layer, obtaining the high-dimensional feature map after alternating processing by several convolutional layers and pooling layers.
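A minimal sketch of the size mechanics of the encoder, assuming square 2×2 max pooling; the learned convolutional layers are omitted, and the four-stage depth is an assumption rather than a figure from the patent:

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling: halves the spatial size of an (H, W, C) feature map."""
    h, w, c = fmap.shape
    return (fmap[:h - h % 2, :w - w % 2, :]
            .reshape(h // 2, 2, w // 2, 2, c)
            .max(axis=(1, 3)))

# each encoder stage: convolution (omitted here) followed by pooling
fmap = np.random.rand(512, 512, 3)   # stand-in for the original fundus image
sizes = [fmap.shape[:2]]
for _ in range(4):                   # four pooling stages, as in a typical U-Net encoder
    fmap = max_pool_2x2(fmap)
    sizes.append(fmap.shape[:2])
print(sizes)   # 512 -> 256 -> 128 -> 64 -> 32 along each spatial axis
```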
Optionally, when the processor 1302 inputs the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling and outputs target feature maps of multiple scales, the processing includes:
performing up-sampling on the high-dimensional feature map to enlarge its size layer by layer;
merging, through skip connection layers, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain an initial feature map of each network layer, the initial feature maps of the network layers being of different scales; and
outputting the initial feature map of each network layer through an output branch of that network layer to obtain target feature maps of multiple scales, an attention mechanism being added to the output branch of each network layer.
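The decoder-side skip connection and the attention-augmented output branch might be sketched as below; the sigmoid gate stands in for whatever learned attention mechanism the model actually uses, and all shapes and names are illustrative:

```python
import numpy as np

def upsample_2x(fmap):
    """Nearest-neighbor up-sampling: doubles the spatial size of (H, W, C)."""
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def decoder_step(decoder_fmap, encoder_fmap):
    """One decoder stage: up-sample, then merge with the symmetric encoder
    feature map through the skip connection (channel concatenation)."""
    up = upsample_2x(decoder_fmap)
    return np.concatenate([encoder_fmap, up], axis=-1)

def attention_output_branch(initial_fmap, gate_weights):
    """Illustrative attention on an output branch: a sigmoid gate re-weights
    the initial feature map before it is output. gate_weights is a stand-in
    for a learned 1x1 convolution."""
    scores = initial_fmap @ gate_weights    # (H, W, C) @ (C, 1) -> (H, W, 1)
    gate = 1.0 / (1.0 + np.exp(-scores))    # sigmoid attention mask
    return initial_fmap * gate              # element-wise re-weighting

enc = np.random.rand(64, 64, 16)    # low-dimensional encoder features
dec = np.random.rand(32, 32, 32)    # high-dimensional decoder features
merged = decoder_step(dec, enc)     # initial feature map of this layer
out = attention_output_branch(merged, np.random.rand(48, 1))
print(merged.shape, out.shape)      # (64, 64, 48) (64, 64, 48)
```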
Optionally, when the processor 1302 performs optic disc segmentation based on the target feature maps to obtain the optic disc segmentation result, the processing includes:
fusing the target feature maps to obtain an image to be segmented;
performing candidate-box regression on the image to be segmented to locate the optic disc position in the image and output bounding-box information of the optic disc; and
cropping out the calibrated optic disc region image block according to the bounding-box information of the optic disc, inputting it into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result after feature extraction and up-sampling.
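Assuming the bounding-box information is given as (x, y, w, h) in pixels, the cropping of the optic disc region before it enters the U-shaped segmentation network might look like this (the box format and names are assumptions, not defined in the patent):

```python
import numpy as np

def crop_optic_disc(image, bbox):
    """Crop the optic disc region given bounding-box information
    (x, y, w, h), clamped to the image borders."""
    x, y, w, h = bbox
    h_img, w_img = image.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return image[y0:y1, x0:x1]

fundus = np.zeros((600, 800, 3), dtype=np.uint8)
patch = crop_optic_disc(fundus, (700, 500, 200, 200))  # box partly outside
print(patch.shape)  # (100, 100, 3)
```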
Optionally, when the processor 1302 segments the original fundus image by using the pre-trained cascaded segmentation network model to obtain the arteriovenous vessel recognition result, the processing includes:
extracting a green-channel image of the original fundus image and performing histogram equalization on the green-channel image to obtain a contrast-enhanced green-channel image;
cutting the contrast-enhanced green-channel image into multiple fundus image blocks; and
inputting the multiple fundus image blocks into a preset cascaded segmentation network model for segmentation to obtain the arteriovenous vessel recognition result.
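A sketch of this preprocessing step in plain NumPy; the patent does not fix the equalization variant (CLAHE is a common alternative for fundus images), and the 128-pixel patch size is an illustrative assumption:

```python
import numpy as np

def equalize_green_channel(fundus_rgb):
    """Extract the green channel and apply global histogram equalization."""
    green = fundus_rgb[:, :, 1]
    hist = np.bincount(green.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # map each intensity through the normalized cumulative distribution
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[green]

def to_patches(image, patch=128):
    """Cut the enhanced image into non-overlapping patch x patch blocks."""
    h, w = image.shape
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

rgb = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
enhanced = equalize_green_channel(rgb)
patches = to_patches(enhanced)
print(len(patches))  # 16 blocks of 128 x 128
```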
Optionally, when the processor 1302 obtains, by the boundary-detection method, the vessel diameter of each vessel segment with corrected arteriovenous category information, the processing includes:
taking a vessel center point as the center, traversing a rectangular region of 40*40 pixels to find the boundary point closest to the centerline of each vessel segment with corrected arteriovenous category information, and taking the distance between that closest boundary point and the vessel center point as a radius r, so that the vessel diameter of each such vessel segment is obtained as 2r.
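The boundary-detection step can be sketched as follows on a binary vessel mask; the 40×40 window matches the text, while the mask representation and names are assumptions:

```python
import numpy as np

def vessel_diameter(vessel_mask, center):
    """Estimate the vessel diameter at a centerline point by boundary
    detection: traverse a 40x40 pixel window around the center, find the
    nearest background (boundary) pixel, and take twice that distance r
    as the diameter 2r."""
    cy, cx = center
    h, w = vessel_mask.shape
    best = None
    for y in range(max(0, cy - 20), min(h, cy + 20)):
        for x in range(max(0, cx - 20), min(w, cx + 20)):
            if not vessel_mask[y, x]:              # boundary/background pixel
                d = np.hypot(y - cy, x - cx)
                best = d if best is None else min(best, d)
    return None if best is None else 2 * best      # diameter = 2r

# a synthetic horizontal vessel, 7 pixels thick
mask = np.zeros((50, 50), dtype=bool)
mask[22:29, :] = True
print(vessel_diameter(mask, (25, 25)))  # 8.0 (nearest background at distance 4)
```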
Optionally, the processor 1302 calculates the diameter ratio of arterial vessels to venous vessels in the region of interest by the following formulas:
AVR = CRAE / CRVE
where AVR denotes the diameter ratio of arterial vessels to venous vessels in the region of interest, and CRAE denotes the central retinal artery equivalent (the equivalent central retinal artery vessel diameter):
CRAE = 0.88 × sqrt(A_i² + A_j²)
A_i and A_j respectively denote the maximum arterial vessel diameter and the minimum arterial vessel diameter obtained in the region of interest, and 0.88 is a fixed coefficient; CRVE denotes the central retinal vein equivalent (the equivalent central retinal vein vessel diameter):
CRVE = 0.95 × sqrt(V_i² + V_j²)
V_i and V_j respectively denote the maximum venous vessel diameter and the minimum venous vessel diameter obtained in the region of interest, and 0.95 is a fixed coefficient.
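Given these formulas, the AVR computation reduces to a few lines; the example diameters below are illustrative values only, not measurements from the patent:

```python
import math

def crae(a_max, a_min):
    """Central retinal artery equivalent (fixed coefficient 0.88)."""
    return 0.88 * math.sqrt(a_max ** 2 + a_min ** 2)

def crve(v_max, v_min):
    """Central retinal vein equivalent (fixed coefficient 0.95)."""
    return 0.95 * math.sqrt(v_max ** 2 + v_min ** 2)

def avr(a_max, a_min, v_max, v_min):
    """Arteriole-to-venule diameter ratio inside the region of interest."""
    return crae(a_max, a_min) / crve(v_max, v_min)

# example diameters in pixels (illustrative values only)
print(round(avr(12.0, 8.0, 15.0, 10.0), 3))  # 0.741
```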
Exemplarily, the above electronic device may be a computer, a notebook computer, a tablet computer, a palmtop computer, a server, or the like. The electronic device may include, but is not limited to, the memory 1301, the processor 1302, and the communication interface 1303. Those skilled in the art can understand that the schematic diagram is only an example of the electronic device and does not constitute a limitation on the electronic device, which may include more or fewer components than shown, a combination of certain components, or different components.
It should be noted that, since the processor 1302 of the electronic device implements the steps of the above fundus retinal blood vessel identification and quantification method when executing the computer program, all embodiments of that method are applicable to the electronic device and can achieve the same or similar beneficial effects.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above fundus retinal blood vessel identification and quantification method:
inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
segmenting the original fundus image by using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result;
locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest locating result;
extracting vessel centerlines according to the arteriovenous vessel recognition result, detecting key points in the vessel centerlines by a neighborhood-connectivity determination method, removing the key points to obtain multiple mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information; and
based on the vessel centerline of each vessel segment with corrected arteriovenous category information, obtaining the vessel diameter of each such vessel segment by a boundary-detection method, and calculating a diameter ratio of arterial vessels to venous vessels in the region of interest according to the vessel diameters.
Optionally, when executed by the processor, the computer program is further used to implement: inputting the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map; and inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling, and outputting target feature maps of multiple scales.
Optionally, when executed by the processor, the computer program is further used to implement: performing convolution on the original fundus image to extract key features, obtaining a feature map of the same size as the original fundus image; and performing max pooling on the feature map obtained by convolution to reduce the feature map size layer by layer, obtaining the high-dimensional feature map after alternating processing by several convolutional layers and pooling layers.
Optionally, when executed by the processor, the computer program is further used to implement: performing up-sampling on the high-dimensional feature map to enlarge its size layer by layer;
merging, through skip connection layers, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain an initial feature map of each network layer, the initial feature maps of the network layers being of different scales; and
outputting the initial feature map of each network layer through an output branch of that network layer to obtain target feature maps of multiple scales, an attention mechanism being added to the output branch of each network layer.
Optionally, when executed by the processor, the computer program is further used to implement: fusing the target feature maps to obtain an image to be segmented; performing candidate-box regression on the image to be segmented to locate the optic disc position in the image and output bounding-box information of the optic disc; and cropping out the calibrated optic disc region image block according to the bounding-box information of the optic disc, inputting it into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result after feature extraction and up-sampling.
Optionally, when executed by the processor, the computer program is further used to implement: extracting a green-channel image of the original fundus image and performing histogram equalization on the green-channel image to obtain a contrast-enhanced green-channel image; cutting the contrast-enhanced green-channel image into multiple fundus image blocks; and inputting the multiple fundus image blocks into a preset cascaded segmentation network model for segmentation to obtain the arteriovenous vessel recognition result.
Optionally, when executed by the processor, the computer program is further used to implement: taking a vessel center point as the center, traversing a rectangular region of 40*40 pixels to find the boundary point closest to the centerline of each vessel segment with corrected arteriovenous category information, and taking the distance between that closest boundary point and the vessel center point as a radius r, so that the vessel diameter of each such vessel segment is obtained as 2r.
Optionally, when executed by the processor, the computer program is further used to calculate the diameter ratio of arterial vessels to venous vessels in the region of interest by the following formulas:
AVR = CRAE / CRVE
where AVR denotes the diameter ratio of arterial vessels to venous vessels in the region of interest, and CRAE denotes the central retinal artery equivalent (the equivalent central retinal artery vessel diameter):
CRAE = 0.88 × sqrt(A_i² + A_j²)
A_i and A_j respectively denote the maximum arterial vessel diameter and the minimum arterial vessel diameter obtained in the region of interest, and 0.88 is a fixed coefficient; CRVE denotes the central retinal vein equivalent (the equivalent central retinal vein vessel diameter):
CRVE = 0.95 × sqrt(V_i² + V_j²)
V_i and V_j respectively denote the maximum venous vessel diameter and the minimum venous vessel diameter obtained in the region of interest, and 0.95 is a fixed coefficient.
Exemplarily, the computer program in the computer-readable storage medium includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may be non-volatile or volatile. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
It should be noted that, since the computer program of the computer-readable storage medium implements the steps of the above fundus retinal blood vessel identification and quantification method when executed by a processor, all examples of that method are applicable to the computer-readable storage medium and can achieve the same or similar beneficial effects.
What is disclosed above is only part of the embodiments of the present application and certainly cannot be used to limit the scope of rights of the present application. Those of ordinary skill in the art can understand all or part of the procedures for implementing the above embodiments, and equivalent changes made in accordance with the claims of the present application still fall within the scope covered by the present application.

Claims (20)

  1. A method for identifying and quantifying fundus retinal blood vessels, wherein the method comprises:
    inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
    performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
    segmenting the original fundus image by using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result;
    locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest locating result;
    extracting vessel centerlines according to the arteriovenous vessel recognition result, detecting key points in the vessel centerlines by a neighborhood-connectivity determination method, removing the key points to obtain multiple mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information; and
    based on the vessel centerline of each vessel segment with corrected arteriovenous category information, obtaining the vessel diameter of each such vessel segment by a boundary-detection method, and calculating a diameter ratio of arterial vessels to venous vessels in the region of interest according to the vessel diameters.
  2. The method according to claim 1, wherein the inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales comprises:
    inputting the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map; and
    inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling, and outputting target feature maps of multiple scales.
  3. The method according to claim 2, wherein the inputting the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map comprises:
    performing convolution on the original fundus image to extract key features, obtaining a feature map of the same size as the original fundus image; and
    performing max pooling on the feature map obtained by convolution to reduce the feature map size layer by layer, obtaining the high-dimensional feature map after alternating processing by several convolutional layers and pooling layers.
  4. The method according to claim 2, wherein the inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling and outputting target feature maps of multiple scales comprises:
    performing up-sampling on the high-dimensional feature map to enlarge its size layer by layer;
    merging, through skip connection layers, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain an initial feature map of each network layer, the initial feature maps of the network layers being of different scales; and
    outputting the initial feature map of each network layer through an output branch of that network layer to obtain target feature maps of multiple scales, an attention mechanism being added to the output branch of each network layer.
  5. The method according to any one of claims 1 to 4, wherein the performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result comprises:
    fusing the target feature maps to obtain an image to be segmented;
    performing candidate-box regression on the image to be segmented to locate the optic disc position in the image and output bounding-box information of the optic disc; and
    cropping out the calibrated optic disc region image block according to the bounding-box information of the optic disc, inputting it into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result after feature extraction and up-sampling.
  6. The method according to any one of claims 1 to 4, wherein the segmenting the original fundus image by using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result comprises:
    extracting a green-channel image of the original fundus image and performing histogram equalization on the green-channel image to obtain a contrast-enhanced green-channel image;
    cutting the contrast-enhanced green-channel image into multiple fundus image blocks; and
    inputting the multiple fundus image blocks into a preset cascaded segmentation network model for segmentation to obtain the arteriovenous vessel recognition result.
  7. The method according to any one of claims 1 to 4, wherein the obtaining, by a boundary-detection method, the vessel diameter of each vessel segment with corrected arteriovenous category information comprises:
    taking a vessel center point as the center, traversing a rectangular region of 40*40 pixels to find the boundary point closest to the centerline of each vessel segment with corrected arteriovenous category information, and taking the distance between that closest boundary point and the vessel center point as a radius r, so that the vessel diameter of each such vessel segment is obtained as 2r.
  8. The method according to claim 1, wherein the diameter ratio of arterial vessels to venous vessels in the region of interest is calculated by the following formulas:
    AVR = CRAE / CRVE
    wherein AVR denotes the diameter ratio of arterial vessels to venous vessels in the region of interest, and CRAE denotes the central retinal artery equivalent (the equivalent central retinal artery vessel diameter):
    CRAE = 0.88 × sqrt(A_i² + A_j²)
    A_i and A_j respectively denote the maximum arterial vessel diameter and the minimum arterial vessel diameter obtained in the region of interest, and 0.88 is a fixed coefficient; CRVE denotes the central retinal vein equivalent (the equivalent central retinal vein vessel diameter):
    CRVE = 0.95 × sqrt(V_i² + V_j²)
    V_i and V_j respectively denote the maximum venous vessel diameter and the minimum venous vessel diameter obtained in the region of interest, and 0.95 is a fixed coefficient.
  9. An apparatus for identifying and quantifying fundus retinal blood vessels, wherein the apparatus comprises:
    a feature extraction module, configured to input an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
    an optic disc segmentation module, configured to perform optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
    a blood vessel recognition module, configured to segment the original fundus image by using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result;
    a region locating module, configured to locate a region of interest based on the optic disc segmentation result to obtain a region-of-interest locating result;
    a centerline extraction module, configured to extract vessel centerlines according to the arteriovenous vessel recognition result, detect key points in the vessel centerlines by a neighborhood-connectivity determination method, remove the key points to obtain multiple mutually independent vessel segments, and correct the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information; and
    a diameter ratio calculation module, configured to, based on the vessel centerline of each vessel segment with corrected arteriovenous category information, obtain the vessel diameter of each such vessel segment by a boundary-detection method, and calculate a diameter ratio of arterial vessels to venous vessels in the region of interest according to the vessel diameters.
10. An electronic device, wherein the electronic device comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements:
    inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales;
    performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
    segmenting the original fundus image using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result;
    locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest locating result;
    extracting vessel centerlines according to the arteriovenous vessel recognition result, detecting key points in the vessel centerlines using a neighborhood-connectivity determination method, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information; and
    obtaining, by boundary detection based on the vessel centerline of each vessel segment with corrected arteriovenous category information, the vessel diameter of each such vessel segment, and calculating the diameter ratio of arteries to veins within the region of interest from the vessel diameters.
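The neighborhood-connectivity key-point test in the centerline step above can be sketched as follows. This is only an illustrative reading, since the claims do not fix the exact rule: it assumes a one-pixel-wide centerline skeleton in which a pixel with three or more 8-connected skeleton neighbors is treated as a branch or crossing key point, whose removal splits the centerline into independent vessel segments. The function name and the threshold of three are assumptions, not taken from the patent.

```python
# Illustrative sketch of a neighborhood-connectivity key-point test on a
# one-pixel-wide vessel-centerline skeleton (a nested list of booleans).
# A skeleton pixel with >= 3 skeleton pixels among its 8 neighbors is
# flagged as a branch/crossing key point.

def find_key_points(skeleton):
    """Return (row, col) pixels with >= 3 skeleton pixels in their 8-neighborhood."""
    rows, cols = len(skeleton), len(skeleton[0])
    key_points = []
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r][c]:
                continue
            neighbors = sum(
                skeleton[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            if neighbors >= 3:
                key_points.append((r, c))
    return key_points
```

Deleting the flagged pixels from the skeleton then leaves disjoint runs of centerline pixels, each of which can be relabeled as one vessel segment before its arteriovenous category is corrected by majority vote or a similar rule.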
11. The electronic device according to claim 10, wherein said inputting the original fundus image into the pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales comprises:
    inputting the original fundus image into an encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map; and
    inputting the high-dimensional feature map into a decoder part of the U-shaped convolutional neural network model for up-sampling, and outputting target feature maps at multiple scales.
12. The electronic device according to claim 11, wherein said inputting the original fundus image into the encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map comprises:
    performing convolution on the original fundus image to extract key features, obtaining a feature map of the same size as the original fundus image; and
    performing a max pooling operation on the feature map obtained by convolution to reduce the feature map size layer by layer, the high-dimensional feature map being obtained through alternating processing by several convolutional and pooling layers.
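The layer-by-layer size reduction in this claim is ordinary 2x2 max pooling. As a minimal, pure-Python sketch (not the patented model itself, and with illustrative names), pooling halves each spatial dimension by keeping the maximum of every 2x2 block:

```python
# 2x2 max pooling on a nested-list "feature map": each output value is the
# maximum of a non-overlapping 2x2 block, so each spatial dimension halves.

def max_pool_2x2(feature_map):
    """Halve each spatial dimension by taking the max of every 2x2 block."""
    rows, cols = len(feature_map), len(feature_map[0])
    return [
        [
            max(feature_map[r][c], feature_map[r][c + 1],
                feature_map[r + 1][c], feature_map[r + 1][c + 1])
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]
```

Applying this after each convolutional stage shrinks the map layer by layer until the compact high-dimensional feature map at the encoder bottleneck is reached.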
13. The electronic device according to claim 11, wherein said inputting the high-dimensional feature map into the decoder part of the U-shaped convolutional neural network model for up-sampling and outputting target feature maps at multiple scales comprises:
    up-sampling the high-dimensional feature map to enlarge its size layer by layer;
    merging, through skip connection layers, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain an initial feature map for each network layer, the initial feature maps of the network layers differing in scale; and
    outputting the initial feature map of each network layer through an output branch of that network layer to obtain target feature maps at multiple scales, an attention mechanism being added to the output branch of each network layer.
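As a minimal sketch of the merge-and-gate idea in this claim (the patent specifies neither the merge rule nor the attention formulation, so everything here is an assumption): a skip connection can merge encoder and decoder features by concatenation, and an output branch can reweight features with a simple sigmoid gate.

```python
import math

def skip_merge(low_features, high_features):
    """Merge encoder (low-dimensional) and decoder (high-dimensional) features
    by channel concatenation, as a U-Net skip connection commonly does."""
    return low_features + high_features

def attention_gate(features, gate_scores):
    """Reweight each feature by the sigmoid of its gate score: a simple
    multiplicative attention. In a real network the scores come from a
    learned layer; here they are passed in directly for illustration."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [f * sigmoid(g) for f, g in zip(features, gate_scores)]
```

A large positive gate score keeps a feature almost unchanged, while a strongly negative one suppresses it, which is how such a branch can emphasize vessel-relevant responses at each scale.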
14. The electronic device according to any one of claims 10 to 13, wherein said performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result comprises:
    fusing the target feature maps to obtain an image to be segmented;
    performing candidate-box regression on the image to be segmented to locate the optic disc position therein and output bounding-box information of the optic disc; and
    cropping out the calibrated optic disc region image block according to the bounding-box information of the optic disc, inputting it into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result after feature extraction and up-sampling operations.
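The cropping step above is a plain array slice once the regressed bounding box is known. A hypothetical helper (the box layout `(top, left, height, width)` is an assumption for illustration; the patent does not fix a format):

```python
# Cut the optic disc image block out of a fundus image (nested list of rows)
# using the bounding box regressed in the previous step.

def crop_box(image, box):
    """image: nested list of pixel rows; box: (top, left, height, width)."""
    top, left, height, width = box
    return [row[left:left + width] for row in image[top:top + height]]
```

The resulting block, rather than the full image, is then fed to the U-shaped segmentation network, which keeps the disc large relative to its input and simplifies the fine segmentation.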
15. The electronic device according to any one of claims 10 to 13, wherein said segmenting the original fundus image using the pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result comprises:
    extracting a green channel image of the original fundus image and performing histogram equalization on the green channel image to obtain a contrast-enhanced green channel image;
    cutting the contrast-enhanced green channel image into a plurality of fundus image blocks; and
    inputting the plurality of fundus image blocks into a preset cascaded segmentation network model for segmentation to obtain the arteriovenous vessel recognition result.
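The preprocessing in this claim can be sketched as follows, with illustrative names and assuming plain global histogram equalization (the claim does not say which equalization variant is used; in practice CLAHE is also common for fundus images):

```python
# Preprocessing sketch: take the green channel of an RGB fundus image and
# apply classic global histogram equalization before tiling into blocks.

def green_channel(rgb_image):
    """rgb_image: nested list of (r, g, b) tuples -> nested list of g values."""
    return [[px[1] for px in row] for row in rgb_image]

def equalize(gray, levels=256):
    """Classic global histogram equalization for integer intensities."""
    flat = [v for row in gray for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    lut, total = [0] * levels, 0
    for i in range(levels):
        total += hist[i]
        lut[i] = round((levels - 1) * total / n)  # scaled cumulative histogram
    return [[lut[v] for v in row] for row in gray]
```

The green channel is favored here because, in color fundus photography, it typically shows the strongest contrast between vessels and background.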
16. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements:
    inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales;
    performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
    segmenting the original fundus image using a pre-trained cascaded segmentation network model to obtain an arteriovenous vessel recognition result;
    locating a region of interest based on the optic disc segmentation result to obtain a region-of-interest locating result;
    extracting vessel centerlines according to the arteriovenous vessel recognition result, detecting key points in the vessel centerlines using a neighborhood-connectivity determination method, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information; and
    obtaining, by boundary detection based on the vessel centerline of each vessel segment with corrected arteriovenous category information, the vessel diameter of each such vessel segment, and calculating the diameter ratio of arteries to veins within the region of interest from the vessel diameters.
17. The computer-readable storage medium according to claim 16, wherein the computer program, when executed by the processor, further implements: inputting the original fundus image into an encoder part of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map; and inputting the high-dimensional feature map into a decoder part of the U-shaped convolutional neural network model for up-sampling, and outputting target feature maps at multiple scales.
18. The computer-readable storage medium according to claim 17, wherein the computer program, when executed by the processor, further implements: performing convolution on the original fundus image to extract key features, obtaining a feature map of the same size as the original fundus image; and performing a max pooling operation on the feature map obtained by convolution to reduce the feature map size layer by layer, the high-dimensional feature map being obtained through alternating processing by several convolutional and pooling layers.
19. The computer-readable storage medium according to claim 17, wherein the computer program, when executed by the processor, further implements: up-sampling the high-dimensional feature map to enlarge its size layer by layer;
    merging, through skip connection layers, the low-dimensional features extracted by each network layer in the encoding stage with the high-dimensional features extracted symmetrically in the decoding stage, to obtain an initial feature map for each network layer, the initial feature maps of the network layers differing in scale; and
    outputting the initial feature map of each network layer through an output branch of that network layer to obtain target feature maps at multiple scales, an attention mechanism being added to the output branch of each network layer.
20. The computer-readable storage medium according to any one of claims 16 to 19, wherein the computer program, when executed by the processor, further implements: fusing the target feature maps to obtain an image to be segmented; performing candidate-box regression on the image to be segmented to locate the optic disc position therein and output bounding-box information of the optic disc; and cropping out the calibrated optic disc region image block according to the bounding-box information of the optic disc, inputting it into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result after feature extraction and up-sampling operations.
PCT/CN2020/099538 2020-02-29 2020-06-30 Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium WO2021169128A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010134390.7A CN111340789A (en) 2020-02-29 2020-02-29 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN202010134390.7 2020-02-29

Publications (1)

Publication Number Publication Date
WO2021169128A1

Family

ID=71184092

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099538 WO2021169128A1 (en) 2020-02-29 2020-06-30 Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium

Country Status (2)

Country Link
CN (1) CN111340789A (en)
WO (1) WO2021169128A1 (en)


Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111815599B (en) * 2020-07-01 2023-12-15 上海联影智能医疗科技有限公司 Image processing method, device, equipment and storage medium
CN111932554B (en) * 2020-07-31 2024-03-22 青岛海信医疗设备股份有限公司 Lung vessel segmentation method, equipment and storage medium
CN113643353B (en) * 2020-09-04 2024-02-06 深圳硅基智能科技有限公司 Measurement method for enhancing resolution of vascular caliber of fundus image
CN111932535A (en) * 2020-09-24 2020-11-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN112529839B (en) * 2020-11-05 2023-05-02 西安交通大学 Method and system for extracting carotid vessel centerline in nuclear magnetic resonance image
CN112330684B (en) * 2020-11-23 2022-09-13 腾讯科技(深圳)有限公司 Object segmentation method and device, computer equipment and storage medium
CN112465772B (en) * 2020-11-25 2023-09-26 平安科技(深圳)有限公司 Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium
CN112446866B (en) * 2020-11-25 2023-05-26 上海联影医疗科技股份有限公司 Blood flow parameter calculation method, device, equipment and storage medium
CN112419338B (en) * 2020-12-08 2021-12-07 深圳大学 Head and neck endangered organ segmentation method based on anatomical prior knowledge
CN115969310A (en) * 2020-12-28 2023-04-18 深圳硅基智能科技有限公司 System and method for measuring pathological change characteristics of hypertensive retinopathy
CN112826442A (en) * 2020-12-31 2021-05-25 上海鹰瞳医疗科技有限公司 Psychological state identification method and equipment based on fundus images
CN112734784A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 High-precision fundus blood vessel boundary determining method, device, medium and equipment
CN112734828B (en) * 2021-01-28 2023-02-24 依未科技(北京)有限公司 Method, device, medium and equipment for determining center line of fundus blood vessel
CN112862787B (en) * 2021-02-10 2022-11-15 昆明同心医联科技有限公司 CTA image data processing method, device and storage medium
CN113012114B (en) * 2021-03-02 2021-12-03 推想医疗科技股份有限公司 Blood vessel identification method and device, storage medium and electronic equipment
CN113192074B (en) * 2021-04-07 2024-04-05 西安交通大学 Automatic arteriovenous segmentation method suitable for OCTA image
CN113269737B (en) * 2021-05-17 2024-03-19 北京鹰瞳科技发展股份有限公司 Fundus retina artery and vein vessel diameter calculation method and system
CN113344893A (en) * 2021-06-23 2021-09-03 依未科技(北京)有限公司 High-precision fundus arteriovenous identification method, device, medium and equipment
CN113425248B (en) * 2021-06-24 2024-03-08 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113538463A (en) * 2021-07-22 2021-10-22 强联智创(北京)科技有限公司 Aneurysm segmentation method, device and equipment
CN113689954A (en) * 2021-08-24 2021-11-23 平安科技(深圳)有限公司 Hypertension risk prediction method, device, equipment and medium
CN113792740B (en) * 2021-09-16 2023-12-26 平安创科科技(北京)有限公司 Artery and vein segmentation method, system, equipment and medium for fundus color illumination
CN113749690B (en) * 2021-09-24 2024-01-30 无锡祥生医疗科技股份有限公司 Blood vessel blood flow measuring method, device and storage medium
CN114037663A (en) * 2021-10-27 2022-02-11 北京医准智能科技有限公司 Blood vessel segmentation method, device and computer readable medium
CN114359280B (en) * 2022-03-18 2022-06-03 武汉楚精灵医疗科技有限公司 Gastric mucosa image boundary quantification method, device, terminal and storage medium
CN116843612A (en) * 2023-04-20 2023-10-03 西南医科大学附属医院 Image processing method for diabetic retinopathy diagnosis
CN117351009B (en) * 2023-12-04 2024-02-23 江苏富翰医疗产业发展有限公司 Method and system for generating blood oxygen saturation data based on multispectral fundus image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN109166124A (en) * 2018-11-20 2019-01-08 中南大学 A kind of retinal vascular morphologies quantization method based on connected region
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LANYAN XUE, XINRONG CAO, JIAWEN LIN, SHAOHUA ZHENG, LUN YU: "Artery/vein automatic classification in retinal images and vessel diameter measurement", Chinese Journal of Scientific Instrument (Yiqi Yibiao Xuebao), vol. 38, no. 9, September 2017, pages 2307-2316, ISSN: 0254-3087, DOI: 10.19650/j.cnki.cjsi.2017.09.027 *
OLAF RONNEBERGER, PHILIPP FISCHER, THOMAS BROX: "U-Net: Convolutional Networks for Biomedical Image Segmentation", Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), LNCS vol. 9351, Springer, 2015, pages 234-241, ISBN: 978-3-319-24573-7, DOI: 10.1007/978-3-319-24574-4_28 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989643A (en) * 2021-10-26 2022-01-28 萱闱(北京)生物科技有限公司 Pipeline state detection method and device, medium and computing equipment
CN113989643B (en) * 2021-10-26 2023-09-01 萱闱(北京)生物科技有限公司 Pipeline state detection method, device, medium and computing equipment
CN114359284A (en) * 2022-03-18 2022-04-15 北京鹰瞳科技发展股份有限公司 Method for analyzing retinal fundus images and related products
WO2023186133A1 (en) * 2022-04-02 2023-10-05 武汉联影智融医疗科技有限公司 System and method for puncture path planning
WO2023240319A1 (en) * 2022-06-16 2023-12-21 Eyetelligence Limited Fundus image analysis system
CN115294126A (en) * 2022-10-08 2022-11-04 南京诺源医疗器械有限公司 Intelligent cancer cell identification method for pathological image
CN115294126B (en) * 2022-10-08 2022-12-16 南京诺源医疗器械有限公司 Cancer cell intelligent identification method for pathological image
CN115690124A (en) * 2022-11-02 2023-02-03 中国科学院苏州生物医学工程技术研究所 High-precision single-frame fundus fluorography image leakage area segmentation method and system
CN116206114B (en) * 2023-04-28 2023-08-01 成都云栈科技有限公司 Portrait extraction method and device under complex background
CN116206114A (en) * 2023-04-28 2023-06-02 成都云栈科技有限公司 Portrait extraction method and device under complex background
CN116309585B (en) * 2023-05-22 2023-08-22 山东大学 Method and system for identifying breast ultrasound image target area based on multitask learning
CN116309585A (en) * 2023-05-22 2023-06-23 山东大学 Method and system for identifying breast ultrasound image target area based on multitask learning
CN116473673B (en) * 2023-06-20 2024-02-27 浙江华诺康科技有限公司 Path planning method, device, system and storage medium for endoscope
CN116473673A (en) * 2023-06-20 2023-07-25 浙江华诺康科技有限公司 Path planning method, device, system and storage medium for endoscope
CN116824116A (en) * 2023-06-26 2023-09-29 爱尔眼科医院集团股份有限公司 Super wide angle fundus image identification method, device, equipment and storage medium
CN116524548B (en) * 2023-07-03 2023-12-26 中国科学院自动化研究所 Vascular structure information extraction method, device and storage medium
CN116524548A (en) * 2023-07-03 2023-08-01 中国科学院自动化研究所 Vascular structure information extraction method, device and storage medium
CN117038088A (en) * 2023-10-09 2023-11-10 北京鹰瞳科技发展股份有限公司 Method, device, equipment and medium for determining onset of diabetic retinopathy
CN117038088B (en) * 2023-10-09 2024-02-02 北京鹰瞳科技发展股份有限公司 Method, device, equipment and medium for determining onset of diabetic retinopathy

Also Published As

Publication number Publication date
CN111340789A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
WO2021169128A1 (en) Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium
WO2020199593A1 (en) Image segmentation model training method and apparatus, image segmentation method and apparatus, and device and medium
WO2021003821A1 (en) Cell detection method and apparatus for a glomerular pathological section image, and device
CN111222361B (en) Method and system for analyzing characteristic data of change of blood vessel of retina in hypertension
Zhang et al. Automated semantic segmentation of red blood cells for sickle cell disease
Chetoui et al. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets
WO2022063198A1 (en) Lung image processing method, apparatus and device
Hassan et al. Joint segmentation and quantification of chorioretinal biomarkers in optical coherence tomography scans: A deep learning approach
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
WO2022142030A1 (en) Method and system for measuring lesion features of hypertensive retinopathy
CN107292835B (en) Method and device for automatically vectorizing retinal blood vessels of fundus image
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN111882566B (en) Blood vessel segmentation method, device, equipment and storage medium for retina image
CN111860169B (en) Skin analysis method, device, storage medium and electronic equipment
Gui et al. Optic disc localization algorithm based on improved corner detection
TWI719587B (en) Pre-processing method and storage device for quantitative analysis of fundus image
Dhane et al. Spectral clustering for unsupervised segmentation of lower extremity wound beds using optical images
CN111797901A (en) Retinal artery and vein classification method and device based on topological structure estimation
Sun et al. Automatic detection of retinal regions using fully convolutional networks for diagnosis of abnormal maculae in optical coherence tomography images
CN111797900A (en) Arteriovenous classification method and device of OCT-A image
Mao et al. Deep learning with skip connection attention for choroid layer segmentation in oct images
Wang et al. Automatic vessel crossing and bifurcation detection based on multi-attention network vessel segmentation and directed graph search
CN114359284A (en) Method for analyzing retinal fundus images and related products
Tania et al. Computational complexity of image processing algorithms for an intelligent mobile enabled tongue diagnosis scheme
Chen et al. Stage diagnosis for Chronic Kidney Disease based on ultrasonography

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20922271

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20922271

Country of ref document: EP

Kind code of ref document: A1