WO2023137904A1 - Eye fundus image-based lesion detection method and apparatus, and device and storage medium


Info

Publication number
WO2023137904A1
WO2023137904A1 (PCT/CN2022/090164, CN2022090164W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fundus
network
feature
screening
Prior art date
Application number
PCT/CN2022/090164
Other languages
French (fr)
Chinese (zh)
Inventor
郑喜民
王天誉
舒畅
陈又新
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2023137904A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present application relates to the fields of image recognition and digital healthcare. Disclosed are a lesion detection method and apparatus based on eye fundus images, a computer device, and a storage medium. The method comprises: acquiring an eye fundus screening image, wherein the eye fundus screening image comprises a scan image and an angiography image; inputting the scan image into a first network of a dual-channel network of a deep learning network model, and acquiring a first image feature obtained by the first network, wherein the first image feature comprises an eye fundus curvature and a reflectivity; inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and acquiring a second image feature obtained by the second network, wherein the second image feature comprises a blood vessel density and an eye fundus tissue thickness; fusing the first image feature with the second image feature to obtain a fused feature; and matching, according to the fused feature, a macular lesion grade corresponding to the image. The present application can improve the accuracy of recognizing the grade of macular lesions of the eye fundus.

Description

Lesion detection method, apparatus, device and storage medium based on fundus images
This application claims priority to the Chinese patent application No. 202210073516.3, titled "Lesion detection method, apparatus, device and storage medium based on fundus images", filed with the China Patent Office on January 21, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the fields of image recognition and digital healthcare, and in particular to a lesion detection method, apparatus, computer device and storage medium based on fundus images.
Background Art
Age-related macular degeneration (AMD) is an eye disease that severely impairs the vision of the elderly. At present, AMD is detected through in-depth and time-consuming analysis of fundus images based on color fundus photographs. The inventors have realized that the fundus images used in current research are color fundus images and optical coherence tomography (OCT) images, whose fundus features are independent of each other, so the degree of fundus macular lesions cannot be accurately identified.
Technical Problem
The main purpose of the present application is to provide a lesion detection method, apparatus, computer device and storage medium based on fundus images, aiming to solve the current problem of low accuracy in identifying the degree of fundus macular lesions.
Technical Solution
In order to achieve the above purpose, the present application proposes a lesion detection method based on fundus images, comprising:
acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
inputting the scan image into a first network of a dual-channel network of a deep learning network model, and acquiring a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity;
inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and acquiring a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness;
fusing the first image feature with the second image feature to obtain a fused feature;
matching, according to the fused feature, a macular lesion grade corresponding to the image.
The present application also provides a lesion detection apparatus based on fundus images, comprising:
a fundus image module, configured to acquire a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
a first network module, configured to input the scan image into a first network of a dual-channel network of a deep learning network model, and acquire a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity;
a second network module, configured to input the angiography image into a second network of the dual-channel network of the deep learning network model, and acquire a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness;
a feature fusion module, configured to fuse the first image feature with the second image feature to obtain a fused feature;
a grade matching module, configured to match, according to the fused feature, a macular lesion grade corresponding to the image.
The present application also provides a computer device, comprising a memory and a processor, the memory storing a computer program, and the processor implementing a lesion detection method based on fundus images when executing the computer program, wherein the lesion detection method based on fundus images comprises:
acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
inputting the scan image into a first network of a dual-channel network of a deep learning network model, and acquiring a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity;
inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and acquiring a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness;
fusing the first image feature with the second image feature to obtain a fused feature;
matching, according to the fused feature, a macular lesion grade corresponding to the image.
The present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, a lesion detection method based on fundus images is implemented, wherein the lesion detection method based on fundus images comprises:
acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
inputting the scan image into a first network of a dual-channel network of a deep learning network model, and acquiring a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity;
inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and acquiring a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness;
fusing the first image feature with the second image feature to obtain a fused feature;
matching, according to the fused feature, a macular lesion grade corresponding to the image.
Beneficial Effects
The embodiments of the present application provide a lesion detection method based on an optical coherence tomography image and an optical coherence tomography angiography image, which can improve the precision and accuracy of macular lesion identification and detection.
Description of Drawings
Fig. 1 is a schematic flowchart of an embodiment of the lesion detection method based on fundus images of the present application;
Fig. 2 is a schematic structural diagram of an embodiment of the lesion detection apparatus based on fundus images of the present application;
Fig. 3 is a schematic structural block diagram of an embodiment of the computer device of the present application.
The realization of the purpose, the functional features and the advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Best Mode for Carrying Out the Invention
In order to make the purpose, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
The embodiments of the present application may acquire and process data related to medical diagnosis based on an artificial-intelligence deep learning model. Artificial intelligence (AI) is the theory, method, technology and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
Referring to Fig. 1, an embodiment of the present application provides a lesion detection method based on fundus images, comprising steps S10 to S50, each of which is described in detail below. The method may be carried out by an application program with the corresponding built-in function, for example a "macular lesion detection and recognition from fundus images" function, which enables the application program to recognize from a fundus screening image whether a macular lesion has occurred and to grade the macular lesion. The application program may run on a terminal device or on a cloud server; accordingly, the lesion detection method based on fundus images may also be understood as being performed by the terminal device or the cloud server running the application program.
S10. Acquire a fundus screening image, the fundus screening image comprising a scan image and an angiography image.
This embodiment is applied to a scenario in which image recognition technology is used to detect whether macular degeneration has occurred in the fundus and to determine the grade of the macular lesion. First, a fundus screening image is acquired. The fundus screening image comprises a scan image and an angiography image, the scan image being an optical coherence tomography (OCT) image and the angiography image being an optical coherence tomography angiography (OCTA) image. Both the scan image and the angiography image contain features for determining macular lesions, but the two images emphasize different features; based on both the scan image and the angiography image, fundus macular lesions can be identified and detected more accurately.
S20. Input the scan image into a first network of a dual-channel network of a deep learning network model, and acquire a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity.
In this embodiment, after the fundus screening image comprising the scan image and the angiography image has been acquired, a deep learning network model learns, from the scan image and the angiography image, the image features used to discriminate macular lesions. The deep learning network is a dual-channel network comprising a first network and a second network, which receive the scan image and the angiography image respectively. The first network can extract, from the scan image, the features for judging macular lesions: the scan image is input into the first network of the dual-channel network of the deep learning network model, and the first image feature obtained by the first network is acquired. The first image feature comprises a fundus curvature and a reflectivity, that is, the extracted first image feature is converted into fundus parameters, including the fundus curvature and the reflectivity.
S30. Input the angiography image into a second network of the dual-channel network of the deep learning network model, and acquire a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness.
In this embodiment, after the scan image has been input into the first network of the dual-channel network and the first image feature (comprising the fundus curvature and the reflectivity) has been acquired, the angiography image is input into the second network of the dual-channel network of the deep learning network model, and the second image feature obtained by the second network is acquired. The second image feature comprises a blood vessel density and a fundus tissue thickness: the second network can extract, from the angiography image, the features for judging macular lesions, and the extracted second image feature is converted into fundus parameters, including the blood vessel density and the fundus tissue thickness.
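A minimal sketch of such a dual-channel model is shown below, assuming PyTorch, grayscale inputs and a simple convolutional encoder for each branch; the layer sizes, the two-parameter regression heads and the names (`DualChannelNet`, `conv_branch`) are illustrative assumptions, not the architecture defined in the application.

```python
import torch
import torch.nn as nn

def conv_branch(in_ch: int) -> nn.Sequential:
    # Small convolutional encoder used by each channel of the dual-channel network.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualChannelNet(nn.Module):
    """First network: OCT scan -> (fundus curvature, reflectivity).
    Second network: OCTA angiography -> (blood vessel density, tissue thickness)."""
    def __init__(self):
        super().__init__()
        self.first_net = conv_branch(1)      # OCT scan branch (grayscale assumed)
        self.second_net = conv_branch(1)     # OCTA angiography branch
        self.first_head = nn.Linear(32, 2)   # fundus curvature, reflectivity
        self.second_head = nn.Linear(32, 2)  # blood vessel density, tissue thickness

    def forward(self, oct_img, octa_img):
        first_feat = self.first_head(self.first_net(oct_img))
        second_feat = self.second_head(self.second_net(octa_img))
        return first_feat, second_feat

# Example: one 256x256 OCT/OCTA pair.
model = DualChannelNet()
first_feat, second_feat = model(torch.randn(1, 1, 256, 256),
                                torch.randn(1, 1, 256, 256))
```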
S40. Fuse the first image feature with the second image feature to obtain a fused feature.
In this embodiment, after the scan image has been input into the first network of the dual-channel network to acquire the first image feature (comprising the fundus curvature and the reflectivity), and the angiography image has been input into the second network of the dual-channel network to acquire the second image feature (comprising the blood vessel density and the fundus tissue thickness), the fundus features recognized from the two different images are combined in order to judge macular lesions more accurately. That is, the first image feature is fused with the second image feature to obtain a fused feature. The fused feature contains the values of the fundus parameters used to determine macular lesions, including the fundus curvature, the reflectivity, the blood vessel density and the fundus tissue thickness, and these fundus parameters are obtained by fusing the first image feature with the second image feature: the second image feature also contains a fundus curvature and a reflectivity, and the fundus curvature and reflectivity of the first image feature are corrected based on the fundus curvature and reflectivity in the second image feature, thereby obtaining the fused feature.
S50. Match, according to the fused feature, the macular lesion grade corresponding to the image.
In this embodiment, after the first image feature has been fused with the second image feature to obtain the fused feature, the macular lesion grade corresponding to the image is matched according to the fused feature: the image features representing macular lesions extracted from the optical coherence tomography image and the optical coherence tomography angiography image are fused, and the corresponding macular lesion grade is then matched according to the fused feature. In one implementation, each macular lesion grade is preset with a numerical range for each fundus parameter; when the values of the parameters in the fused feature fall within the fundus parameter ranges of a given grade, that macular lesion grade is matched to the fused feature, thereby improving the precision and accuracy of macular lesion identification and detection.
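A minimal sketch of this range-based grade matching follows, assuming the preset ranges are kept in plain Python dictionaries; the grade names and all numeric ranges below are invented for illustration and are not values from the application.

```python
# Hypothetical preset ranges per grade: parameter -> (low, high).
GRADE_RANGES = {
    "grade_1": {"curvature": (0.0, 0.3), "reflectivity": (0.6, 1.0),
                "vessel_density": (0.5, 1.0), "tissue_thickness": (250, 350)},
    "grade_2": {"curvature": (0.3, 0.6), "reflectivity": (0.3, 0.6),
                "vessel_density": (0.3, 0.5), "tissue_thickness": (180, 250)},
}

def match_grade(fused):
    # Return the first grade whose parameter ranges all contain the fused values.
    for grade, ranges in GRADE_RANGES.items():
        if all(lo <= fused[name] <= hi for name, (lo, hi) in ranges.items()):
            return grade
    return None

fused = {"curvature": 0.41, "reflectivity": 0.5,
         "vessel_density": 0.45, "tissue_thickness": 230}
print(match_grade(fused))  # -> "grade_2" for these illustrative numbers
```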
This embodiment provides a method for identifying and grading fundus macular lesions based on an optical coherence tomography image and an optical coherence tomography angiography image, combined with a deep dual-channel neural network algorithm. First, a fundus screening image is acquired, the fundus screening image comprising a scan image and an angiography image. The scan image is then input into the first network of the dual-channel network of the deep learning network model, and the first image feature obtained by the first network is acquired, the first image feature comprising a fundus curvature and a reflectivity. The angiography image is input into the second network of the dual-channel network of the deep learning network model, and the second image feature obtained by the second network is acquired, the second image feature comprising a blood vessel density and a fundus tissue thickness. The first image feature is fused with the second image feature to obtain a fused feature, which contains the values of the fundus parameters for determining macular lesions, including the fundus curvature, the reflectivity, the blood vessel density and the fundus tissue thickness; these parameters are obtained by fusing the first image feature with the second image feature, that is, the second image feature also contains a fundus curvature and a reflectivity, and the fundus curvature and reflectivity of the first image feature are corrected based on those in the second image feature to obtain the fused feature. Finally, the macular lesion grade corresponding to the image is matched according to the fused feature, thereby improving the precision and accuracy of macular lesion identification and detection.
In one embodiment, matching, according to the fused feature, the macular lesion grade corresponding to the image comprises:
calculating a loss degree between the fused feature and a standard feature, and matching the macular lesion grade according to the loss degree.
In this embodiment, in the process of matching the macular lesion grade corresponding to the image according to the fused feature, the fused feature is compared with a standard feature, the loss degree between the fused feature and the standard feature is calculated, and the macular lesion grade is then matched according to the loss degree. In the fused feature, the first image feature and the second image feature contribute with equal importance to the loss degree with respect to the standard feature. When calculating the loss degree between the fused feature and the standard feature, a first loss degree between the first image feature and the standard feature and a second loss degree between the second image feature and the standard feature may be calculated first, and the first loss degree and the second loss degree are then summed to obtain the loss degree between the fused feature and the standard feature, so that the features of the two images are calculated and matched accurately, improving the accuracy of macular lesion recognition.
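A sketch of this equally weighted two-branch loss is given below, assuming both branches output the full four-parameter vector and the "standard feature" is a reference parameter vector per grade; the mean-squared-error choice and the helper names are assumptions, since the application does not name a specific loss function.

```python
import torch

def fused_loss(first_feat, second_feat, standard_feat):
    """Sum of the first-branch and second-branch loss degrees against the
    standard feature, with equal weight for both branches."""
    first_loss = torch.mean((first_feat - standard_feat) ** 2)
    second_loss = torch.mean((second_feat - standard_feat) ** 2)
    return first_loss + second_loss

def match_grade_by_loss(first_feat, second_feat, standards):
    # Pick the grade whose standard feature yields the smallest loss degree.
    return min(standards,
               key=lambda g: fused_loss(first_feat, second_feat, standards[g]).item())

standards = {"grade_1": torch.tensor([0.2, 0.8, 0.6, 300.0]),
             "grade_2": torch.tensor([0.4, 0.5, 0.4, 220.0])}
grade = match_grade_by_loss(torch.tensor([0.42, 0.52, 0.45, 230.0]),
                            torch.tensor([0.40, 0.50, 0.43, 228.0]), standards)
```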
In one embodiment, fusing the first image feature with the second image feature to obtain the fused feature comprises:
acquiring a first fundus curvature and a first reflectivity of the first image feature, and acquiring a second fundus curvature and a second reflectivity of the second image feature;
acquiring a first blood vessel density and a first fundus tissue thickness of the first image feature, and acquiring a second blood vessel density and a second fundus tissue thickness of the second image feature;
acquiring fusion ratios of the respective features in the first image feature and the second image feature;
fusing, according to the fusion ratios, the first fundus curvature and the first reflectivity, the second fundus curvature and the second reflectivity, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness, to obtain the fused feature.
In this embodiment, in the process of fusing the first image feature with the second image feature to obtain the fused feature, the first fundus curvature and the first reflectivity of the first image feature and the second fundus curvature and the second reflectivity of the second image feature are first acquired, along with the first blood vessel density and the first fundus tissue thickness of the first image feature and the second blood vessel density and the second fundus tissue thickness of the second image feature. The fusion ratios of the respective features in the first image feature and the second image feature are then acquired, that is, the proportion of each feature in the first image feature and the proportion of each feature in the second image feature; the second image feature also contains a fundus curvature and a reflectivity, and the first image feature also contains a blood vessel density and a fundus tissue thickness. The first fundus curvature and the first reflectivity, the second fundus curvature and the second reflectivity, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness are then fused according to the fusion ratios to obtain the fused feature, combining the fundus features recognized from the two different images and thereby improving the accuracy of the fused feature.
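A minimal sketch of this ratio-based fusion is shown below, assuming each branch outputs all four fundus parameters and that the per-parameter fusion ratios are weights between 0 and 1 for the first branch; the concrete ratio and parameter values are illustrative assumptions only.

```python
import torch

# Assumed parameter order for both branches:
# [fundus curvature, reflectivity, vessel density, tissue thickness].
FUSION_RATIOS = torch.tensor([0.6, 0.6, 0.3, 0.3])  # weight of the first (OCT) branch

def fuse_features(first_feat, second_feat, ratios=FUSION_RATIOS):
    """Per-parameter weighted fusion of the two branches' estimates; the second
    branch's curvature/reflectivity correct the first branch's, and vice versa
    for vessel density and tissue thickness."""
    return ratios * first_feat + (1.0 - ratios) * second_feat

first_feat = torch.tensor([0.42, 0.85, 0.55, 280.0])
second_feat = torch.tensor([0.40, 0.80, 0.50, 300.0])
fused_feat = fuse_features(first_feat, second_feat)
```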
In one embodiment, after acquiring the fundus screening image, the method comprises:
acquiring a first redundant image of the region surrounding the fundus in the fundus screening image;
deleting the first redundant image from the fundus screening image to obtain a candidate fundus screening image;
acquiring an image of the foveal region of the macula from the candidate fundus screening image;
randomly flipping the image of the foveal region in the candidate fundus screening image to obtain a target fundus screening image.
In this embodiment, after the fundus screening image is acquired, the fundus screening image needs to be enhanced, including enhancement of both the scan image and the angiography image. Specifically, a first redundant image of the region surrounding the fundus in the fundus screening image is acquired and deleted from the fundus screening image to obtain a candidate fundus screening image; an image of the foveal region of the macula is then acquired from the candidate fundus screening image, and the image of the foveal region is randomly flipped within the candidate fundus screening image to enhance the image, obtaining a target fundus screening image and thereby improving the accuracy of fundus screening image recognition.
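A sketch of this crop-and-flip augmentation with NumPy follows; the crop margin, the foveal bounding box and the helper name are purely illustrative assumptions, since the application does not give concrete coordinates.

```python
import numpy as np

def preprocess_screening_image(img: np.ndarray, margin: int = 32,
                               fovea_box=(96, 96, 160, 160), rng=None) -> np.ndarray:
    """Remove the redundant border around the fundus, then randomly flip the
    foveal region of the macula inside the cropped image."""
    rng = rng or np.random.default_rng()
    # Delete the first redundant image (the peripheral border) -> candidate image.
    candidate = img[margin:img.shape[0] - margin, margin:img.shape[1] - margin].copy()
    # Randomly flip the foveal region image inside the candidate image.
    y0, x0, y1, x1 = fovea_box
    fovea = candidate[y0:y1, x0:x1]
    if rng.random() < 0.5:
        fovea = fovea[:, ::-1]  # horizontal flip
    if rng.random() < 0.5:
        fovea = fovea[::-1, :]  # vertical flip
    candidate[y0:y1, x0:x1] = fovea
    return candidate  # target fundus screening image

target = preprocess_screening_image(np.random.rand(320, 320))
```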
In one embodiment, after randomly flipping the image of the foveal region in the candidate fundus screening image to obtain the target fundus screening image, the method further comprises:
invoking regional dynamic histogram equalization to process the target fundus screening image to obtain an equalized target fundus screening image;
invoking a Laplacian filter to enhance the equalized target fundus screening image, so as to enhance the features of fundus curvature, reflectivity, blood vessel density and the thickness of each tissue layer in the target fundus screening image, and obtain an enhanced target fundus screening image.
In this embodiment, after the image of the foveal region has been randomly flipped in the candidate fundus screening image to obtain the target fundus screening image, one round of image enhancement has already been applied to the fundus screening image. Further, since the contrast of the image after the first enhancement may still be low, a second enhancement is applied. Specifically, regional dynamic histogram equalization is invoked to process the target fundus screening image to obtain an equalized target fundus screening image, and a Laplacian filter is then invoked to enhance the equalized target fundus screening image, so as to enhance the features of fundus curvature, reflectivity, blood vessel density and the thickness of each tissue layer in the target fundus screening image, obtaining an enhanced target fundus screening image. Key details of the fundus screening image, such as the fundus curvature, the reflectivity, the blood vessel density and the thickness of each tissue layer, are thus further enhanced, improving the accuracy of fundus screening image recognition.
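A sketch of this two-stage enhancement using OpenCV follows, with CLAHE standing in for the "regional dynamic histogram equalization" step and Laplacian sharpening for the filter step; the tile size, clip limit and 8-bit single-channel input are illustrative assumptions.

```python
import cv2
import numpy as np

def enhance_target_image(img_u8: np.ndarray) -> np.ndarray:
    """Regional histogram equalization followed by Laplacian-based sharpening
    of an 8-bit single-channel fundus image."""
    # Stage 1: regional (tile-based) histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(img_u8)
    # Stage 2: Laplacian filtering; subtracting the Laplacian sharpens edges
    # such as vessel boundaries and tissue-layer interfaces.
    lap = cv2.Laplacian(equalized, cv2.CV_16S, ksize=3)
    sharpened = equalized.astype(np.int16) - lap
    return np.clip(sharpened, 0, 255).astype(np.uint8)

enhanced = enhance_target_image((np.random.rand(256, 256) * 255).astype(np.uint8))
```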
In one embodiment, before inputting the scan image into the first network of the dual-channel network of the deep learning network model, the method further comprises:
constructing a dual-channel network having two branch networks, the two branch networks comprising the first network and the second network;
constructing several branches of the first network and several branches of the second network respectively, and configuring a filter function for each branch, wherein information can be transferred between the branches of the first network and the branches of the second network;
enhancing the feature information of the scan image or the angiography image based on the filter functions.
In this embodiment, before the scan image is input into the first network of the dual-channel network of the deep learning network model, a dual-channel network having two branch networks is constructed, the two branch networks comprising the first network and the second network, so that the features of the different images are extracted by two different branch networks. Several branches of the first network and several branches of the second network are then constructed respectively, and a filter function is configured for each branch. Information can be transferred between the branches of the first network and the branches of the second network, so that the first image feature and the second image feature can be fused better, and the feature information of the scan image or the angiography image is then enhanced based on the filter functions, improving the accuracy of image recognition.
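A sketch of two branch networks that exchange information at an intermediate stage is given below, assuming PyTorch; modelling the exchange as adding a 1x1-convolution projection of the other branch's feature map is one plausible reading of "information transfer between branches", not the specific mechanism defined in the application.

```python
import torch
import torch.nn as nn

class CrossBranchBlock(nn.Module):
    """One stage of the two branches with information transfer between them."""
    def __init__(self, ch: int):
        super().__init__()
        self.conv_a = nn.Conv2d(ch, ch, 3, padding=1)   # branch of the first network
        self.conv_b = nn.Conv2d(ch, ch, 3, padding=1)   # branch of the second network
        self.exchange_ab = nn.Conv2d(ch, ch, 1)         # filter passing info a -> b
        self.exchange_ba = nn.Conv2d(ch, ch, 1)         # filter passing info b -> a

    def forward(self, feat_a, feat_b):
        a = torch.relu(self.conv_a(feat_a))
        b = torch.relu(self.conv_b(feat_b))
        # Each branch receives a filtered copy of the other branch's features.
        return a + self.exchange_ba(b), b + self.exchange_ab(a)

block = CrossBranchBlock(16)
out_a, out_b = block(torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64))
```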
In one embodiment, after constructing the several branches of the first network and the several branches of the second network respectively and configuring the filter function for each branch, the method further comprises:
acquiring the information channels contained in the filter corresponding to the filter function;
acquiring a first channel parameter of a first information channel and a second channel parameter of a second information channel;
determining whether the first channel parameter and the second channel parameter are constrained to grow centripetally in the parameter hyperspace;
if so, merging the second information channel into the first information channel so as to prune the first network or the second network, wherein the second information channel is an information channel located after the first information channel.
In this embodiment, after the several branches of the first network and the several branches of the second network have been constructed and a filter function has been configured for each branch, the information channels contained in the filter corresponding to the filter function are acquired; each filter contains multiple channels through which feature data enters and is passed on. The first channel parameter of the first information channel and the second channel parameter of the second information channel are then acquired, where the first information channel and the second information channel represent two different channels in the filter, the first information channel is located before the second information channel, and the first information channel passes the computed feature data on to the second information channel. Further, it is determined whether the first channel parameter and the second channel parameter are constrained to grow centripetally in the parameter hyperspace; if so, the second information channel is merged into the first information channel so as to prune the first network or the second network, where the second information channel is the information channel located after the first information channel. When multiple filters are constrained to grow centripetally in the parameter hyperspace, although they begin to produce increasingly similar information, the information passed on by the corresponding input channels of the next layer is still fully used, so the channels of the next layer can be fused, thereby reducing memory usage, power consumption and the number of floating-point operations required, improving computational efficiency, and improving the efficiency of lesion screening on fundus images.
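A sketch of this similarity-driven channel merge follows, assuming PyTorch convolution weights; using a cosine-similarity threshold as the test that two channels' parameters have grown "centripetally" toward each other is an illustrative proxy, not the criterion defined in the application.

```python
import torch
import torch.nn.functional as F

def merge_similar_channels(conv_weight, next_weight, i, j, threshold=0.99):
    """If output channels i (first) and j (second, j > i) of a conv layer have
    near-identical parameters, remove channel j and fold its contribution in
    the next layer into channel i (a simple pruning step)."""
    wi, wj = conv_weight[i].flatten(), conv_weight[j].flatten()
    if F.cosine_similarity(wi, wj, dim=0) >= threshold:
        keep = [c for c in range(conv_weight.shape[0]) if c != j]
        pruned = conv_weight[keep].clone()
        next_pruned = next_weight[:, keep].clone()
        # The next layer still receives channel j's contribution through channel i.
        next_pruned[:, keep.index(i)] += next_weight[:, j]
        return pruned, next_pruned
    return conv_weight, next_weight

w = torch.randn(8, 4, 3, 3)          # current layer: 8 output channels
w[3] = w[1]                          # make channels 1 and 3 identical
w_next = torch.randn(16, 8, 3, 3)    # next layer consumes those 8 channels
w_pruned, w_next_pruned = merge_similar_channels(w, w_next, i=1, j=3)
```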
Referring to Fig. 2, the present application also provides a lesion detection apparatus based on fundus images, comprising:
a fundus image module 10, configured to acquire a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
a first network module 20, configured to input the scan image into a first network of a dual-channel network, and acquire a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity;
a second network module 30, configured to input the angiography image into a second network of the dual-channel network, and acquire a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness;
a feature fusion module 40, configured to fuse the first image feature with the second image feature to obtain a fused feature;
a grade matching module 50, configured to match, according to the fused feature, a macular lesion grade corresponding to the image.
As described above, it can be understood that each component of the lesion detection apparatus based on fundus images proposed in the present application can realize the functions of any one of the above lesion detection methods based on fundus images.
In one embodiment, matching, according to the fused feature, the macular lesion grade corresponding to the image comprises:
calculating a loss degree between the fused feature and a standard feature, and matching the macular lesion grade according to the loss degree.
In one embodiment, fusing the first image feature with the second image feature to obtain the fused feature comprises:
acquiring a first fundus curvature and a first reflectivity of the first image feature, and acquiring a second fundus curvature and a second reflectivity of the second image feature;
acquiring a first blood vessel density and a first fundus tissue thickness of the first image feature, and acquiring a second blood vessel density and a second fundus tissue thickness of the second image feature;
acquiring fusion ratios of the respective features in the first image feature and the second image feature;
fusing, according to the fusion ratios, the first fundus curvature and the first reflectivity, the second fundus curvature and the second reflectivity, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness, to obtain the fused feature.
In one embodiment, after acquiring the fundus screening image, the method comprises:
acquiring a first redundant image of the region surrounding the fundus in the fundus screening image;
deleting the first redundant image from the fundus screening image to obtain a candidate fundus screening image;
acquiring an image of the foveal region of the macula from the candidate fundus screening image;
randomly flipping the image of the foveal region in the candidate fundus screening image to obtain a target fundus screening image.
In one embodiment, after randomly flipping the image of the foveal region in the candidate fundus screening image to obtain the target fundus screening image, the method further comprises:
invoking regional dynamic histogram equalization to process the target fundus screening image to obtain an equalized target fundus screening image;
invoking a Laplacian filter to enhance the equalized target fundus screening image, so as to enhance the features of fundus curvature, reflectivity, blood vessel density and the thickness of each tissue layer in the target fundus screening image, and obtain an enhanced target fundus screening image.
In one embodiment, before inputting the scan image into the first network of the dual-channel network of the deep learning network model, the method further comprises:
constructing a dual-channel network having two branch networks, the two branch networks comprising the first network and the second network;
constructing several branches of the first network and several branches of the second network respectively, and configuring a filter function for each branch, wherein information can be transferred between the branches of the first network and the branches of the second network;
enhancing the feature information of the scan image or the angiography image based on the filter functions.
In one embodiment, after constructing the several branches of the first network and the several branches of the second network respectively and configuring the filter function for each branch, the method further comprises:
acquiring the information channels contained in the filter corresponding to the filter function;
acquiring a first channel parameter of a first information channel and a second channel parameter of a second information channel;
determining whether the first channel parameter and the second channel parameter are constrained to grow centripetally in the parameter hyperspace;
if so, merging the second information channel into the first information channel so as to prune the first network or the second network, wherein the second information channel is an information channel located after the first information channel.
Referring to Fig. 3, an embodiment of the present application also provides a computer device, which may be a mobile terminal, and whose internal structure may be as shown in Fig. 3. The computer device comprises a processor, a memory, a network interface, a display device and an input device connected through a system bus. The network interface of the computer device is used to communicate with an external terminal through a network connection. The input device of the computer device is used to receive user input. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device comprises a storage medium, which stores an operating system, a computer program and a database. The database of the computer device is used to store data. When the computer program is executed by the processor, a lesion detection method based on fundus images is implemented.
The processor executes the above lesion detection method based on fundus images, comprising: acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image; inputting the scan image into a first network of a dual-channel network, and acquiring a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity; inputting the angiography image into a second network of the dual-channel network, and acquiring a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness; fusing the first image feature with the second image feature to obtain a fused feature; and matching, according to the fused feature, a macular lesion grade corresponding to the image.
The computer device provides a method for identifying and grading fundus macular lesions based on an optical coherence tomography image and an optical coherence tomography angiography image, combined with a deep dual-channel neural network algorithm. First, a fundus screening image is acquired, the fundus screening image comprising a scan image and an angiography image. The scan image is input into the first network of the dual-channel network, and the first image feature obtained by the first network is acquired, the first image feature comprising a fundus curvature and a reflectivity. The angiography image is input into the second network of the dual-channel network, and the second image feature obtained by the second network is acquired, the second image feature comprising a blood vessel density and a fundus tissue thickness. The first image feature is fused with the second image feature to obtain a fused feature, which contains the values of the fundus parameters for determining macular lesions, including the fundus curvature, the reflectivity, the blood vessel density and the fundus tissue thickness; these parameters are obtained by fusing the first image feature with the second image feature, that is, the second image feature also contains a fundus curvature and a reflectivity, and the fundus curvature and reflectivity of the first image feature are corrected based on those in the second image feature to obtain the fused feature. Finally, the macular lesion grade corresponding to the image is matched according to the fused feature, thereby improving the precision and accuracy of macular lesion identification and detection.
An embodiment of the present application also provides a computer-readable storage medium, which may be non-volatile or volatile, and on which a computer program is stored. When the computer program is executed by the processor, a lesion detection method based on fundus images is implemented, comprising the steps of: acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image; inputting the scan image into a first network of a dual-channel network, and acquiring a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity; inputting the angiography image into a second network of the dual-channel network, and acquiring a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness; fusing the first image feature with the second image feature to obtain a fused feature; and matching, according to the fused feature, a macular lesion grade corresponding to the image.
The computer-readable storage medium provides a method for identifying and grading fundus macular lesions based on an optical coherence tomography image and an optical coherence tomography angiography image, combined with a deep dual-channel neural network algorithm. First, a fundus screening image is acquired, the fundus screening image comprising a scan image and an angiography image. The scan image is input into the first network of the dual-channel network, and the first image feature obtained by the first network is acquired, the first image feature comprising a fundus curvature and a reflectivity. The angiography image is input into the second network of the dual-channel network, and the second image feature obtained by the second network is acquired, the second image feature comprising a blood vessel density and a fundus tissue thickness. The first image feature is fused with the second image feature to obtain a fused feature, which contains the values of the fundus parameters for determining macular lesions, including the fundus curvature, the reflectivity, the blood vessel density and the fundus tissue thickness; these parameters are obtained by fusing the first image feature with the second image feature, that is, the second image feature also contains a fundus curvature and a reflectivity, and the fundus curvature and reflectivity of the first image feature are corrected based on those in the second image feature to obtain the fused feature. Finally, the macular lesion grade corresponding to the image is matched according to the fused feature, thereby improving the precision and accuracy of macular lesion identification and detection.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, the computer program can include the processes of the embodiments of the above methods.
Any reference to memory, storage, database or other media provided in the present application and used in the embodiments may include non-volatile or volatile memory.
Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Claims (19)

  1. A lesion detection method based on fundus images, comprising:
    acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
    inputting the scan image into a first network of a dual-channel network of a deep learning network model, and acquiring a first image feature obtained by the first network, the first image feature comprising a fundus curvature and a reflectivity;
    inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and acquiring a second image feature obtained by the second network, the second image feature comprising a blood vessel density and a fundus tissue thickness;
    fusing the first image feature with the second image feature to obtain a fused feature;
    matching, according to the fused feature, a macular lesion grade corresponding to the image.
  2. The lesion detection method based on fundus images according to claim 1, wherein matching, according to the fused feature, the macular lesion grade corresponding to the image comprises:
    calculating a loss degree between the fused feature and a standard feature, and matching the macular lesion grade according to the loss degree.
  3. The lesion detection method based on fundus images according to claim 1, wherein fusing the first image feature with the second image feature to obtain the fused feature comprises:
    acquiring a first fundus curvature and a first reflectivity of the first image feature, and acquiring a second fundus curvature and a second reflectivity of the second image feature;
    acquiring a first blood vessel density and a first fundus tissue thickness of the first image feature, and acquiring a second blood vessel density and a second fundus tissue thickness of the second image feature;
    acquiring fusion ratios of the respective features in the first image feature and the second image feature;
    fusing, according to the fusion ratios, the first fundus curvature and the first reflectivity, the second fundus curvature and the second reflectivity, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness, to obtain the fused feature.
  4. The lesion detection method based on fundus images according to claim 1, wherein after acquiring the fundus screening image, the method comprises:
    acquiring a first redundant image of the region surrounding the fundus in the fundus screening image;
    deleting the first redundant image from the fundus screening image to obtain a candidate fundus screening image;
    acquiring a foveal region image of the macula from the candidate fundus screening image; and
    randomly flipping the foveal region image within the candidate fundus screening image to obtain a target fundus screening image.
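The preprocessing of claim 4 (removing the redundant periphery, then randomly flipping the foveal region) might look roughly like the following sketch; the crop margin and the fovea bounding box are assumed values, not parameters given in the application.

```python
# Crop away an assumed redundant margin around the fundus, then randomly flip the
# foveal (macular) patch inside the cropped image.
import numpy as np

rng = np.random.default_rng(0)

def preprocess(fundus_img: np.ndarray, fovea_box=(80, 80, 144, 144)) -> np.ndarray:
    h, w = fundus_img.shape[:2]
    margin = int(0.1 * min(h, w))                      # assumed peripheral redundancy
    candidate = fundus_img[margin:h - margin, margin:w - margin].copy()
    y0, x0, y1, x1 = fovea_box                         # assumed foveal region in the candidate image
    fovea = candidate[y0:y1, x0:x1]
    if rng.random() < 0.5:                             # random horizontal flip of the fovea patch
        fovea = fovea[:, ::-1]
    candidate[y0:y1, x0:x1] = fovea
    return candidate                                   # target fundus screening image

target = preprocess(rng.random((256, 256, 3)))
print(target.shape)
```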
  5. The lesion detection method based on fundus images according to claim 4, wherein after randomly flipping the foveal region image within the candidate fundus screening image to obtain the target fundus screening image, the method further comprises:
    processing the target fundus screening image by invoking regional dynamic histogram equalization to obtain an equalized target fundus screening image; and
    enhancing the equalized target fundus screening image by invoking a Laplacian filter, so as to enhance the features of fundus curvature, reflectivity, blood vessel density, and the thickness of each tissue layer in the target fundus screening image, to obtain an enhanced target fundus screening image.
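A rough illustration of claim 5 using OpenCV, with CLAHE standing in for regional dynamic histogram equalization and a Laplacian-based sharpening step; the tile size and sharpening weight are assumptions.

```python
# Region-wise equalization followed by Laplacian sharpening of a grayscale fundus image.
import cv2
import numpy as np

def enhance(target_img_gray: np.ndarray) -> np.ndarray:
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(target_img_gray)                 # region-wise histogram equalization
    lap = cv2.Laplacian(equalized, cv2.CV_32F, ksize=3)      # second-derivative detail map
    sharpened = equalized.astype(np.float32) - 0.7 * lap     # unsharp-style enhancement (0.7 assumed)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

img = (np.random.default_rng(1).random((256, 256)) * 255).astype(np.uint8)
out = enhance(img)
print(out.dtype, out.shape)
```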
  6. The lesion detection method based on fundus images according to claim 1, wherein before inputting the scan image into the first network of the dual-channel network of the deep learning network model, the method further comprises:
    constructing a dual-channel network having two branch networks, the two branch networks comprising the first network and the second network;
    separately constructing several branches of the first network and several branches of the second network, and configuring a filter function for each branch, wherein information can be transferred between the branches of the first network and the second network; and
    enhancing feature information of the scan image or the angiography image based on the filter function.
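One way to picture the cross-communicating branches of claim 6 is the following PyTorch sketch, in which each branch's filter output is shared with the other branch; the number of branches and the choice of a convolution as the filter function are assumptions.

```python
# One stage of a two-branch network whose branches exchange information after filtering.
import torch
import torch.nn as nn

class TwoBranchStage(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.filter_a = nn.Conv2d(channels, channels, 3, padding=1)  # filter function, branch A
        self.filter_b = nn.Conv2d(channels, channels, 3, padding=1)  # filter function, branch B

    def forward(self, feat_a, feat_b):
        out_a = torch.relu(self.filter_a(feat_a))
        out_b = torch.relu(self.filter_b(feat_b))
        # Cross-branch information transfer: each branch also receives the other's output.
        return out_a + out_b, out_b + out_a

stage = TwoBranchStage()
a, b = stage(torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64))
print(a.shape, b.shape)
```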
  7. The lesion detection method based on fundus images according to claim 6, wherein after separately constructing the several branches of the first network and the several branches of the second network and configuring the filter function for each branch, the method further comprises:
    acquiring information channels contained in a filter corresponding to the filter function;
    acquiring a first channel parameter of a first information channel and a second channel parameter of a second information channel;
    judging whether the first channel parameter and the second channel parameter are constrained to grow centripetally in a parameter hyperspace; and
    if so, merging the second information channel into the first information channel to prune the first network or the second network, wherein the second information channel is an information channel located after the first information channel.
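The channel-pruning step of claim 7 is reminiscent of centripetally constrained training, in which channels driven close together in parameter space are merged; the sketch below uses a simple distance threshold as the merging criterion, which is an assumption rather than the application's rule.

```python
# If two filter channels have been constrained to grow toward each other in parameter
# hyperspace, merge the later channel into the earlier one (pruning one channel).
import numpy as np

def maybe_merge_channels(first_params: np.ndarray, second_params: np.ndarray,
                         tol: float = 1e-2):
    """Return merged channel parameters if the centripetal condition holds, else None."""
    grown_centripetally = np.linalg.norm(first_params - second_params) < tol
    if grown_centripetally:
        # Fold the second (later) information channel into the first information channel.
        return 0.5 * (first_params + second_params)
    return None

w1 = np.array([0.501, -0.299, 0.103])
w2 = np.array([0.499, -0.301, 0.097])
print(maybe_merge_channels(w1, w2))  # nearly identical channels -> merged parameters
```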
  8. A lesion detection apparatus based on fundus images, comprising:
    a fundus image module, configured to acquire a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
    a first network module, configured to input the scan image into a first network of a dual-channel network of a deep learning network model and obtain a first image feature produced by the first network, the first image feature comprising fundus curvature and reflectivity;
    a second network module, configured to input the angiography image into a second network of the dual-channel network of the deep learning network model and obtain a second image feature produced by the second network, the second image feature comprising blood vessel density and fundus tissue thickness;
    a feature fusion module, configured to fuse the first image feature with the second image feature to obtain a fused feature; and
    a grade matching module, configured to match, according to the fused feature, a macular lesion grade corresponding to the image.
  9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements a lesion detection method based on fundus images,
    wherein the lesion detection method based on fundus images comprises:
    acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
    inputting the scan image into a first network of a dual-channel network of a deep learning network model, and obtaining a first image feature produced by the first network, the first image feature comprising fundus curvature and reflectivity;
    inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and obtaining a second image feature produced by the second network, the second image feature comprising blood vessel density and fundus tissue thickness;
    fusing the first image feature with the second image feature to obtain a fused feature; and
    matching, according to the fused feature, a macular lesion grade corresponding to the image.
  10. The computer device according to claim 9, wherein matching, according to the fused feature, the macular lesion grade corresponding to the image comprises:
    calculating a loss degree between the fused feature and a standard feature, and matching the macular lesion grade according to the loss degree.
  11. The computer device according to claim 9, wherein fusing the first image feature with the second image feature to obtain the fused feature comprises:
    acquiring a first fundus curvature and a first reflectivity of the first image feature, and acquiring a second fundus curvature and a second reflectivity of the second image feature;
    acquiring a first blood vessel density and a first fundus tissue thickness of the first image feature, and acquiring a second blood vessel density and a second fundus tissue thickness of the second image feature;
    acquiring a fusion ratio of each feature in the first image feature and the second image feature; and
    fusing, according to the fusion ratio, the first fundus curvature and the first reflectivity, the second fundus curvature and the second reflectivity, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness, to obtain the fused feature.
  12. The computer device according to claim 9, wherein after acquiring the fundus screening image, the method comprises:
    acquiring a first redundant image of the region surrounding the fundus in the fundus screening image;
    deleting the first redundant image from the fundus screening image to obtain a candidate fundus screening image;
    acquiring a foveal region image of the macula from the candidate fundus screening image; and
    randomly flipping the foveal region image within the candidate fundus screening image to obtain a target fundus screening image.
  13. The computer device according to claim 12, wherein after randomly flipping the foveal region image within the candidate fundus screening image to obtain the target fundus screening image, the method further comprises:
    processing the target fundus screening image by invoking regional dynamic histogram equalization to obtain an equalized target fundus screening image; and
    enhancing the equalized target fundus screening image by invoking a Laplacian filter, so as to enhance the features of fundus curvature, reflectivity, blood vessel density, and the thickness of each tissue layer in the target fundus screening image, to obtain an enhanced target fundus screening image.
  14. The computer device according to claim 9, wherein before inputting the scan image into the first network of the dual-channel network of the deep learning network model, the method further comprises:
    constructing a dual-channel network having two branch networks, the two branch networks comprising the first network and the second network;
    separately constructing several branches of the first network and several branches of the second network, and configuring a filter function for each branch, wherein information can be transferred between the branches of the first network and the second network; and
    enhancing feature information of the scan image or the angiography image based on the filter function.
  15. The computer device according to claim 14, wherein after separately constructing the several branches of the first network and the several branches of the second network and configuring the filter function for each branch, the method further comprises:
    acquiring information channels contained in a filter corresponding to the filter function;
    acquiring a first channel parameter of a first information channel and a second channel parameter of a second information channel;
    judging whether the first channel parameter and the second channel parameter are constrained to grow centripetally in a parameter hyperspace; and
    if so, merging the second information channel into the first information channel to prune the first network or the second network, wherein the second information channel is an information channel located after the first information channel.
  16. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements a lesion detection method based on fundus images, the method comprising:
    acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image;
    inputting the scan image into a first network of a dual-channel network of a deep learning network model, and obtaining a first image feature produced by the first network, the first image feature comprising fundus curvature and reflectivity;
    inputting the angiography image into a second network of the dual-channel network of the deep learning network model, and obtaining a second image feature produced by the second network, the second image feature comprising blood vessel density and fundus tissue thickness;
    fusing the first image feature with the second image feature to obtain a fused feature; and
    matching, according to the fused feature, a macular lesion grade corresponding to the image.
  17. The computer-readable storage medium according to claim 16, wherein matching, according to the fused feature, the macular lesion grade corresponding to the image comprises:
    calculating a loss degree between the fused feature and a standard feature, and matching the macular lesion grade according to the loss degree.
  18. The computer-readable storage medium according to claim 16, wherein fusing the first image feature with the second image feature to obtain the fused feature comprises:
    acquiring a first fundus curvature and a first reflectivity of the first image feature, and acquiring a second fundus curvature and a second reflectivity of the second image feature;
    acquiring a first blood vessel density and a first fundus tissue thickness of the first image feature, and acquiring a second blood vessel density and a second fundus tissue thickness of the second image feature;
    acquiring a fusion ratio of each feature in the first image feature and the second image feature; and
    fusing, according to the fusion ratio, the first fundus curvature and the first reflectivity, the second fundus curvature and the second reflectivity, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness, to obtain the fused feature.
  19. The computer-readable storage medium according to claim 16, wherein after acquiring the fundus screening image, the method comprises:
    acquiring a first redundant image of the region surrounding the fundus in the fundus screening image;
    deleting the first redundant image from the fundus screening image to obtain a candidate fundus screening image;
    acquiring a foveal region image of the macula from the candidate fundus screening image; and
    randomly flipping the foveal region image within the candidate fundus screening image to obtain a target fundus screening image.
  20. The computer-readable storage medium according to claim 16, wherein after randomly flipping the foveal region image within the candidate fundus screening image to obtain the target fundus screening image, the method further comprises:
    processing the target fundus screening image by invoking regional dynamic histogram equalization to obtain an equalized target fundus screening image; and
    enhancing the equalized target fundus screening image by invoking a Laplacian filter, so as to enhance the features of fundus curvature, reflectivity, blood vessel density, and the thickness of each tissue layer in the target fundus screening image, to obtain an enhanced target fundus screening image.
PCT/CN2022/090164 2022-01-21 2022-04-29 Eye fundus image-based lesion detection method and apparatus, and device and storage medium WO2023137904A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210073516.3 2022-01-21
CN202210073516.3A CN114494734A (en) 2022-01-21 2022-01-21 Method, device and equipment for detecting pathological changes based on fundus image and storage medium

Publications (1)

Publication Number Publication Date
WO2023137904A1 (en)

Family

ID=81472176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/090164 WO2023137904A1 (en) 2022-01-21 2022-04-29 Eye fundus image-based lesion detection method and apparatus, and device and storage medium

Country Status (2)

Country Link
CN (1) CN114494734A (en)
WO (1) WO2023137904A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018045363A1 (en) * 2016-09-02 2018-03-08 Gargeya Rishab Screening method for automated detection of vision-degenerative diseases from color fundus images
CN110766656B (en) * 2019-09-19 2023-08-11 平安科技(深圳)有限公司 Method, device, equipment and storage medium for screening fundus macular region abnormality
CN112446860B (en) * 2020-11-23 2024-04-16 中山大学中山眼科中心 Automatic screening method for diabetic macular edema based on transfer learning
CN112883962B (en) * 2021-01-29 2023-07-18 北京百度网讯科技有限公司 Fundus image recognition method, fundus image recognition apparatus, fundus image recognition device, fundus image recognition program, and fundus image recognition program
CN112991343B (en) * 2021-04-30 2021-08-13 北京至真互联网技术有限公司 Method, device and equipment for identifying and detecting macular region of fundus image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180204111A1 (en) * 2013-02-28 2018-07-19 Z Advanced Computing, Inc. System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform
CN110021009A (en) * 2019-01-18 2019-07-16 平安科技(深圳)有限公司 A kind of method, apparatus and storage medium for assessing eye fundus image quality
CN112884729A (en) * 2021-02-04 2021-06-01 北京邮电大学 Auxiliary diagnosis method and device for fundus diseases based on bimodal deep learning
CN113011485A (en) * 2021-03-12 2021-06-22 北京邮电大学 Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device

Also Published As

Publication number Publication date
CN114494734A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
US20220076420A1 (en) Retinopathy recognition system
EP3373798B1 (en) Method and system for classifying optic nerve head
WO2018201632A1 (en) Artificial neural network and system for recognizing lesion in fundus image
CN110309849A (en) Blood-vessel image processing method, device, equipment and storage medium
JP2021536057A (en) Lesion detection and positioning methods, devices, devices, and storage media for medical images
CN107563434B (en) Brain MRI image classification method and device based on three-dimensional convolutional neural network
CN110263755B (en) Eye ground image recognition model training method, eye ground image recognition method and eye ground image recognition device
JP2019192215A (en) 3d quantitative analysis of retinal layers with deep learning
WO2021190656A1 (en) Method and apparatus for localizing center of macula in fundus image, server, and storage medium
WO2022166399A1 (en) Fundus oculi disease auxiliary diagnosis method and apparatus based on bimodal deep learning
CN112017185A (en) Focus segmentation method, device and storage medium
CN111340087A (en) Image recognition method, image recognition device, computer-readable storage medium and computer equipment
CN110415245A (en) Optical data determines method, model training method and equipment
CN117788407A (en) Training method for glaucoma image feature extraction based on artificial neural network
CN117058676B (en) Blood vessel segmentation method, device and system based on fundus examination image
CN116030042B (en) Diagnostic device, method, equipment and storage medium for doctor's diagnosis
WO2023137904A1 (en) Eye fundus image-based lesion detection method and apparatus, and device and storage medium
CN116452571A (en) Image recognition method based on deep neural network
WO2021139446A1 (en) Anti-vascular endothelial growth factor (vegf) curative effect prediction apparatus and method
Shilpa et al. An Ensemble Approach to Detect Diabetic Retinopathy using the Residual Contrast Limited Adaptable Histogram Equalization Method
Pham et al. Generative Adversarial Networks for Retinal Image Enhancement with Pathological Information
Gambhir et al. Severity classification of diabetic retinopathy using ShuffleNet
Hirota et al. Automatic Estimation of Objective Cyclodeviation in Fundus Image Using Machine Learning
Prasad et al. Reduction of False Microaneurysms in Retinal Fundus Images using Fuzzy C-Means Clustering in terms NLM Anisotropic Filter
Kaur et al. Survey on various Feature Detection and Feature Selection Methods using Retinopathy Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921335

Country of ref document: EP

Kind code of ref document: A1