WO2023137904A1 - Fundus image-based lesion detection method and apparatus, device, and storage medium - Google Patents

Fundus image-based lesion detection method and apparatus, device, and storage medium

Info

Publication number
WO2023137904A1
WO2023137904A1 (PCT/CN2022/090164)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fundus
network
feature
screening
Prior art date
Application number
PCT/CN2022/090164
Other languages
English (en)
Chinese (zh)
Inventor
郑喜民
王天誉
舒畅
陈又新
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2023137904A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Definitions

  • The present application relates to the fields of image recognition and digital healthcare, and in particular to a fundus image-based lesion detection method and apparatus, a computer device, and a storage medium.
  • Age-related macular degeneration (AMD) is an eye disease that seriously affects the vision of the elderly.
  • At present, AMD is detected through in-depth and time-consuming analysis of fundus images based on color fundus photographs.
  • Optical coherence tomography (OCT) provides an alternative imaging basis for such analysis.
  • The main purpose of this application is to provide a fundus image-based lesion detection method and apparatus, a computer device, and a storage medium, aiming to solve the current problem of low accuracy in identifying the degree of macular lesions of the fundus.
  • To this end, this application proposes a lesion detection method based on fundus images, including:
  • acquiring a fundus screening image, the fundus screening image including a scan image and an angiographic image;
  • inputting the scan image into the first network in the dual-channel network of the deep learning network model, and obtaining the first image features extracted by the first network, the first image features including fundus curvature and reflectivity;
  • inputting the angiographic image into the second network in the dual-channel network of the deep learning network model, and obtaining the second image features extracted by the second network, the second image features including blood vessel density and fundus tissue thickness;
  • fusing the first image features with the second image features to obtain fusion features; and
  • matching the macular lesion grade corresponding to the image according to the fusion features.
  • the present application also provides a lesion detection device based on fundus images, including:
  • the fundus image module is used to obtain fundus screening images, and the fundus screening images include scanning images and contrast images;
  • the first network module is used to input the scanned image into the first network in the dual-channel network of the deep learning network model, and obtain the first image features obtained by the first network;
  • the first image features include fundus curvature and reflectivity;
  • the second network module is configured to input the contrast image into the second network in the dual-channel network of the deep learning network model, and obtain the second image features obtained by the second network;
  • the second image features include blood vessel density and fundus tissue thickness;
  • a feature fusion module, configured to fuse the first image features with the second image features to obtain fusion features; and
  • a grade matching module configured to match the macular lesion grade corresponding to the image according to the fusion feature.
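The dual-channel architecture summarized above can be sketched as follows. This is a minimal illustration in PyTorch (the application does not name a framework); the layer sizes, the four-value feature heads, and the number of lesion grades are all assumptions made for the sketch, not disclosed values.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One channel of the dual-channel network: a small CNN feature extractor."""
    def __init__(self, out_features: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, out_features)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class DualChannelNet(nn.Module):
    """First network consumes the OCT scan image, second network the OCTA
    contrast image; their feature vectors are fused (concatenated here) and
    mapped to a macular lesion grade."""
    def __init__(self, num_grades: int = 4):
        super().__init__()
        self.first_net = Branch()    # e.g. fundus curvature, reflectivity, ...
        self.second_net = Branch()   # e.g. vessel density, tissue thickness, ...
        self.classifier = nn.Linear(8, num_grades)

    def forward(self, scan, contrast):
        fused = torch.cat([self.first_net(scan), self.second_net(contrast)], dim=1)
        return self.classifier(fused)

model = DualChannelNet()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```

Concatenation is used here as the simplest fusion rule; the description later corrects overlapping parameters between the two branches, which a fuller implementation would add.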
  • The present application also provides a computer device, including a memory and a processor, where the memory stores a computer program and the processor, when executing the computer program, implements a fundus image-based lesion detection method, the method including:
  • acquiring a fundus screening image, the fundus screening image including a scan image and an angiographic image;
  • inputting the scan image into the first network in the dual-channel network of the deep learning network model, and obtaining the first image features extracted by the first network, the first image features including fundus curvature and reflectivity;
  • inputting the angiographic image into the second network in the dual-channel network of the deep learning network model, and obtaining the second image features extracted by the second network, the second image features including blood vessel density and fundus tissue thickness;
  • fusing the first image features with the second image features to obtain fusion features; and
  • matching the macular lesion grade corresponding to the image according to the fusion features.
  • The present application also provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements a fundus image-based lesion detection method, the method including:
  • acquiring a fundus screening image, the fundus screening image including a scan image and an angiographic image;
  • inputting the scan image into the first network in the dual-channel network of the deep learning network model, and obtaining the first image features extracted by the first network, the first image features including fundus curvature and reflectivity;
  • inputting the angiographic image into the second network in the dual-channel network of the deep learning network model, and obtaining the second image features extracted by the second network, the second image features including blood vessel density and fundus tissue thickness;
  • fusing the first image features with the second image features to obtain fusion features; and
  • matching the macular lesion grade corresponding to the image according to the fusion features.
  • The embodiments of the present application provide a method based on optical coherence tomography images and optical coherence tomography angiography images, which can improve the precision and accuracy of macular degeneration identification and detection.
  • Fig. 1 is a schematic flow chart of an embodiment of the lesion detection method based on fundus images of the present application
  • FIG. 2 is a schematic structural diagram of an embodiment of a lesion detection device based on fundus images of the present application
  • Fig. 3 is a schematic structural block diagram of an embodiment of the computer equipment of the present application.
  • The embodiments of this application involve artificial intelligence (AI) technology: data related to medical diagnosis can be acquired and processed based on an artificial intelligence deep learning model.
  • Artificial intelligence uses digital computers, or machines controlled by digital computers, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
  • the embodiment of the present application provides a fundus image-based lesion detection method, including steps S10-S50.
  • the detailed description of each step of the fundus image-based lesion detection method is as follows.
  • The fundus image-based lesion detection method can be completed by an application program with a corresponding function, such as a built-in "fundus image macular degeneration detection and identification" function. Through this function, the application program can determine from fundus screening images whether macular degeneration occurs and grade the macular degeneration.
  • The application program can run on a terminal device or on a cloud server; therefore, the fundus image-based lesion detection method can also be understood as being completed by the terminal device or the cloud server running the application program.
  • the fundus screening image is obtained.
  • The fundus screening image includes a scan image and a contrast image.
  • The scan image is an optical coherence tomography (OCT) image, and the contrast image is an optical coherence tomography angiography (OCTA) image.
  • the deep learning network model is used to learn the image features of the scan image and the contrast image to distinguish the macular lesion.
  • the deep learning network is a dual-channel network, including a first network and a second network.
  • The scan image and the contrast image are received by two different networks. The first network extracts features for judging maculopathy from the scan image: the scan image is input to the first network of the dual-channel network of the deep learning network model, and the first image features obtained by the first network are acquired; the first image features include fundus curvature and reflectivity.
  • The contrast image is then input to the second network of the dual-channel network of the deep learning network model, and the second image features obtained by the second network are acquired; the second image features include blood vessel density and fundus tissue thickness. The second network extracts features for judging macular degeneration from the contrast image and converts the extracted second image features into fundus parameters, including blood vessel density and fundus tissue thickness.
  • the first image features obtained by the first network are obtained;
  • The first image features include fundus curvature and reflectivity, and the contrast image is input to the second network of the dual-channel network to obtain the second image features extracted by the second network.
  • The second image features include blood vessel density and fundus tissue thickness. To judge macular degeneration more accurately, the fundus features recognized from the two different images are combined; that is, the first image features are fused with the second image features to obtain fusion features.
  • the fusion features include the values of various fundus parameters for determining macular degeneration, including fundus curvature, reflectivity, blood vessel density, and fundus tissue thickness, etc.
  • The multiple fundus parameters are obtained by fusing the first image features and the second image features; that is, the second image features also include fundus curvature and reflectance, and the fundus curvature and reflectance of the first image features are corrected based on the fundus curvature and reflectance in the second image features.
  • the grade of macular degeneration corresponding to the image is matched according to the fusion feature, the image features representing macular degeneration extracted from the optical coherence tomography image and the optical coherence tomography angiography image are fused, and the corresponding macular degeneration grade is matched according to the fusion feature.
  • Each grade of macular degeneration is preset with a numerical range for each fundus parameter. When the fundus parameters of the fusion feature fall within the ranges of a grade, the fusion feature is matched to that grade of macular degeneration, thereby improving the precision and accuracy of macular degeneration identification and detection.
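The range-based grade matching described above can be sketched as follows. The grade names and the numerical ranges are hypothetical placeholders, since the application does not disclose concrete values.

```python
# Each grade is preset with a numeric range per fundus parameter; the fused
# feature matches the grade whose ranges contain all of its parameter values.
GRADE_RANGES = {
    "normal":   {"curvature": (0.0, 0.2), "reflectivity": (0.7, 1.0),
                 "vessel_density": (0.5, 1.0), "tissue_thickness": (250, 350)},
    "early":    {"curvature": (0.2, 0.4), "reflectivity": (0.5, 0.7),
                 "vessel_density": (0.35, 0.5), "tissue_thickness": (200, 250)},
    "advanced": {"curvature": (0.4, 1.0), "reflectivity": (0.0, 0.5),
                 "vessel_density": (0.0, 0.35), "tissue_thickness": (100, 200)},
}

def match_grade(fused_feature):
    """Return the first grade whose preset ranges contain every parameter,
    or None if no grade matches."""
    for grade, ranges in GRADE_RANGES.items():
        if all(lo <= fused_feature[name] <= hi
               for name, (lo, hi) in ranges.items()):
            return grade
    return None

print(match_grade({"curvature": 0.3, "reflectivity": 0.6,
                   "vessel_density": 0.4, "tissue_thickness": 230}))  # early
```

A real system would learn or calibrate these ranges from labeled OCT/OCTA data rather than hard-code them.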
  • This embodiment provides a method that, based on optical coherence tomography images and optical coherence tomography angiography images, uses a deep dual-channel neural network algorithm to identify and grade fundus macular lesions.
  • a fundus screening image is obtained.
  • The fundus screening image includes a scan image and a contrast image. The scan image is input to the first network in the dual-channel network of the deep learning network model, and the first image features obtained by the first network are acquired; the first image features include fundus curvature and reflectance. The contrast image is input to the second network in the dual-channel network of the model, and the second image features obtained by the second network are acquired; the second image features include blood vessel density and fundus tissue thickness.
  • The first image features are fused with the second image features to obtain fusion features: the fundus curvature and reflectance in the first image features are corrected to obtain the fusion features. The macular degeneration grade corresponding to the image is then matched according to the fusion features, thereby improving the precision and accuracy of macular degeneration identification and detection.
  • Matching the maculopathy grade corresponding to the image according to the fusion feature includes:
  • In the process of matching the macular lesion grade corresponding to the image according to the fusion feature, the fusion feature is compared with a standard feature, the degree of loss between the fusion feature and the standard feature is calculated, and the macular lesion grade is matched according to that degree of loss. Within the fusion feature, the losses of the first image features and of the second image features relative to the standard feature are of equal importance, so the first degree of loss and the second degree of loss are added to obtain the degree of loss between the fusion feature and the standard feature. The features of the two images are thereby calculated and matched accurately, improving the accuracy of macular lesion recognition.
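The equal-importance loss computation above can be sketched as follows. The per-feature loss function (mean absolute difference) and all feature names and values are illustrative assumptions; the application only specifies that the two loss terms carry equal weight and are summed.

```python
# Compare each image's features with a standard (reference) feature and sum
# the two loss terms with equal weight, as described in the text.
def feature_loss(feat, standard):
    """Mean absolute difference between a feature dict and the standard."""
    return sum(abs(feat[k] - standard[k]) for k in standard) / len(standard)

def total_loss(first_feat, second_feat, standard_first, standard_second):
    # Equal importance: the first and second degrees of loss are simply added.
    return (feature_loss(first_feat, standard_first)
            + feature_loss(second_feat, standard_second))

loss = total_loss(
    {"curvature": 0.3, "reflectivity": 0.6},            # first image features
    {"vessel_density": 0.4, "tissue_thickness": 230.0}, # second image features
    {"curvature": 0.25, "reflectivity": 0.65},          # standard, first
    {"vessel_density": 0.45, "tissue_thickness": 225.0} # standard, second
)
```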
  • the fusion of the first image feature and the second image feature to obtain the fusion feature includes:
  • the first fundus curvature and the first reflectance, the second fundus curvature and the second reflectance, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness are fused to obtain fusion features.
  • In the process of fusing the first image features with the second image features to obtain the fused features: first obtain the first fundus curvature and first reflectance of the first image features, and the second fundus curvature and second reflectance of the second image features; then obtain the first blood vessel density and first fundus tissue thickness of the first image features, and the second blood vessel density and second fundus tissue thickness of the second image features; finally obtain the fusion ratio of each feature in the first image features and in the second image features, that is, the proportion contributed by each feature of the first image features and by each feature of the second image features.
  • the second image feature also includes fundus curvature and reflectivity
  • the first image feature also includes blood vessel density and fundus tissue thickness.
  • the first fundus curvature and the first reflectance, the second fundus curvature and the second reflectance, the first blood vessel density and the first fundus tissue thickness, the second blood vessel density and the second fundus tissue thickness are fused to obtain fusion features, and the recognized fundus features of two different images are combined to improve the accuracy of the fused features.
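The ratio-weighted fusion described above can be sketched as follows. The specific fusion ratios are assumptions made for this sketch, since the application does not specify their values.

```python
# Each fundus parameter appears in both image features; the fused value is a
# ratio-weighted combination, with the complementary ratio going to the
# second image feature. Ratios below are illustrative only.
FUSION_RATIO_FIRST = {"curvature": 0.7, "reflectivity": 0.7,
                      "vessel_density": 0.3, "tissue_thickness": 0.3}

def fuse_features(first, second):
    """Fuse first and second image features using per-parameter ratios."""
    return {name: FUSION_RATIO_FIRST[name] * first[name]
                  + (1 - FUSION_RATIO_FIRST[name]) * second[name]
            for name in FUSION_RATIO_FIRST}

fused = fuse_features(
    {"curvature": 0.30, "reflectivity": 0.60,
     "vessel_density": 0.40, "tissue_thickness": 230.0},  # from OCT scan
    {"curvature": 0.34, "reflectivity": 0.56,
     "vessel_density": 0.44, "tissue_thickness": 226.0},  # from OCTA contrast
)
```

Weighting the OCT branch more heavily for curvature/reflectivity and the OCTA branch for vessel density/thickness mirrors which modality the text says primarily measures each parameter.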
  • After the fundus screening image is acquired, the method further includes:
  • the image of the fovea area is randomly flipped to obtain a target fundus screening image.
  • the fundus screening image needs to be enhanced, including the enhancement of the scanned image and the contrast image.
  • A first redundant image of the peripheral part of the fundus in the fundus screening image is obtained, and the first redundant image is deleted from the fundus screening image to obtain a candidate fundus screening image. The foveal region image of the macula is then obtained from the candidate fundus screening image, and the foveal region image is randomly flipped within the candidate fundus screening image.
  • This enhancement yields the target fundus screening image, thereby improving the recognition accuracy of the fundus screening image.
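The crop-and-flip preprocessing above can be sketched with NumPy. The peripheral margin, the foveal region size, and locating the fovea at the image centre are illustrative assumptions; in practice the foveal region would be localized from the image content.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(image, margin=32, fovea_size=128):
    """Delete the redundant peripheral region, extract the foveal region,
    and randomly flip it as augmentation."""
    # 1. Remove the first redundant image: crop away the peripheral margin.
    candidate = image[margin:-margin, margin:-margin]
    # 2. Take the foveal region (assumed centred here for the sketch).
    cy, cx = candidate.shape[0] // 2, candidate.shape[1] // 2
    half = fovea_size // 2
    fovea = candidate[cy - half:cy + half, cx - half:cx + half]
    # 3. Random horizontal / vertical flips.
    if rng.random() < 0.5:
        fovea = np.fliplr(fovea)
    if rng.random() < 0.5:
        fovea = np.flipud(fovea)
    return fovea

target = preprocess(np.zeros((512, 512), dtype=np.uint8))
print(target.shape)  # (128, 128)
```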
  • After the foveal region image is randomly flipped in the candidate fundus screening image and the target fundus screening image is obtained, an image enhancement is performed on the target fundus screening image. Because the contrast of the image after the first enhancement may be low, the image is enhanced a second time.
  • The target fundus screening image is processed by regional dynamic histogram equalization to obtain an equalized target fundus screening image, and a Laplacian filter is then applied to enhance the equalized image.
  • The enhanced target fundus screening image is thus obtained, which further enhances the key details in the fundus screening image, such as fundus curvature, reflectivity, blood vessel density and the thickness of each tissue layer, improving the accuracy of fundus screening image recognition.
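The second enhancement pass (regional histogram equalization followed by Laplacian sharpening) can be sketched with NumPy alone. The tile size and the 4-neighbour Laplacian kernel with wrap-around borders are assumptions of this sketch; a production pipeline would more likely use a library routine such as contrast-limited adaptive histogram equalization (CLAHE).

```python
import numpy as np

def regional_equalize(img, tile=64):
    """Histogram-equalize each tile of an 8-bit grayscale image independently
    (a simple stand-in for regional dynamic histogram equalization)."""
    out = img.copy()
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            block = img[y:y + tile, x:x + tile]
            hist = np.bincount(block.ravel(), minlength=256)
            cdf = hist.cumsum()
            cdf = cdf * 255 / cdf[-1]        # normalized cumulative histogram
            out[y:y + tile, x:x + tile] = cdf[block].astype(np.uint8)
    return out

def laplacian_sharpen(img):
    """Sharpen by subtracting a 4-neighbour Laplacian response
    (np.roll gives periodic borders, acceptable for a sketch)."""
    f = img.astype(np.int32)
    lap = (-4 * f
           + np.roll(f, 1, 0) + np.roll(f, -1, 0)
           + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    return np.clip(f - lap, 0, 255).astype(np.uint8)

noise = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
enhanced = laplacian_sharpen(regional_equalize(noise))
print(enhanced.shape, enhanced.dtype)  # (256, 256) uint8
```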
  • Before the scan image is input to the first network in the dual-channel network of the deep learning network model, the method further comprises:
  • the feature information of the scan image or the contrast image is enhanced based on the filter function.
  • Before the scan image is input to the first network of the dual-channel network of the deep learning network model, a dual-channel network with two branch networks is constructed.
  • The two branch networks include the first network and the second network, and the features of the different images are extracted through the two branch networks. Several branches of the first network and several branches of the second network are then constructed, and a filter function is configured for each branch. Information can be transferred between the branches of the first network and the second network, so that the first image features and the second image features can be better integrated.
  • The feature information of the scan image or the angiographic image is then enhanced based on the filter functions, thereby improving the accuracy of image recognition.
  • merging the first information channel into the second information channel to prune the first network or the second network, where the second information channel is an information channel following the first information channel.
  • Each filter contains multiple channels for the access and transmission of feature data. The first channel parameters of the first information channel and the second channel parameters of the second information channel are obtained, where the first information channel and the second information channel are two different channels in the filter, the first information channel is located before the second information channel, and the first information channel transmits its calculated feature data to the second information channel. It is then judged whether the first channel parameters and the second channel parameters are constrained to grow centripetally in the parameter hyperspace; if so, the first information channel is merged into the second information channel to prune the first network or the second network, where the second information channel follows the first information channel. When multiple filters are constrained to grow centripetally in the parameter hyperspace, although they begin to generate more and more similar information, the information transmitted by the corresponding input channels of the next layer is still fully used, and the redundant channels can therefore be pruned.
  • the present application also provides a lesion detection device based on fundus images, including:
  • the fundus image module 10 is used to obtain fundus screening images, and the fundus screening images include scanned images and contrast images;
  • the first network module 20 is configured to input the scanned image to the first network in the dual-channel network, and obtain the first image features obtained by the first network; the first image features include fundus curvature and reflectivity;
  • the second network module 30 is configured to input the contrast image into the second network in the dual-channel network, and obtain the second image features obtained by the second network; the second image features include blood vessel density and fundus tissue thickness;
  • a feature fusion module 40, configured to fuse the first image features with the second image features to obtain fusion features; and
  • a grade matching module 50 configured to match the macular lesion grade corresponding to the image according to the fusion feature.
  • each component of the fundus image-based lesion detection device proposed in this application can realize the function of any one of the above-mentioned fundus image-based lesion detection methods.
  • Matching the maculopathy grade corresponding to the image according to the fusion feature includes:
  • the fusion of the first image feature and the second image feature to obtain the fusion feature includes:
  • the first fundus curvature and the first reflectance, the second fundus curvature and the second reflectance, the first blood vessel density and the first fundus tissue thickness, and the second blood vessel density and the second fundus tissue thickness are fused to obtain fusion features.
  • After the fundus screening image is acquired, the method further includes:
  • the image of the fovea area is randomly flipped to obtain a target fundus screening image.
  • Before the scan image is input to the first network in the dual-channel network of the deep learning network model, the method further comprises:
  • the feature information of the scan image or the contrast image is enhanced based on the filter function.
  • merging the first information channel into the second information channel to prune the first network or the second network, where the second information channel is an information channel following the first information channel.
  • an embodiment of the present application also provides a computer device, which may be a mobile terminal, and its internal structure may be as shown in FIG. 3 .
  • The computer device includes a processor, a memory, a network interface, a display device, and an input device connected through a system bus.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the input device of the computer equipment is used for receiving user's input.
  • The processor of the computer device is designed to provide computing and control capabilities.
  • The memory of the computer device includes a storage medium.
  • the storage medium stores an operating system, computer programs and databases.
  • the database of the computer device is used to store data.
  • The aforementioned processor executes the aforementioned fundus image-based lesion detection method, comprising: acquiring a fundus screening image, the fundus screening image including a scan image and a contrast image; inputting the scan image to a first network in a dual-channel network, and obtaining first image features extracted by the first network, the first image features including fundus curvature and reflectivity; inputting the contrast image to a second network in the dual-channel network, and obtaining second image features extracted by the second network, the second image features including blood vessel density and fundus tissue thickness; fusing the first image features with the second image features to obtain fusion features; and matching the macular lesion grade corresponding to the image according to the fusion features.
  • The computer device provides a method that, based on optical coherence tomography images and optical coherence tomography angiography images, uses a deep dual-channel neural network algorithm to identify and grade fundus macular lesions.
  • a fundus screening image is obtained.
  • the second image features include blood vessel density and fundus tissue thickness
  • the first image features and the second image features are fused to obtain fusion features
  • the fusion features include the values of various fundus parameters for determining macular degeneration, including fundus curvature, reflectivity, blood vessel density, and fundus tissue thickness, etc.
  • The fundus curvature and reflectivity of the first image features are corrected to obtain the fusion features, and the macular degeneration grade corresponding to the image is matched according to the fusion features, thereby improving the precision and accuracy of macular degeneration identification and detection.
  • An embodiment of the present application further provides a computer-readable storage medium, the computer-readable storage medium may be non-volatile or volatile, and a computer program is stored thereon.
  • A fundus image-based lesion detection method is implemented, comprising the steps of: acquiring a fundus screening image, the fundus screening image including a scan image and a contrast image; inputting the scan image into a first network in a dual-channel network, and obtaining first image features extracted by the first network, the first image features including fundus curvature and reflectivity; inputting the contrast image into a second network in the dual-channel network, and obtaining second image features extracted by the second network, the second image features including blood vessel density and fundus tissue thickness; fusing the first image features with the second image features to obtain fusion features; and matching the macular degeneration grade corresponding to the image according to the fusion features.
  • The computer-readable storage medium provides a method that identifies and grades fundus macular lesions based on optical coherence tomography images and optical coherence tomography angiography images, combined with a deep dual-channel neural network algorithm.
  • A fundus screening image is obtained, the fundus screening image including a scan image and a contrast image. The scan image is input to the first network in a dual-channel network to obtain the first image features extracted by the first network; the first image features include fundus curvature and reflectance. The contrast image is input to the second network in the dual-channel network to obtain the second image features extracted by the second network; the second image features include blood vessel density and fundus tissue thickness. The first image features are then fused with the second image features to obtain fusion features.
  • The fusion features include the values of the fundus parameters for determining macular degeneration, including fundus curvature, reflectivity, blood vessel density, and fundus tissue thickness. The fundus curvature and reflectance of the first image features are corrected to obtain the fusion features, and the macular degeneration grade corresponding to the image is matched according to the fusion features, thereby improving the precision and accuracy of macular degeneration identification and detection.
  • Any reference to memory, storage, database or other media provided herein and used in the examples may include non-volatile or volatile memory.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present application relates to the fields of image recognition and digital processing. Disclosed are a fundus image-based lesion detection method and apparatus, as well as a computer device and a storage medium. The method comprises: acquiring a fundus screening image, the fundus screening image comprising a scan image and an angiography image; inputting the scan image into a first network in a dual-channel network of a deep learning network model and acquiring a first image feature obtained by the first network, the first image feature comprising fundus curvature and reflectivity; inputting the angiography image into a second network in the dual-channel network of the deep learning network model and acquiring a second image feature obtained by the second network, the second image feature comprising blood vessel density and fundus tissue thickness; fusing the first image feature with the second image feature to obtain a fused feature; and matching, according to the fused feature, a macular lesion grade corresponding to the image. The present application can improve the accuracy of recognition of a macular lesion grade of the fundus.
PCT/CN2022/090164 2022-01-21 2022-04-29 Procédé et appareil de détection de lésion basée sur une image de fond d'œil, dispositif, et support de stockage WO2023137904A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210073516.3 2022-01-21
CN202210073516.3A CN114494734A (zh) 2022-01-21 2022-01-21 基于眼底图像的病变检测方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023137904A1 true WO2023137904A1 (fr) 2023-07-27

Family

ID=81472176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/090164 WO2023137904A1 (fr) 2022-01-21 2022-04-29 Procédé et appareil de détection de lésion basée sur une image de fond d'œil, dispositif, et support de stockage

Country Status (2)

Country Link
CN (1) CN114494734A (fr)
WO (1) WO2023137904A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180204111A1 (en) * 2013-02-28 2018-07-19 Z Advanced Computing, Inc. System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform
CN110021009A (zh) * 2019-01-18 2019-07-16 平安科技(深圳)有限公司 一种评估眼底图像质量的方法、装置及存储介质
CN112884729A (zh) * 2021-02-04 2021-06-01 北京邮电大学 基于双模态深度学习的眼底疾病辅助诊断方法和装置
CN113011485A (zh) * 2021-03-12 2021-06-22 北京邮电大学 多模态多病种长尾分布眼科疾病分类模型训练方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018045363A1 (fr) * 2016-09-02 2018-03-08 Gargeya Rishab Procédé de criblage pour la détection automatisée de maladies dégénératives de la vision à partir d'images de fond d'œil en couleur
CN110766656B (zh) * 2019-09-19 2023-08-11 平安科技(深圳)有限公司 筛查眼底黄斑区异常的方法、装置、设备和存储介质
CN112446860B (zh) * 2020-11-23 2024-04-16 中山大学中山眼科中心 一种基于迁移学习的糖尿病黄斑水肿自动筛查方法
CN112883962B (zh) * 2021-01-29 2023-07-18 北京百度网讯科技有限公司 眼底图像识别方法、装置、设备、存储介质以及程序产品
CN112991343B (zh) * 2021-04-30 2021-08-13 北京至真互联网技术有限公司 眼底图像黄斑区域的识别检测方法和装置及设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180204111A1 (en) * 2013-02-28 2018-07-19 Z Advanced Computing, Inc. System and Method for Extremely Efficient Image and Pattern Recognition and Artificial Intelligence Platform
CN110021009A (zh) * 2019-01-18 2019-07-16 平安科技(深圳)有限公司 一种评估眼底图像质量的方法、装置及存储介质
CN112884729A (zh) * 2021-02-04 2021-06-01 北京邮电大学 基于双模态深度学习的眼底疾病辅助诊断方法和装置
CN113011485A (zh) * 2021-03-12 2021-06-22 北京邮电大学 多模态多病种长尾分布眼科疾病分类模型训练方法和装置

Also Published As

Publication number Publication date
CN114494734A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
US20220076420A1 (en) Retinopathy recognition system
EP3373798B1 (fr) Procédé et système de classification de papille de nerf optique
WO2018201632A1 (fr) Réseau neuronal artificiel et système de reconnaissance d'une lésion dans une image de fond d'œil
CN110309849A (zh) 血管图像处理方法、装置、设备及存储介质
JP2021536057A (ja) 医療画像に対する病変の検出及び位置決め方法、装置、デバイス、及び記憶媒体
CN110263755B (zh) 眼底图像识别模型训练方法、眼底图像识别方法和设备
JP2019192215A (ja) 深層学習を用いた網膜層の3d定量解析
WO2021190656A1 (fr) Procédé et appareil de localisation du centre de la macula dans une image de fond d'oeil, serveur et support de stockage
WO2022166399A1 (fr) Procédé et appareil de diagnostic auxiliaire de maladie de fond d'œil basés sur un apprentissage profond bimodal
CN112017185A (zh) 病灶分割方法、装置及存储介质
CN111340087A (zh) 图像识别方法、装置、计算机可读存储介质和计算机设备
CN117058676B (zh) 一种基于眼底检查影像的血管分割方法、装置和系统
CN110415245A (zh) 眼部数据确定方法、模型训练方法及设备
CN117788407A (zh) 基于人工神经网络的青光眼图像特征提取的训练方法
CN116030042B (zh) 一种针对医生目诊的诊断装置、方法、设备及存储介质
WO2023137904A1 (fr) Procédé et appareil de détection de lésion basée sur une image de fond d'œil, dispositif, et support de stockage
CN116452571A (zh) 一种基于深度神经网络的图像识别方法
CN116092667A (zh) 基于多模态影像的疾病检测方法、系统、装置及存储介质
WO2021139446A1 (fr) Appareil et procédé de prédiction d'effet curatif anti-facteur de croissance de l'endothélium vasculaire (vegf)
CN111374632B (zh) 视网膜病变检测方法、装置及计算机可读存储介质
CN110992364A (zh) 视网膜图像识别方法、装置、计算机设备和存储介质
Pham et al. Generative Adversarial Networks for Retinal Image Enhancement with Pathological Information
Gambhir et al. Severity classification of diabetic retinopathy using ShuffleNet
Hirota et al. Automatic Estimation of Objective Cyclodeviation in Fundus Image Using Machine Learning
Prasad et al. Reduction of False Microaneurysms in Retinal Fundus Images using Fuzzy C-Means Clustering in terms NLM Anisotropic Filter

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921335

Country of ref document: EP

Kind code of ref document: A1