EP4334897A1 - Retina image annotation, and related training methods and image processing models - Google Patents

Retina image annotation, and related training methods and image processing models

Info

Publication number
EP4334897A1
Authority
EP
European Patent Office
Prior art keywords
image
cross
sectional
retina
images
Prior art date
Legal status
Pending
Application number
EP22727873.6A
Other languages
German (de)
English (en)
Inventor
Muhammet ASLAN
Canan Asli UTINE
Cihan TOPAL
Özlem Özden ZENGIN
Current Assignee
Vestel Elektronik Sanayi ve Ticaret AS
Original Assignee
Vestel Elektronik Sanayi ve Ticaret AS
Priority date
Filing date
Publication date
Application filed by Vestel Elektronik Sanayi ve Ticaret AS
Publication of EP4334897A1 (fr)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Definitions

  • The present disclosure relates to retina image annotation, and related training methods and image processing models.
  • OCT: optical coherence tomography.
  • Fundus images are obtained by photographing the rear of an eye using a specialised fundus camera. Ophthalmologists and other trained medical professionals can use fundus images to identify many eye conditions and diseases, in particular posterior segment disorders such as age-related macular degeneration, diabetic retinopathy and glaucoma. Early diagnosis of such diseases is important to minimise vision loss, and fundus images may also be used to monitor disease progression. Because fundus imaging can be carried out using low-cost and/or portable equipment, it can be used by ophthalmologists or other medical professionals to diagnose such eye disorders in various outpatient settings.
  • Optical coherence tomography provides cross-sectional layered views of the fundus (rear of the eye).
  • OCT is often used in monitoring, and in determining treatment for, the posterior segment eye disorders mentioned above.
  • A limitation of OCT is that the equipment used to capture these images is expensive and immobile, and is therefore typically only located in hospitals. OCT is consequently used mainly for identifying treatment plans and for periodic monitoring post-diagnosis.
  • The present architecture provides a method of annotating fundus images based on OCT data.
  • The annotated fundus images may then be used to train a machine learning component to classify retinal images based on fundus data only.
  • This annotation of training data teaches the system to discover previously unknown or unnoticed visual features that were too subtle to be recognised or observed by human perception. This is an improvement over existing methods that exploit only fundus or only OCT data, as the model learns to identify visual features in fundus images which may not be visible to a manual annotator, based on previous training alongside OCT data.
  • Images annotated using the present techniques can be used to train an image processing model to more accurately detect disease patterns in fundus images (captured with low-cost fundus imaging equipment, without requiring expensive OCT equipment), and potentially even disease patterns that would not be visible to a human or would require a high level of human expertise to spot.
  • Benefits of the present techniques therefore include increased disease detection capabilities using low-cost imaging equipment, and de-skilling of the disease detection process. This enables fundus data to be used in treatment and follow-up decisions where there are limitations on access to OCT data, for example due to cost. Fundus images and OCT scans are referred to by way of example, but the techniques can be applied to other forms of imaging/scanning technology.
  • A computer-implemented method of annotating conventional retina images comprises: receiving a conventional image of a retina, the conventional retina image captured using an image capture device; receiving an associated cross-sectional image of said retina, the cross-sectional image captured using a cross-sectional imaging system; determining a disease location in an image plane of the cross-sectional image; and generating annotation data for annotating the disease location in an image plane of the conventional image, by projecting the disease location from the image plane of the cross-sectional image into the image plane of the conventional image, based on a known mapping between the cross-sectional image and the conventional image.
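The projection step above can be sketched in code. This is a minimal illustration assuming the simplest mapping described below, in which each cross-sectional image maps to one horizontal scan line of the conventional image; the function name and the linear pixel-scaling convention are illustrative assumptions, not taken from the patent.

```python
# Sketch of projecting a disease location found in a cross-sectional (OCT)
# image into the image plane of a conventional fundus image. Assumes each
# OCT scan corresponds to one horizontal scan line of the fundus image, so
# a lateral position in the scan maps to a column on that line.

def project_to_fundus(oct_x, oct_width, scan_row, fundus_width):
    """Map a lateral OCT coordinate onto the fundus scan line.

    oct_x        -- lateral pixel position of the disease location in the scan
    oct_width    -- width of the OCT image in pixels
    scan_row     -- fundus row (y) along which this OCT scan was acquired
    fundus_width -- width of the fundus image in pixels
    """
    fundus_x = round(oct_x * (fundus_width - 1) / (oct_width - 1))
    return (fundus_x, scan_row)

# A lesion found halfway across a 512-px-wide OCT scan taken along row 240
# of a 1000-px-wide fundus image lands halfway along that row.
loc = project_to_fundus(256, 512, 240, 1000)
```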
  • The image plane of the cross-sectional image may lie substantially parallel to the image plane of the conventional image, such that the cross-sectional image maps to a scan line in the image plane of the conventional image.
  • Multiple cross-sectional images associated with the conventional retina image may be received, with multiple disease locations determined in the respective image planes of the cross-sectional images and projected into the image plane of the conventional image to generate the annotation data.
  • The annotation data may be generated via interpolation of the projected disease locations within the image plane of the conventional image.
  • The multiple cross-sectional images may correspond to multiple scan lines in the image plane of the conventional image, and the annotation data may be generated by interpolating the projected disease locations within one or more regions separating the scan lines.
  • The or each disease location may be determined in the image plane of the cross-sectional image via automated image recognition applied to the cross-sectional image, via manual annotation applied to the cross-sectional image, or via a combination of automated image recognition and manual annotation.
  • The annotation data may be generated in the form of a segmentation mask.
  • The segmentation mask may assign a severity level to each of at least some pixels of the conventional image, based on at least one of: a severity level assigned to the disease location(s) in the cross-sectional image, and a depth of the disease location(s) within or behind the retina.
  • The conventional image may be a fundus image, the image capture device being a fundus camera.
  • The cross-sectional image may be an optical coherence tomography (OCT) image, the cross-sectional imaging system being an OCT imaging system.
  • The multiple disease locations may be determined in the image plane of the cross-sectional image, and a geometric voting algorithm may be applied to the multiple disease locations to generate the annotation data based on at least one of: a severity level assigned to each disease location in the cross-sectional image, and a depth of each disease location within or behind the retina.
  • The method may be applied to multiple conventional retina images and respective associated cross-sectional images, to generate respective annotation data for the multiple conventional retina images. The multiple conventional retina images and their annotation data are then used to train an image processing model to identify disease regions in conventional retina images, the multiple conventional retina images serving as training inputs and their respective annotation data providing the training ground truth; the training inputs do not include the cross-sectional images.
  • The training inputs may additionally comprise associated patient data.
  • An image processing system comprises: an input configured to receive a conventional retina image; and one or more processors configured to apply a trained image processing model to the conventional retina image, in order to identify at least one disease location therein, the image processing model trained in accordance with any training method disclosed herein.
  • Figure 1A shows schematically a fundus image
  • Figure 1B shows schematically an OCT scan of a cross-section of the fundus
  • Figure 2 shows schematically a series of OCT scan cross-sections taken at various heights of a 2D fundus image
  • Figure 3 shows schematically an input and output of an example state-of-the-art fundus image segmentation method
  • Figure 4 shows schematically how pixels of a fundus image may be annotated based on OCT data
  • Figure 5 shows schematically a geometric voting algorithm applied to an annotated OCT scan
  • Figure 6 shows how dense registration may be applied to a partially annotated fundus image
  • Figure 7 shows an architecture for generating annotated fundus images and training a fundus image segmentation network
  • Figure 8 shows an example input dataset and output of an artificial intelligence model trained on annotated fundus images.
  • Described herein is an artificial intelligence system to extract more useful information from fundus images.
  • This system is trained on a comprehensive data set consisting of pairs of images, each pair comprising a fundus image and a corresponding optical coherence tomography (OCT) scan.
  • The dataset is used to identify visual cues in fundus images by correlating fundus images with OCT data.
  • An annotated set of fundus images may be obtained by applying the correlation with OCT data, with the annotations identifying pixels of the fundus images corresponding to areas of interest in the corresponding OCT scans.
  • Each fundus image of a fundus-OCT pair may be annotated based on this correlation.
  • The annotated fundus images are then used to train an AI algorithm to detect visual features of interest from unannotated fundus images alone.
  • Figure 1A shows an example of a fundus image.
  • A fundus image is a conventional camera image showing the rear (fundus) of an eye.
  • The fundus image 100 shown in Figure 1A shows a fundus with no sign of disease.
  • Fundus images are typically captured with specialised camera equipment comprising a microscope attached to a flash-enabled camera. Low-cost and portable fundus cameras may be used in a variety of settings.
  • The fundus image 100 shown in Figure 1A presents a 2D view of the back of the eye.
  • The retina comprises a number of layers, some of which are semi-transparent.
  • The fundus image 100 is a 2D image and therefore does not provide any depth information for any irregularities or features of interest in the image.
  • The image 100 shows the macula 102 in the centre of the image, the optic disc to the right, and blood vessels 106 of the eye.
  • Retinal abnormalities visible in a fundus image 100 may be used by a trained ophthalmologist or other medical professional to diagnose a number of eye-related diseases or conditions affecting the back of the eye, including age-related macular degeneration, diabetic retinopathy, and glaucoma.
  • However, development of treatment plans and ongoing monitoring may require techniques that identify the depth of an abnormality.
  • Figure 1B shows an example of an OCT scan 110 of a retina.
  • Each OCT scan represents a single horizontal cross-section of the back of the eye at a given height.
  • A single OCT scan 110 therefore corresponds to a horizontal line of a fundus image 100.
  • Unlike the fundus image, the OCT scan captures depth information of the fundus.
  • Different layers of the retina are visible and may appear as different colours or shades in an OCT scan. Identifying the depth of retinal abnormalities is important in the diagnosis and treatment of eye diseases such as those mentioned above.
  • Equipment for capturing OCT scans is expensive and generally not portable.
  • OCT is therefore employed in ophthalmology only in hospital settings, and is more often used to aid treatment planning for the diagnosed posterior segment diseases mentioned above than as a diagnostic tool.
  • Figure 2 shows the association between a single fundus image 100 and a series of OCT scans 110a-110e.
  • A series of OCT scans may be taken across the centre of the fundus image 100, as shown by the area 200 of the fundus image.
  • The OCT images may focus on a particular region of the fundus, in this example the centre.
  • Each OCT scan corresponds to one of the horizontal lines shown in the area 200.
  • An example OCT scan 110 is shown which corresponds to the bright green line in the centre of the fundus image.
  • The retina comprises a series of semi-transparent layers of neurons, below which is a pigmented layer.
  • The choroid, which is the vascular layer of the eye, lies below this pigmented layer. The layering of the retina is shown in image 202.
  • Each layer of the retina may be identified from an OCT scan by a trained professional such as an ophthalmologist.
  • Each horizontal line shown in image 200 corresponds with a different OCT scan.
  • A series of OCT scans 110a-110e is shown, each taken at a different vertical height.
  • A green line is shown in each fundus image 100a-100e on the left, showing the height of the given scan within the fundus.
  • Each scan shows a different profile of the retinal tissue.
  • Any abnormality identified in an OCT scan may be located by the given vertical height, together with the depth and horizontal position given by the OCT scan itself. Abnormalities may be identified across multiple scans based on their appearance and position in the scans.
  • Abnormalities may be identified manually by a domain expert such as a trained ophthalmologist, or automatically, for example by using a trained machine learning model.
  • The system described herein trains a fundus classifier that applies a classification to each pixel of the input image.
  • The fundus classifier thereby segments the image into sets of pixels corresponding to multiple classes.
  • Figure 3 shows an example of the input data and results of a state-of-the-art classifier/segmentation method for fundus images.
  • A domain expert, as mentioned above, may manually identify and mark the areas of interest that are typically visible in a fundus image.
  • The training fundus image 300 is of a healthy eye.
  • The classifier may be used to detect the blood vessels visible in the image.
  • Image 310 shows an example of a manual annotation provided by a domain expert, which marks out the blood vessels of the fundus visible in the training image 300.
  • A machine learning model, for example a convolutional neural network (CNN), may be trained on pairs of fundus training images 300 and manual annotations 310.
  • CNNs are commonly used in image processing tasks such as image segmentation.
  • CNNs comprise a number of layers, including convolutional layers consisting of a set of kernels which are convolved across an input volume, where the input volume may be a 2D or 3D array representing an image.
  • A colour image may be represented by three colour channels, each comprising a 2D M × N array of pixel values, such that the input volume is an M × N × 3 tensor.
  • CNNs also include pooling layers which ‘downsample’ an input to a lower dimensional array, and layers applying nonlinearities such as ReLU. Each layer outputs a volume of feature arrays. The original resolution of the input image may be restored to the output by applying upsampling layers if, for example, the desired output of the network is an annotated version of the input image.
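The layer operations named above (convolution, ReLU, pooling) can be illustrated in miniature. The pure-Python sketch below shows the arithmetic each layer performs on a single-channel image; real CNNs use optimised library implementations, and, as in deep-learning frameworks, the "convolution" is implemented as a cross-correlation (the kernel is not flipped).

```python
# Illustrative pure-Python versions of three CNN building blocks:
# a 2D convolution, the ReLU nonlinearity, and 2x2 max pooling.

def conv2d(image, kernel):
    """'Valid' 2D convolution (no padding) of one channel with one kernel.
    The kernel is applied without flipping, as in deep-learning frameworks."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(image[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(feature_map):
    """Zero out negative activations."""
    return [[max(0, v) for v in row] for row in feature_map]

def max_pool2x2(feature_map):
    """Downsample by taking the maximum of each 2x2 block."""
    return [[max(feature_map[y][x], feature_map[y][x + 1],
                 feature_map[y + 1][x], feature_map[y + 1][x + 1])
             for x in range(0, len(feature_map[0]) - 1, 2)]
            for y in range(0, len(feature_map) - 1, 2)]
```

Applied in sequence (convolve, ReLU, pool), these operations shrink a single-channel input exactly as the layer descriptions above state.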
  • The convolutional neural network is first initialised with a set of weights.
  • The input training image 300 is processed by the network, which predicts a label for each pixel of the input image.
  • A loss function is used to assess the labels predicted by the network against the manual annotations 310 provided for the given input image.
  • The weights of the network are updated such that the network predictions are close to the manually annotated examples for a training set of multiple annotated fundus images.
  • Loss functions and specific architectures may be defined to achieve the training goal of predicting accurate pixel labels.
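The loop described above — predict per-pixel labels, compare with annotations via a loss, nudge the weights — can be sketched with a toy single-weight logistic model standing in for the CNN. The model, names, and hyperparameters are illustrative only.

```python
# Minimal sketch of the supervised training loop: each pixel's label is
# predicted from its intensity, a cross-entropy loss compares predictions
# with the annotation mask, and gradient descent updates the weights.
import math

def train(pixels, labels, steps=2000, lr=0.5):
    """pixels: flat list of intensities in [0, 1]; labels: 0/1 annotations."""
    w, b = 0.0, 0.0                                     # initial weights
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(pixels, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))    # predicted probability
            gw += (p - y) * x                           # cross-entropy gradient
            gb += (p - y)
        w -= lr * gw / len(pixels)                      # gradient-descent step
        b -= lr * gb / len(pixels)
    return w, b

def predict(w, b, x):
    """Threshold the predicted probability into a 0/1 pixel label."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

# Bright pixels annotated as 'vessel' (1), dark pixels as background (0).
pixels = [0.9, 0.8, 0.85, 0.1, 0.2, 0.15]
labels = [1, 1, 1, 0, 0, 0]
w, b = train(pixels, labels)
```

After training, `predict` reproduces the annotations on the training pixels, which is the behaviour the loss drives the real network towards.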
  • Once trained, the network may be applied in an inference stage.
  • A fundus image 100 without any manual annotation is input to the network, which applies its trained weights and outputs a label for each pixel of the image, for example identifying certain pixels of the image as blood vessels.
  • An example output 320 is shown in Figure 3.
  • The predicted areas of the image corresponding to blood vessels are shown in purple.
  • The example of Figure 3 shows the inference output for the same image 300 used in training.
  • In practice, however, the network is generally applied to fundus images which have not been seen in training.
  • A system will now be described which enables annotation of fundus images based on OCT data.
  • This allows training of a fundus classifier of the type described above, without the limitation that the annotated training data is based only on visual features observable by a human expert.
  • The method described below enables a computer system to learn subtle visual features of a fundus image which tend to correspond with particular features identifiable in an OCT scan.
  • The resulting fundus segmentation network aims to extract information from 2D fundus images which can be used in the diagnosis, follow-up monitoring, and treatment of eye conditions.
  • Figure 4 shows how annotations may be applied to a fundus image based on an OCT image for which areas of interest have been annotated.
  • An OCT image may itself be automatically annotated by applying a trained artificial intelligence model, described later with reference to Figure 7, or may be annotated manually by a domain expert trained in analysing OCT images.
  • the OCT image 110 in Figure 4 shows a number of annotated areas 402, highlighted in green. These may, for example, represent abnormalities in the fundus indicative of disease, or other features of interest.
  • Annotations may comprise labels identifying a type of abnormality, and/or other relevant features, such as a severity. Severities may be determined by the automatic or manual annotator based on training data and domain expertise, respectively.
  • Each OCT image corresponds with a cross-section of a 2D fundus image, represented by a single line 404 on the training fundus image.
  • To annotate the fundus image, the annotations of the OCT scans must be projected onto the plane of the fundus image 300. This is shown on the right-hand side of Figure 4.
  • A geometric voting algorithm, described in more detail below with reference to Figure 5, may be used to determine which pixels of the original fundus image should be annotated based on the annotated segments of the OCT image.
  • The corresponding annotations are shown on the line of the fundus image as red segments 408.
  • The remaining pixels of the fundus image, i.e. pixels lying between OCT scan lines, may be annotated by applying an interpolation technique to the annotated sets of pixels.
  • Figure 5 shows a geometric voting algorithm applied to project annotations of an OCT scan image to the top layer of the retina, and to map this projection onto the corresponding line of a fundus image.
  • The retina comprises a series of transparent or semi-transparent layers of tissue, as shown by the area in Figure 4.
  • The annotation of the fundus image 300 aims to associate even small or subtle visible features of the fundus with features of the OCT scan 110.
  • OCT features are mapped onto features of the fundus image that are actually detectable by a computer network, although they need not be visible to the human eye.
  • The depth of the OCT features may therefore be taken into account, since features appearing in deeper layers of the retina are less likely to appear in a fundus image, and annotating the corresponding area of the fundus image is then not helpful for training a segmentation network for fundus images.
  • A severity of the OCT annotations may also be used by the voting scheme to determine the shape of the abnormality or visual feature identified in the OCT as it appears in the image plane of the fundus image. For example, it is important to flag abnormalities which are considered severe in an OCT image, so that any visual indicator that may exist in the fundus image can be identified by the network, thereby making the network sensitive to severe indicators of disease even if they are not highly visible.
  • Geometric voting algorithms have previously been applied in the field of astronomy in order to track stars. The algorithm is not described in detail herein.
  • The geometry of the retinal layers may be used, along with knowledge of the visibility of features at various depths of the retina, to determine the shape of the corresponding visual feature as it appears in the training fundus image 300. This is shown for the green and blue annotations of the OCT scan in Figure 5.
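The depth- and severity-weighted voting idea above can be sketched as follows: each annotated OCT feature casts a vote for its projected fundus column, weighted so that severe and shallow (near-surface) features count more, since deep features are unlikely to be visible in a fundus image. The weighting formula and threshold are illustrative assumptions, not the patent's exact algorithm.

```python
# Hedged sketch of depth- and severity-weighted voting for one OCT scan.

def vote_weight(severity, depth, max_depth):
    """severity in [0, 1]; depth in pixels from the retinal surface.
    Shallow, severe features get the strongest votes."""
    visibility = 1.0 - depth / max_depth   # assumed visibility falls with depth
    return severity * visibility

def annotate_scan_line(features, line_width, max_depth, threshold=0.3):
    """features: list of (column, severity, depth) from one annotated OCT scan.
    Returns a 0/1 annotation for each pixel on the corresponding fundus line."""
    votes = [0.0] * line_width
    for column, severity, depth in features:
        votes[column] += vote_weight(severity, depth, max_depth)
    return [1 if v >= threshold else 0 for v in votes]

# A severe shallow lesion wins its vote; a mild deep one does not.
line = annotate_scan_line(
    [(10, 0.9, 5), (40, 0.3, 90)], line_width=64, max_depth=100)
```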
  • Remaining pixels of the fundus image 300 may be annotated using interpolation. Between eight and one hundred OCT scans may be collected for a single fundus image, providing a subset of annotated pixels for the fundus image. Dense registration may be used to interpolate the annotation values from the pixels of each line 404 to the remaining pixels of the image, to obtain a continuous area of annotated pixels 602. This is shown in Figure 6.
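A simple stand-in for this densification step can be written as column-wise linear interpolation between the nearest annotated scan-line rows above and below each unannotated row. Real dense registration also compensates for image distortion; this sketch, with illustrative names, shows the interpolation idea only.

```python
# Fill unannotated fundus rows by interpolating between annotated scan lines.

def densify(annotated_rows, height):
    """annotated_rows: {row_index: list of per-column annotation values}.
    Returns a full height-row annotation map; rows outside the scanned band
    copy the nearest annotated edge row."""
    rows = sorted(annotated_rows)
    full = []
    for y in range(height):
        below = max([r for r in rows if r <= y], default=rows[0])
        above = min([r for r in rows if r >= y], default=rows[-1])
        if below == above:
            full.append(list(annotated_rows[below]))    # on or outside a scan line
        else:
            t = (y - below) / (above - below)           # blend factor between lines
            full.append([(1 - t) * a + t * b
                         for a, b in zip(annotated_rows[below],
                                         annotated_rows[above])])
    return full

# Two annotated scan lines (rows 1 and 3) densified over a 5-row image.
full = densify({1: [0.0, 1.0], 3: [1.0, 0.0]}, height=5)
```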
  • Figure 7 shows an example architecture comprising a pipeline for generating annotated fundus images and a convolutional neural network which may be trained based on the generated fundus annotations to apply pixel-wise classification to the fundus images based on learned relationships with OCT data.
  • Automatic annotation of OCT scans may be carried out by a trained convolutional neural network 700 to generate a set of ‘ground truth’ automatic OCT annotations 704.
  • A human annotator with domain expertise may also analyse the OCT scans for each fundus training image 300 to generate a set of manual ‘ground truth’ annotated OCT scans 706.
  • Each of the training fundus images 300, along with the set of manually annotated OCT scans 706 and automatically annotated OCT scans 704, is input to a dense fundus annotator, which annotates the fundus images based on the OCT annotations, using a geometric voting algorithm to apply pixel annotations along the lines 404 of the input fundus image 300, and dense registration to determine pixel annotations for the remaining pixels of the image.
  • The annotator 720 then outputs the dense ‘ground truth’ fundus annotations, which may be used along with the unannotated training images 300 to train a segmentation network 730, which may take the form of a convolutional neural network, and which is trained as described earlier with reference to Figure 3.
  • The methods herein allow conventional fundus images to be annotated to train a model which can make OCT-based predictions for inputs comprising a fundus image alone.
  • Models for diagnosis and treatment may be further improved by introducing further input data, such as patient information including age, gender and disease history, which can be strong indicators of risk and which may influence treatment options in case of disease.
  • Figure 8 shows an example of the possible inputs and outputs of an artificial intelligence model which may be at least partially trained according to the methods described above.
  • Input datasets can include OCT images 110 as well as corresponding patient information 800 and fundus images.
  • OCT images, as described above, are used in annotating a set of fundus images so as to incorporate or recognise features of OCT images that are not obvious in fundus data.
  • Additional patient data may be used in the annotation stage, in order to learn correlations between patient attributes and corresponding fundus images, and may also be provided to the image segmentation network to help in predicting, for example, the risk or severity of identified abnormalities or other visual features of the fundus.
  • The output of the network described in Figure 7 is a set of annotated fundus images, with pixels classified according to one or more labels identifying or classifying the severity of possible diseases.
  • This data may be further processed by the model 900, comprising at least the network components described in Figure 7, to determine, or to aid a human expert in determining, outputs such as a diagnosis result 802, a progression analysis 804, or data for a treatment plan 806.
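The combination of image output and patient attributes described above can be illustrated with a deliberately simple risk score. The weighting below is a pure assumption for illustration; the patent does not specify a formula, and a real system would learn such relationships from data.

```python
# Purely illustrative sketch of combining the segmentation output with
# patient attributes (age, disease history) into a single risk score.
# All multipliers are hypothetical.

def risk_score(lesion_fraction, age, has_history):
    """lesion_fraction: fraction of fundus pixels flagged by the network."""
    score = lesion_fraction                      # base risk from the image
    score *= 1.5 if age >= 60 else 1.0           # age as a risk indicator
    score *= 2.0 if has_history else 1.0         # disease history as a risk indicator
    return min(score, 1.0)                       # clamp to [0, 1]
```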
  • The components shown in Figures 7 and 8 are functional components, implemented in a processor or processing circuitry.
  • The training method described above is a computer-implemented method that is implemented using the same.
  • The processor or processing system or circuitry referred to herein may in practice be provided by a single chip or integrated circuit or plural chips or integrated circuits, optionally provided as a chipset, an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), digital signal processor (DSP), graphics processing unit (GPU), etc.
  • The chip or chips may comprise circuitry (as well as possibly firmware) for embodying at least one or more of a data processor or processors, a digital signal processor or processors, baseband circuitry and radio frequency circuitry, which are configurable so as to operate in accordance with the exemplary embodiments.
  • The exemplary embodiments may be implemented at least in part by computer software stored in (non-transitory) memory and executable by the processor, or by hardware, or by a combination of tangibly stored software and hardware (and tangibly stored firmware).
  • The invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice.
  • The program may be in the form of non-transitory source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other non-transitory form suitable for use in the implementation of processes according to the invention.
  • The carrier may be any entity or device capable of carrying the program.
  • The carrier may comprise a storage medium, such as a solid-state drive (SSD) or other semiconductor-based RAM; a ROM, for example a CD-ROM or a semiconductor ROM; a magnetic recording medium, for example a floppy disk or hard disk; optical memory devices in general; etc.


Abstract

A computer-implemented method of annotating conventional retina images, the method comprising the steps of: receiving a conventional image of a retina, the conventional retina image being captured using an image capture device; receiving an associated cross-sectional image of said retina, the cross-sectional image being captured using a cross-sectional imaging system; determining a disease location in an image plane of the cross-sectional image; and generating annotation data for annotating the disease location in an image plane of the conventional image, by projecting the disease location from the image plane of the cross-sectional image into the image plane of the conventional image, based on a known mapping between the cross-sectional image and the conventional image.
EP22727873.6A 2021-05-06 2022-05-05 Retina image annotation, and related training methods and image processing models Pending EP4334897A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR202107756 2021-05-06
PCT/EP2022/062189 WO2022234035A1 (fr) 2021-05-06 2022-05-05 Retina image annotation, and related training methods and image processing models

Publications (1)

Publication Number Publication Date
EP4334897A1 (fr) 2024-03-13

Family

ID=81940453

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22727873.6A Pending EP4334897A1 (fr) 2021-05-06 2022-05-05 Retina image annotation, and related training methods and image processing models

Country Status (3)

Country Link
EP (1) EP4334897A1 (fr)
CN (1) CN117337447A (fr)
WO (1) WO2022234035A1 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989877B2 (en) * 2018-09-18 2024-05-21 MacuJect Pty Ltd Method and system for analysing images of a retina

Also Published As

Publication number Publication date
WO2022234035A1 (fr) 2022-11-10
CN117337447A (zh) 2024-01-02

Similar Documents

Publication Publication Date Title
Asiri et al. Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey
Veena et al. A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images
Niemeijer et al. Information fusion for diabetic retinopathy CAD in digital color fundus photographs
US7529394B2 (en) CAD (computer-aided decision) support for medical imaging using machine learning to adapt CAD process with knowledge collected during routine use of CAD system
KR20190087272A (ko) 안저영상을 이용한 녹내장 진단 방법 및 이를 위한 장치
Shanthini et al. Threshold segmentation based multi-layer analysis for detecting diabetic retinopathy using convolution neural network
Valizadeh et al. Presentation of a segmentation method for a diabetic retinopathy patient’s fundus region detection using a convolutional neural network
US20210383262A1 (en) System and method for evaluating a performance of explainability methods used with artificial neural networks
KR20200087427A (ko) 딥러닝을 이용한 갑상선 암의 림프절 전이 진단 방법
Karthiyayini et al. Retinal image analysis for ocular disease prediction using rule mining algorithms
JP2022546344A (ja) 脳卒中特徴取得のための画像処理
CN112334990A (zh) 自动宫颈癌诊断系统
WO2003020112A9 (fr) Systeme et procede de depistage de la retinopathie diabetique chez des patients
CN111401102B (zh) 深度学习模型训练方法及装置、电子设备及存储介质
Bouacheria et al. Automatic glaucoma screening using optic nerve head measurements and random forest classifier on fundus images
Gupta et al. A novel method for automatic retinal detachment detection and estimation using ocular ultrasound image
KR20210050790A (ko) 딥러닝 기반의 아밀로이드 양성 반응을 나타내는 퇴행성 뇌질환 이미지 분류 장치 및 방법
US12067726B2 (en) Retina image annotation, and related training methods and image processing models
EP4334897A1 (fr) Annotation d'image de rétine, et procédés d'entraînement et modèles de traitement d'image associés
Rahmany et al. A priori knowledge integration for the detection of cerebral aneurysm
Azeroual et al. Convolutional Neural Network for Segmentation and Classification of Glaucoma.
TR2021007756A2 Retina image annotation and related training methods and image processing models
Heyi et al. Development of a retinal image segmentation algorithm for the identifying prevalence markers of diabetic retinopathy using a neural network
Bhardwaj et al. A computational framework for diabetic retinopathy severity grading categorization using ophthalmic image processing
US20230394666A1 (en) Information processing apparatus, information processing method and information processing program

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231206

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20240327

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)