WO2020226455A1 - Device for predicting optic neuropathy using a fundus image and method for providing an optic neuropathy prediction result - Google Patents

Device for predicting optic neuropathy using a fundus image and method for providing an optic neuropathy prediction result

Info

Publication number
WO2020226455A1
WO2020226455A1 (PCT/KR2020/006097)
Authority
WO
WIPO (PCT)
Prior art keywords
optic neuropathy
image
prediction
subject
fundus
Prior art date
Application number
PCT/KR2020/006097
Other languages
English (en)
Korean (ko)
Inventor
양희경
황정민
김광기
김영재
Original Assignee
서울대학교산학협력단
가천대학교 산학협력단
(의료)길의료재단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 서울대학교산학협력단, 가천대학교 산학협력단, (의료)길의료재단 filed Critical 서울대학교산학협력단
Publication of WO2020226455A1 publication Critical patent/WO2020226455A1/fr

Classifications

    • A61B 5/7275: Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • A61B 3/00: Apparatus for testing the eyes; instruments for examining the eyes
    • A61B 3/12: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14: Arrangements specially adapted for eye photography
    • A61B 3/145: Arrangements specially adapted for eye photography by video means
    • A61B 5/00: Measuring for diagnostic purposes; identification of persons
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G06N 3/08: Learning methods (computing arrangements based on neural network models)
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the present invention relates to a technology for predicting whether a subject has optic neuropathy and, more particularly, to a method and apparatus for providing a prediction result obtained by applying a fundus image to a pre-modeled optic neuropathy prediction model in order to predict whether the subject from whom the fundus image was acquired has optic neuropathy.
  • optic neuropathy, such as optic nerve atrophy, refers to damage to the optic nerve from any cause. Damage to and death of nerve cells in the optic nerve produce the characteristic findings of optic neuropathy, and the main symptom is loss of vision in the affected eye, often accompanied by impaired color perception.
  • the most representative test for diagnosing optic neuropathy is clinical examination: typically, a diagnosis is made on the basis of symptoms related to optic neuropathy (e.g., pallor of the optic disc).
  • however, clinical examination, from delineating the optic disc to grading its pallor, is a difficult task even for a diagnostician with years of experience, and because it rests on subjective interpretation, there is a possibility of misdiagnosis.
  • electrophysiological tests, family history review, and molecular genetic tests are used to minimize the possibility of misdiagnosis on clinical examination, but these optic neuropathy testing methods carry significant limitations in testing time and cost.
  • a method of providing an optic neuropathy prediction result performed by a computing device includes: obtaining a fundus image of a subject; preprocessing the fundus image to increase the clarity of the symptoms of optic neuropathy; and determining whether the subject has optic neuropathy by applying the preprocessed image to a pre-modeled optic neuropathy prediction model.
  • the preprocessing may include: applying a color channel to the fundus image to obtain a green channel image filtered with the green channel; removing noise from the green channel image; and correcting the brightness of the green channel image.
  • the preprocessing may further include: applying the color channel to the fundus image to obtain a blue channel image filtered with the blue channel; and generating a combined image that combines the green channel image and the blue channel image.
  • the method for providing an optic neuropathy prediction result may include: setting a region of interest by thresholding the green channel image based on its brightness; and extracting an ROI image including the region of interest.
  • the determining step may include applying the preprocessed image to a deep learning-based optic neuropathy prediction model configured to extract features related to optic neuropathy from the applied preprocessed image and to output a prediction result indicating whether the subject of the applied image has optic neuropathy.
  • the optic neuropathy prediction model may include an inception block that includes a plurality of convolution filters and a pooling filter, receives the output of the previous layer, performs convolution or pooling through each of the plurality of filters, and concatenates the processing results.
  • the inception block may include a 1 ⁇ 1 convolution filter and a convolution filter that outputs a feature map of a higher dimension than the 1 ⁇ 1 convolution filter.
  • the parameters of the optic neuropathy prediction model may be predetermined using training samples, where each training sample includes a training preprocessed image and labeling data indicating whether the subject of that image has optic neuropathy.
  • the determining may include: calculating a feature parameter from the preprocessed image; calculating the probability that the subject of the preprocessed image has optic neuropathy by applying the feature parameter to the pre-modeled optic neuropathy prediction model; and determining whether the subject has optic neuropathy based on the probability.
  • the optic neuropathy prediction model is expressed by Equation 1 below: logit(P) = β0 + β1·BC + β2·TN ... [Equation 1]
  • BC represents the brightness correction ratio of the preprocessed image
  • TN represents the temporal-to-nasal ratio
  • logit(P) represents a value corresponding to the probability of having optic neuropathy.
  • β0, β1, β2 of the optic neuropathy prediction model are the regression coefficients of the model, determined through regression analysis based on training samples that include a training preprocessed image and labeling data indicating whether the subject of the training preprocessed image has optic neuropathy.
  • the step of calculating the probability includes calculating the probability by Equation 2 below: P = 1 / (1 + e^(-logit(P))) ... [Equation 2]
  • the closer the probability P is to 1, the more likely it is that the subject has optic neuropathy.
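The two equations above can be sketched in a few lines of code. This is a minimal illustration, not the patent's implementation: the coefficient values below are placeholders, since the actual β0, β1, β2 are fitted from labeled training samples by regression analysis.

```python
import math

def predict_optic_neuropathy(bc, tn, b0, b1, b2, threshold=0.5):
    """Sketch of the regression-based prediction in Equations 1 and 2.

    bc: brightness correction ratio (BC) of the preprocessed image
    tn: temporal-to-nasal ratio (TN)
    b0, b1, b2: regression coefficients (placeholders here)
    """
    # Equation 1: linear predictor on the logit scale
    logit_p = b0 + b1 * bc + b2 * tn
    # Equation 2: logistic function maps the logit to a probability in (0, 1)
    p = 1.0 / (1.0 + math.exp(-logit_p))
    # The closer P is to 1, the more likely the subject has optic neuropathy
    return p, p >= threshold

# Placeholder coefficients, for illustration only (not from the patent)
p, has_disease = predict_optic_neuropathy(bc=1.2, tn=0.9, b0=-3.0, b1=1.5, b2=2.0)
```

Because the logistic function is monotonic, increasing either feature (with a positive coefficient) increases the predicted probability.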
  • a computer-readable recording medium may store program instructions that are readable and executable by a computing device; when executed, the instructions cause the processor to perform the method of providing an optic neuropathy prediction result according to the above-described embodiments.
  • an apparatus for predicting optic neuropathy using a fundus image includes: a storage unit storing a pre-learned optic neuropathy prediction model; A data acquisition unit that acquires a fundus image of a subject; An image preprocessing unit that preprocesses the fundus image to generate a preprocessed image; And a prediction unit that determines whether the subject has an optic neuropathy by applying the preprocessed image to the optic neuropathy prediction model.
  • the apparatus for providing an optic nerve atrophy prediction result may obtain a result predicting whether an optic neuropathy such as optic nerve atrophy is present using a fundus image.
  • the examination process is simple and economical compared to conventional optic neuropathy examination methods.
  • by applying the fundus image to a deep learning-based or regression-analysis-based optic neuropathy prediction model, the influence of the diagnostician's subjective interpretation on the prediction result can be excluded, minimizing the possibility of misdiagnosis due to subjectivity.
  • the performance of the prediction model can be maximized by preprocessing the fundus image so that symptoms related to optic neuropathy (e.g., pallor of the optic disc) appear more clearly.
  • FIG. 1 is a schematic block diagram of an apparatus for predicting optic neuropathy according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a method for providing a prediction result of an optic neuropathy according to an embodiment of the present invention.
  • FIG. 3 is a conceptual diagram illustrating a process of generating a preprocessed image according to an embodiment of the present invention.
  • FIG. 4 is a diagram for explaining a combined image according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method for providing an optic neuropathy prediction result according to a first embodiment of the present invention.
  • 6A to 6J are conceptual structural diagrams of an optic neuropathy prediction model according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining a fundus image for learning according to an experimental example of the present invention.
  • FIG. 8 is a diagram for explaining the performance of a predictive model modeled by the fundus image for learning of FIG. 7.
  • FIG. 9 is a flowchart of a method for providing a prediction result of an optic neuropathy according to a second embodiment of the present invention.
  • FIG. 10 is a diagram for explaining a cup depth and a background area according to an embodiment of the present invention.
  • FIG. 11 is a diagram for explaining a training sample for modeling a prediction model based on regression analysis according to an experimental example of the present invention.
  • FIG. 12 is a diagram for explaining the performance of a regression analysis-based prediction model modeled by the training sample of FIG. 11.
  • FIG. 1 is a schematic block diagram of an apparatus for predicting optic neuropathy according to an embodiment of the present invention.
  • the optic neuropathy prediction apparatus 1 includes: a data acquisition unit 10 that obtains a fundus image of a subject; an image preprocessing unit 20 that preprocesses the fundus image and generates an input image to be applied to the prediction model; and a prediction unit 30 that generates a prediction result by applying the input image to the prediction model.
  • the prediction apparatus 1 may further include a modeling unit 50 that models a prediction model.
  • the prediction apparatus 1 may have an aspect that is entirely hardware, entirely software, or partially hardware and partially software.
  • the prediction device 1 may collectively refer to hardware equipped with data processing capability and operating software for driving it.
  • terms such as “unit”, “module”, “device”, or “system” are intended to refer to a combination of hardware and software driven by the hardware.
  • the hardware may be a computing device capable of processing data, including a central processing unit (CPU), a graphics processing unit (GPU), or another processor.
  • software may refer to an executing process, an object, an executable file, a thread of execution, a program, and the like.
  • FIG. 2 is a flowchart of a method for providing a prediction result of an optic neuropathy according to an embodiment of the present invention.
  • the prediction device 1 may provide a user with a result of predicting whether a subject associated with the fundus image (e.g., a patient with optic neuropathy or a normal person) has an optic neuropathy.
  • to this end, the prediction device 1 performs the method for providing an optic neuropathy prediction result, which includes: acquiring a fundus image (S10); preprocessing the fundus image (S20); and applying the preprocessed fundus image to the prediction model to generate a prediction result (S30).
  • the prediction apparatus 1 may further perform the step S40 of providing the prediction result to the user.
  • the data acquisition unit 10 acquires a fundus image of the subject.
  • the fundus image is an image that provides information on the inside of the eyeball, and includes a retinal fundus picture.
  • the acquired fundus image is taken by a fundus camera.
  • the fundus camera includes various fundus cameras capable of obtaining a fundus picture.
  • the fundus camera may include a mydriatic fundus camera, a non-mydriatic fundus camera, an OCT-type fundus camera, and the like.
  • the fundus image is acquired in binocular form, covering both eyes.
  • the present invention is not limited thereto, and the data acquisition unit 10 may acquire a monocular fundus image.
  • a left-right identifier indicating whether the acquired monocular fundus image is of the left eye or the right eye may be further obtained.
  • the data acquisition unit 10 may acquire data related to the fundus image.
  • the data related to the fundus photograph may include subject identification information (eg, name, identification information, subject identifier, etc.) capable of identifying the subject of the fundus photograph.
  • the image preprocessing unit 20 preprocesses the fundus image acquired by the data acquisition unit 10 so that symptoms for optic neuropathy appear more clearly, thereby generating a preprocessed image.
  • the preprocessed image includes any image generated by the image preprocessing unit 20 and may include, for example, a filtered image, an extracted image, a corrected image, and a combined image.
  • FIG. 3 is a conceptual diagram illustrating a process of generating a preprocessed image according to an embodiment of the present invention.
  • the image preprocessor 20 divides the fundus image acquired by the data acquisition unit 10 into per-channel images using one or more color channels, and then generates a preprocessed image from a per-channel image.
  • specifically, the image preprocessor 20 divides the fundus image into per-channel images (red, green, and blue channel images) using the RGB channels, as shown in FIG. 3, and filters out the green channel image, which has high contrast between the optic disc structures.
  • the image preprocessor 20 sets the region substantially required for predicting optic neuropathy in the filtered image, that is, the region of interest, and extracts the region including the region of interest (ROI) to generate an ROI image.
  • the image preprocessor 20 calculates a brightness value of the filtered image, thresholds the filtered image based on the brightness value to compute the boundary of the optic disc, and then sets the region of interest based on that boundary.
  • morphologically, pallor of the optic disc appears as paleness of the neuroretinal rim, which becomes brighter as the disease progresses. That is, in the fundus image, the optic disc has a brighter value than its surroundings.
  • the image preprocessor 20 may detect the relatively bright optic disc by thresholding the filtered image based on a brightness value.
  • the image preprocessor 20 may set a region of interest including the optic disc, extract it, and use it to predict optic neuropathy.
  • the region of interest may further include a predetermined region adjacent to the boundary of the optic disc.
  • the boundary of the optic disc is defined based on the boundary of the neural retina, which may be difficult to delineate depending on the state of the fundus image and the degree of disease progression. Accordingly, the region of interest may be set to further include an additional region adjacent to the thresholded optic disc boundary, so that the region of interest remains clinically significant.
  • for example, the image preprocessor 20 may set as the ROI a region including five consecutive pixels adjacent to the thresholded optic disc boundary. This allows the ROI to include the clinically significant (i.e., substantially required for predicting optic neuropathy) border of the neural retina.
  • the image preprocessor 20 extracts (crops) the region including the set region of interest from the filtered image to generate a preprocessed image.
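The threshold-then-crop step above can be sketched as follows. This is an illustrative stand-in, not the patent's exact procedure: the percentile threshold and the pixel margin are assumptions chosen to mirror the "bright disc plus adjacent region" idea in the text.

```python
import numpy as np

def extract_roi(image, percentile=98, margin=5):
    """Sketch of ROI extraction: threshold the green channel image to find
    the bright optic disc, then crop a bounding box around it, expanded by
    a small margin (analogous to the adjacent-pixel margin in the text)."""
    mask = image >= np.percentile(image, percentile)
    ys, xs = np.nonzero(mask)
    # Bounding box of the brightest region, expanded by the margin
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

# Synthetic image: dark background with a bright disc-like square
img = np.full((100, 100), 50.0)
img[40:60, 30:50] = 200.0
roi = extract_roi(img)
```

A real implementation would threshold on a noise-removed, brightness-corrected image, as described above, so the disc region is not confounded by shading.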
  • the image preprocessor 20 may further perform additional preprocessing operations other than filtering before setting and extracting the ROI from the green channel image.
  • the image preprocessor 20 may further perform a noise removal operation on the green channel image.
  • the image preprocessor 20 may remove noise by applying a mask image and/or a median filter through a morphology operation on the green channel image.
  • the image preprocessor 20 may further perform a brightness correction operation on the green channel image.
  • the fundus image may have non-uniform brightness due to imaging conditions and the hemispherical shape of the retina. Since such non-uniform brightness affects the subsequent removal of blood vessel regions and the detection of regions of interest related to optic neuropathy, brightness correction is required.
  • the image preprocessor 20 may obtain a green channel image having uniform brightness through a brightness correction method using a bias image.
  • an image having uniform brightness is generated by [Equation 3] below: I(i, j) = G(i, j) - B(i, j) ... [Equation 3]
  • (i, j) represents a pixel position in the image
  • G is the image before correction (e.g., the green channel image before or after noise removal)
  • B is the bias image inversely estimated from G
  • I represents the image with uniform brightness
  • that is, the image preprocessor 20 may inversely estimate the bias image, i.e., the shading artifact, from the green channel image, and remove the estimated bias image from the green channel image to generate a brightness-corrected image.
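The bias-removal idea can be sketched as below. The patent does not fully specify how the bias image is inversely estimated, so the block-mean estimator here is an assumption: it approximates the slowly varying shading component, which is then subtracted as in [Equation 3] (with the global mean added back to preserve overall brightness).

```python
import numpy as np

def correct_brightness(green, block=32):
    """Illustrative brightness correction: I = G - B, where the bias image B
    is a coarse (block-mean) estimate of the slowly varying shading."""
    h, w = green.shape
    bias = np.zeros_like(green, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = green[y:y+block, x:x+block]
            bias[y:y+block, x:x+block] = patch.mean()
    # Remove the estimated bias, then restore the global mean brightness
    return green.astype(float) - bias + green.mean()

# Synthetic green channel with a left-to-right shading gradient
rng = np.random.default_rng(0)
g = rng.normal(100, 5, (64, 64)) + np.linspace(0, 50, 64)[None, :]
out = correct_brightness(g)
```

After correction, the spread of the column means (the shading gradient) is substantially reduced while the overall mean brightness is preserved.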
  • FIG. 4 is a diagram for explaining a combined image according to an embodiment of the present invention.
  • the image preprocessor 20 may further perform an operation of combining the green channel image with the blue channel image before extracting the ROI.
  • the green channel image may include a green channel image that has been noise removed and/or brightness corrected.
  • the blue channel image provides high contrast of the optic disc itself. Owing to this characteristic, the combined image shown in FIG. 4(c) can make the optic disc, where the symptoms of optic neuropathy (e.g., optic disc pallor) appear, stand out more clearly.
  • the thresholding process and the ROI extraction may be performed in various ways.
  • for example, a combined image may be generated by combining the blue channel image with a green channel image in which a region of interest has been set by thresholding; the region of the combined image corresponding to the set region of interest is then determined as the final region of interest, and the ROI image can be extracted from it.
  • alternatively, the thresholding process may be applied to the combined image to extract the ROI image.
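The channel split and combination can be sketched as follows. Simple averaging of the green and blue channels is an illustrative choice; the patent describes combining the two channels but does not pin down the exact operator.

```python
import numpy as np

def combine_channels(rgb):
    """Sketch of the green/blue combination: split an RGB image into its
    channels and average the green and blue channel images."""
    red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Averaging keeps both the green channel's rim contrast and the blue
    # channel's disc contrast in one image (assumed operator)
    combined = (green.astype(float) + blue.astype(float)) / 2.0
    return green, blue, combined

# Tiny synthetic RGB image: uniform green and blue intensities
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 1] = 120  # green channel
rgb[..., 2] = 80   # blue channel
g, b, comb = combine_channels(rgb)
```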
  • the image preprocessor 20 provides at least a fundus image from the data acquired from the data acquisition unit 10 to the prediction unit 30.
  • the fundus image provided to the prediction unit 30 depends on the structure of the prediction model used by the prediction unit 30.
  • the prediction unit 30 determines a prediction result indicating whether the subject has an optic neuropathy by applying the fundus photograph of the subject to the optic neuropathy prediction model.
  • the prediction unit 30 may determine whether the target is a patient using a prediction model modeled in advance by the modeling unit 50.
  • the pre-modeled prediction model includes all models modeled after and/or before the fundus image of the subject is input.
  • the optic neuropathy prediction model is configured with a deep learning-based structure, and may be a model including parameters learned by a machine learning algorithm.
  • the predictive model for optic neuropathy may be a regression-based model.
  • the prediction apparatus 1 may include other components not described herein.
  • for example, it may further include a data input device, an output device such as a display or printer, a network, and a network interface and protocol.
  • hereinafter, an embodiment using the deep learning-based prediction model 310 is referred to as the first embodiment, and an embodiment using the regression-analysis-based model 320 is referred to as the second embodiment; the present invention is described in more detail with reference to each.
  • FIG. 5 is a flowchart of a method for providing a predicted result of an optic neuropathy according to a first embodiment of the present invention.
  • the prediction apparatus 1 acquires a fundus image of a subject (S110), and generates a preprocessed image (S120).
  • a color channel image filtered by applying a color channel is obtained (S121), and a preprocessed image may be generated.
  • the filtered color channel image may be a green channel image.
  • step S120 may further include setting a region of interest in the filtered color channel image and extracting it (S127), so that the extracted ROI image is generated as the preprocessed image.
  • step S120 may further include removing noise and/or correcting brightness (S123), and/or generating a combined image by combining the green channel image and the blue channel image (S125).
  • the prediction device 1 calculates a prediction result indicating whether the subject has optic neuropathy by applying the preprocessed image generated in step S120 to the deep learning-based prediction model 310 (S130), and provides the prediction result to the user.
  • the deep learning-based prediction model 310 includes a model previously modeled by the deep learning model generator 510.
  • FIG. 6 is a conceptual structural diagram of a predictive model for optic neuropathy according to an embodiment of the present invention.
  • 6A is an overall structure diagram of an optic neuropathy prediction model
  • FIGS. 6B to 6J are partially enlarged views around each inception block included in FIG. 6A.
  • the optic neuropathy prediction model may include a plurality of inception blocks.
  • a typical convolutional neural network (CNN)-based model consists of feature extraction layers, including convolution filters and pooling layers, and a classification layer including a fully connected layer; features are extracted through multiple layer blocks of convolutional and pooling layers, and the final feature vector is processed through the classification layer.
  • the inception block includes a plurality of different filters; it receives the output of the preceding layer, performs convolution or max pooling through each of the plurality of filters, and concatenates the output results (e.g., feature maps).
  • the inception block includes a 1 ⁇ 1 convolution filter and a higher-dimensional filter (eg, 3 ⁇ 3, 5 ⁇ 5 convolution filters). Accordingly, the inception block can effectively extract features of various dimensions, and the 1 ⁇ 1 convolution filter can reduce the number of parameters and computing resources by reducing the dimensions. As a result, the inception block can extract nonlinear features and have a high computational speed.
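The dimension-reduction role of the 1 × 1 convolution and the channel concatenation described above can be illustrated numerically. This sketch uses plain numpy rather than a deep learning framework, and the branch widths are arbitrary example values.

```python
import numpy as np

def conv1x1(feature_map, weights):
    """A 1x1 convolution is a per-pixel matrix multiply over channels: it
    mixes channels at each spatial position and can shrink the channel
    dimension (and hence the parameter count) before larger filters run."""
    h, w, c_in = feature_map.shape
    c_out = weights.shape[1]  # weights: (c_in, c_out)
    return feature_map.reshape(h * w, c_in).dot(weights).reshape(h, w, c_out)

def inception_concat(branches):
    """Concatenate branch outputs along the channel axis, as the inception
    block does with its parallel convolution/pooling results."""
    return np.concatenate(branches, axis=-1)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 8, 64))                    # output of a previous layer
reduced = conv1x1(fmap, rng.normal(size=(64, 16)))    # 64 -> 16 channels
branch2 = conv1x1(fmap, rng.normal(size=(64, 32)))    # a second parallel branch
out = inception_concat([reduced, branch2])            # channels add up: 16 + 32
```

In a full inception block the reduced maps would then pass through 3 × 3 or 5 × 5 convolutions; the 1 × 1 reduction is what keeps those larger filters cheap.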
  • the deep learning-based optic neuropathy prediction model 310 of the present invention may be, for example, the GoogLeNet model of non-patent document 1 shown in FIG. 6, but is not limited thereto; it may be any of a variety of deep learning-based models capable of extracting nonlinear features of the fundus image.
  • the predictive model 310 of FIG. 6 that predicts the presence of optic neuropathy is modeled using a plurality of training samples by the modeling unit 50 (eg, the deep learning model generation unit 510).
  • each training sample may include a fundus image for training and disease data indicating whether optic neuropathy is present.
  • the disease data may be expressed as a binary label indicating true or false of optic neuropathy.
  • the fundus image for learning may include a specific fundus image and/or an augmentation image that augments the specific fundus image.
  • the augmented image may include an image in which a specific image is flipped horizontally or vertically, or scaled.
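The augmentation just described can be sketched directly with array operations. Nearest-neighbour 2x scaling stands in for the unspecified scaling method, which is an assumption of this sketch.

```python
import numpy as np

def augment(image):
    """Sketch of the augmentation above: horizontal flip, vertical flip,
    and a simple 2x nearest-neighbour scaling of a training image."""
    h_flip = image[:, ::-1]                                  # horizontal flip
    v_flip = image[::-1, :]                                  # vertical flip
    scaled = np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)  # 2x scaling
    return [h_flip, v_flip, scaled]

img = np.arange(12.0).reshape(3, 4)
h_flip, v_flip, scaled = augment(img)
```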
  • the modeling unit 50 updates the parameters of the prediction model during training. Specifically, the fundus images included in the plurality of training samples are input to the prediction model to classify each image into an optic neuropathy group or a normal group, and the parameters are updated to reduce the error between the classification result (i.e., the model application result) and the disease data included in the training sample (i.e., the actual result). Through this update process, the modeling unit 50 trains the prediction model to classify an input fundus image into the optic neuropathy group or the normal group.
  • the modeling unit 50 may be a component included in the prediction apparatus 1 as shown in FIG. 1.
  • the present invention is not limited thereto.
  • the modeling unit 50 may also be located remotely from the prediction device 1; in that case, the device 1 may receive and store a prediction model generated in advance by the remote modeling unit 50 before the subject's examination, and use it for the optic neuropathy examination of the subject.
  • the prediction device 1 may further include a storage device (not shown) for storing a prediction model generated in advance.
  • the storage device may include read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
  • the prediction model may be stored in a cloud server, and the device 1 may be configured to communicate with the cloud server to use the prediction model.
  • the prediction unit 30 applies the preprocessed image generated by the image preprocessing unit 20 to the deep learning-based model previously modeled by the modeling unit 50 to determine whether the subject has optic neuropathy.
  • the image input to the prediction model 310 corresponds in form to the fundus images used for training.
  • when the training fundus images are based on the green channel image, the prediction device 1 generates a preprocessed image based on the green channel image via the image preprocessor 20 and applies it to the prediction model 310.
  • likewise, when the training fundus images are based on the green and blue channel images, the prediction device 1 generates a preprocessed image based on the green and blue channel images via the image preprocessor 20 and applies it to the prediction model 310.
  • the prediction model 310 may determine whether the subject of the input fundus image has optic neuropathy, and may output the determination result as a prediction result (S130).
  • the prediction device 1 provides an output result (ie, a prediction result) of the prediction model 310 to a user (S140).
  • FIG. 7 is a diagram for explaining a fundus image for learning according to an experimental example of the present invention
  • FIG. 8 is a diagram for explaining the performance of a predictive model modeled by the fundus image for training of FIG. 7.
  • the modeling unit 50 models the predictive model 310 of FIG. 6 by using the fundus images of the training set and the validation set as training images.
  • in each set, "normal" denotes fundus images of persons without optic neuropathy, and "abnormal" denotes fundus images of persons with optic neuropathy.
  • the modeling unit 50 may model a prediction model using an image obtained by applying a part or all of the augmentation processing shown in FIG. 7 to part or all of the abnormal fundus image.
  • the performance of the prediction model modeled by the fundus image for training in FIG. 7 has an accuracy of about 99%, as shown in FIG. 8.
  • the prediction device 1 can provide a very accurate prediction result to a user.
  • FIG. 9 is a flowchart of a method for providing a prediction result of an optic neuropathy according to a second embodiment of the present invention.
  • the prediction device 1 acquires a fundus image of the subject (S210) and generates a preprocessed image (S220).
  • in step S220, a color channel image is obtained by filtering the fundus image with a color channel (S221), and a preprocessed image may be generated from it.
  • the filtered color channel image may be a green channel image.
  • step S220 may further include removing noise and/or correcting brightness (S223), and/or generating a combined image by combining the green channel image and the blue channel image (S225).
  • since steps S221 to S225 are similar to the above-described steps S110 to S120, a detailed description is omitted.
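The preprocessing steps above (S221 green-channel filtering, S223 brightness correction, S225 green/blue combination) can be sketched with numpy. The specific brightness correction (min-max normalization) and the channel-combination rule (simple averaging) are assumptions for illustration; the patent leaves their exact form open.

```python
import numpy as np

def green_channel(rgb):
    """S221: filter the fundus image down to its green channel."""
    return rgb[:, :, 1].astype(np.float32)

def correct_brightness(channel):
    """S223 (one possible form): min-max normalize to even out illumination."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo + 1e-8)

def combine_green_blue(rgb):
    """S225 (one possible form): average the normalized green and blue channels."""
    g = correct_brightness(rgb[:, :, 1].astype(np.float32))
    b = correct_brightness(rgb[:, :, 2].astype(np.float32))
    return (g + b) / 2.0

rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
pre = combine_green_blue(rgb)
```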
  • the prediction apparatus 1 applies the preprocessed image to the optic neuropathy prediction model 320 to calculate a prediction result indicating whether the subject has optic neuropathy (S230), and provides the prediction result to the user (S240).
  • the optic neuropathy prediction model 320 includes a model previously modeled by the regression analysis model generation unit 520.
  • the prediction device 1 calculates, from the preprocessed image, the values of the feature parameters constituting the prediction model 320 (S231), applies the calculated feature parameter values to the prediction model 320 (S233), and determines whether the subject has optic neuropathy based on the application result (S235).
  • the optic neuropathy prediction model 320 is a regression analysis-based model, and may be a model including a parameter determined by regression analysis.
  • the regression-based optic neuropathy prediction model 320 includes characteristic parameters for analyzing the optic disc finding used to examine for optic neuropathy (i.e., whether the disc is pale).
  • the feature parameters include a brightness correction ratio (BC), and a temporal-to-nasal ratio (TN).
  • FIG. 10 is a diagram illustrating a cup depth and a background area according to an embodiment of the present invention.
  • BC is defined as the ratio of the average brightness of the cup to that of the background area.
  • the background region is defined as a square in the papillomacular bundle nerve fiber layer.
  • pre-processed images are generated from fundus images (FIG. 10(A), (B), (C)), respectively.
  • the background area (white square in FIG. 10) is set on the papillomacular bundle to compensate for the uneven brightness of the disc image.
  • the center of the background area is automatically set at a point one disc diameter from the geometric center of the optic disc; the height of the background area fits within ±10 degrees from the geometric center of the optic disc, and its width is fixed at 15 pixels.
  • the boundary of the optic disc (yellow circle in FIG. 10) and the neuroretinal rim (area between the black circle and the yellow circle in FIG. 10) can be automatically segmented.
  • the neuroretinal rim may correspond to the clinically significant neuroretinal rim of the first embodiment.
  • the brightness of the cup is calculated as the average intensity of the pixels whose brightness exceeds 70% of the maximum value within the optic disc area.
  • TN is defined as the average brightness of the pixels in the temporal region of the neuroretinal rim divided by the average brightness of the pixels in the nasal region of the neuroretinal rim.
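Given segmented pixel regions, the two feature parameters can be computed directly from the definitions above. The sketch below assumes the disc, background, and rim pixels have already been extracted (the segmentation itself is not shown), and uses synthetic values:

```python
import numpy as np

def cup_brightness(disc_pixels):
    """Mean intensity of disc pixels brighter than 70% of the disc maximum."""
    thresh = 0.7 * disc_pixels.max()
    return disc_pixels[disc_pixels > thresh].mean()

def brightness_ratio_bc(disc_pixels, background_pixels):
    """BC: cup brightness divided by mean background-area brightness."""
    return cup_brightness(disc_pixels) / background_pixels.mean()

def temporal_nasal_ratio_tn(temporal_rim, nasal_rim):
    """TN: mean temporal-rim brightness over mean nasal-rim brightness."""
    return temporal_rim.mean() / nasal_rim.mean()

rng = np.random.default_rng(2)
disc = rng.uniform(0.3, 1.0, size=(40, 40))        # synthetic disc region
background = rng.uniform(0.4, 0.6, size=(15, 20))  # synthetic background square
temporal = rng.uniform(0.5, 0.9, size=200)         # synthetic temporal rim pixels
nasal = rng.uniform(0.4, 0.7, size=200)            # synthetic nasal rim pixels
bc = brightness_ratio_bc(disc, background)
tn = temporal_nasal_ratio_tn(temporal, nasal)
```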
  • the prediction model based on regression analysis composed of TN and BC may be expressed by the following [Equation 1]: logit(P) = β0 + β1·BC + β2·TN
  • logit(P) is a value related to the prediction result indicating whether the preprocessed image shows optic disc pallor (i.e., optic neuropathy); in one embodiment, logit(P) corresponds to the probability of having optic disc pallor (i.e., the probability of having optic neuropathy).
  • β0, β1, and β2 are regression coefficients of the model.
  • the regression coefficients (β) are determined by the modeling unit 50 (e.g., the regression model generation unit 520) through regression analysis of training samples, with the presence of optic neuropathy as the dependent variable and BC and TN as independent variables.
  • each regression coefficient represents the influence of its associated independent variable on the dependent variable: β0 is the intercept, reflecting feature parameters not included in the model; β1 indicates the effect of the characteristic parameter BC on the prediction result; and β2 indicates the effect of the characteristic parameter TN on the prediction result.
  • the learning sample may include a learning preprocessed image and labeling data indicating whether a subject of the learning preprocessed image has an optic neuropathy.
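The fitting performed by the regression model generation unit 520 can be sketched as a logistic-regression fit over (BC, TN, label) training samples. The training data, learning rate, and gradient-descent procedure below are illustrative assumptions; the patent does not state which fitting algorithm is used.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Fit beta for logit(P) = b0 + b1*BC + b2*TN by gradient ascent
    on the log-likelihood. X: (n, 2) array of [BC, TN]; y: (n,) 0/1 labels."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))       # predicted probabilities
        beta += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient
    return beta

# synthetic training sample: higher BC and TN imply pallor (label 1)
rng = np.random.default_rng(3)
X = rng.uniform(0.5, 2.0, size=(200, 2))
y = (X @ np.array([5.0, 13.0]) > 16.0).astype(float)
beta = fit_logistic(X, y)
```

A production implementation would more likely use a maximum-likelihood solver (e.g., iteratively reweighted least squares) as found in standard statistics packages.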
  • the prediction unit 30 calculates the values of the feature parameters (TN and BC) from the preprocessed image (S231), applies the calculated feature parameter values to the regression analysis-based prediction model 320 previously modeled by the modeling unit 50 (S233), and determines whether the subject has optic neuropathy based on the application result (S235).
  • the prediction unit 30 calculates the probability of having optic disc pallor (i.e., the probability of having optic neuropathy) based on the application result, and determines whether the subject has optic neuropathy based on that probability (S235).
  • the probability of having optic disc pallor is calculated by the following [Equation 2]: P = 1 / (1 + e^(-logit(P)))
  • the probability P is a value from 0 to 1; the closer it is to 1, the higher the probability of having optic neuropathy.
  • when the probability P exceeds a predetermined threshold, the prediction unit 30 determines that the subject has optic disc pallor (i.e., optic neuropathy) (S235).
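Steps S233 and S235 can be sketched by plugging the coefficients from the experimental example below (β0 = -27.695, β1 = 5.429 for BC, β2 = 13.122 for TN) into Equations 1 and 2. The 0.5 decision threshold and the sample BC/TN values are assumptions, not stated in the source.

```python
import math

# coefficients from the experimental example (FIG. 11)
B0, B1, B2 = -27.695, 5.429, 13.122

def pallor_probability(bc, tn):
    """Equation 2 applied to Equation 1:
    P = 1 / (1 + e^(-logit(P))), logit(P) = B0 + B1*BC + B2*TN."""
    logit = B0 + B1 * bc + B2 * tn
    return 1.0 / (1.0 + math.exp(-logit))

def has_optic_neuropathy(bc, tn, threshold=0.5):
    """S235: decide pallor (optic neuropathy) when P exceeds a threshold;
    the 0.5 cut-off is an assumed value."""
    return pallor_probability(bc, tn) > threshold
```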
  • FIG. 11 is a diagram for explaining a training sample for modeling a regression analysis-based predictive model according to an experimental example of the present invention
  • FIG. 12 is a diagram for explaining the performance of the regression analysis-based prediction model modeled with the training samples of FIG. 11.
  • the modeling unit 50 may model the regression analysis-based prediction model of Equation 1 using training samples (230 in total) comprising a normal group of subjects without pallor, a temporal pallor group of subjects with temporal pallor, and a diffuse pallor group of subjects with diffuse pallor.
  • the p-value, calculated with a Bonferroni post-hoc test, indicates whether each factor is significant; if the p-value is low (e.g., p < 0.05), the factor is statistically significant.
  • from the training samples, the coefficient β1 of the feature parameter BC in Equation 1 is determined to be 5.429, the coefficient β2 of the feature parameter TN to be 13.122, and the intercept β0 to be -27.695.
  • the p-values for the coefficients indicate that they are all significant (the p-values of β0, β1, and β2 are < 0.0001, < 0.0001, and < 0.022, respectively); that is, the higher the BC and TN, the higher the risk of optic disc pallor.
  • the probability P produced by the regression analysis-based prediction model using BC and TN yields the ROC curve of FIG. 12; referring to FIG. 12, the accuracy of the model trained on the samples of FIG. 11 is 96.1% and the AUC value is 0.996, showing quite high performance.
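The AUC figure quoted above summarizes the ROC curve in a single number. As a reference for how such a value is computed, the sketch below evaluates the AUC via the Mann-Whitney U statistic; the labels and scores shown are synthetic examples, not data from the experiment.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # compare every positive score with every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]               # synthetic ground truth
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]   # synthetic model probabilities
```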
  • the regression analysis-based prediction model achieved higher prediction accuracy than an ophthalmologist with more than 10 years of experience.
  • the operation of the prediction apparatus 1 and method according to the above-described embodiments may be at least partially implemented as a computer program and recorded on a computer-readable recording medium.
  • the operations may also be embodied as a program product on a computer-readable medium containing program code executable by a processor to perform any or all of the steps, operations, or processes described.
  • the computer may be a computing device such as a desktop computer, laptop computer, notebook, or smartphone, or may be integrated into any such device.
  • a computer is a device with one or more general-purpose or special-purpose processors, memory, storage, and networking components (wireless or wired).
  • the computer may run, for example, an operating system compatible with Microsoft's Windows, Apple OS X or iOS, a Linux distribution, or an operating system such as Google's Android OS.
  • the computer-readable recording medium includes all types of recording devices that store data readable by a computer.
  • examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage devices, and the like.
  • the computer-readable recording medium may be distributed over a computer system connected through a network, and computer-readable codes may be stored and executed in a distributed manner.
  • functional programs, codes, and code segments for implementing this embodiment may be easily understood by those skilled in the art to which this embodiment belongs.
  • the apparatus for providing an optic neuropathy prediction result can predict whether a subject has optic neuropathy from a fundus image by using machine learning, one of the fourth industrial revolution technologies, such as deep learning and regression analysis.

Abstract

Embodiments provide a method for providing an optic neuropathy prediction result and a device for performing it, the method comprising the steps of: acquiring a fundus image of a subject; preprocessing the fundus image so that symptoms of optic neuropathy stand out more clearly in the image; and determining whether the subject has optic neuropathy by applying the preprocessed image to a pre-modeled optic neuropathy prediction model.
PCT/KR2020/006097 2019-05-08 2020-05-08 Dispositif de prédiction de neuropathie optique à l'aide d'une image de fond d'œil et procédé de fourniture d'un résultat de prédiction de neuropathie optique WO2020226455A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0053838 2019-05-08
KR1020190053838A KR102318194B1 (ko) 2019-05-08 2019-05-08 안저 이미지를 이용한 시신경병증 예측 장치 및 시신경병증 예측 결과 제공 방법

Publications (1)

Publication Number Publication Date
WO2020226455A1 true WO2020226455A1 (fr) 2020-11-12

Family

ID=73051562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/006097 WO2020226455A1 (fr) 2019-05-08 2020-05-08 Dispositif de prédiction de neuropathie optique à l'aide d'une image de fond d'œil et procédé de fourniture d'un résultat de prédiction de neuropathie optique

Country Status (2)

Country Link
KR (1) KR102318194B1 (fr)
WO (1) WO2020226455A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230118746A (ko) * 2022-02-04 2023-08-14 가톨릭대학교 산학협력단 정상 안압 시 녹내장 예측 시스템

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140093376A (ko) * 2013-01-16 2014-07-28 삼성전자주식회사 의료 영상을 이용하여 대상체에 악성 종양이 존재하는지 여부를 예측하는 장치 및 방법
KR20170017614A (ko) * 2015-08-07 2017-02-15 원광대학교산학협력단 의료 영상 기반의 질환 진단 정보 산출 방법 및 장치
US20180374213A1 (en) * 2015-12-15 2018-12-27 The Regents Of The University Of California Systems and Methods For Analyzing Perfusion-Weighted Medical Imaging Using Deep Neural Networks
KR20190014912A (ko) * 2017-08-04 2019-02-13 동국대학교 산학협력단 지정맥 인식 장치 및 방법
KR20190035368A (ko) * 2017-09-26 2019-04-03 연세대학교 산학협력단 뇌 신호로부터 변환한 이미지 기반의 감정 인식 방법 및 장치

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101848322B1 (ko) 2017-10-27 2018-04-20 주식회사 뷰노 피검체에 대한 안저 영상의 소견 및 진단 정보의 생성을 위하여 판독을 지원하는 방법 및 이를 이용한 장치


Also Published As

Publication number Publication date
KR102318194B1 (ko) 2021-10-28
KR20200129440A (ko) 2020-11-18

Similar Documents

Publication Publication Date Title
WO2021040258A1 (fr) Dispositif et procédé de diagnostic automatique d'une maladie à l'aide d'une segmentation de vaisseau sanguin dans une image ophtalmique
EP2188779B1 (fr) Procédé d'extraction graphique d'une zone de langue reposant sur une analyse graphique et géométrique
Kavitha et al. Early detection of glaucoma in retinal images using cup to disc ratio
WO2016163755A1 (fr) Procédé et appareil de reconnaissance faciale basée sur une mesure de la qualité
WO2021025461A1 (fr) Système de diagnostic à base d'image échographique pour lésion d'artère coronaire utilisant un apprentissage automatique et procédé de diagnostic
KR100922653B1 (ko) 눈동자색 보정 장치 및 기록 매체
JP2008146172A (ja) 眼部検出装置、眼部検出方法及びプログラム
WO2012157835A1 (fr) Procédé de gestion d'une image vasculaire médicale en utilisant une technique de fusion d'images
JP2008226194A (ja) 瞳色補正装置およびプログラム
WO2021118255A2 (fr) Système et procédé pour analyser une lésion cornéenne au moyen d'une image de segment oculaire antérieur, et support d'enregistrement lisible par ordinateur
WO2018117353A1 (fr) Procédé de détection de limite entre l'iris et la sclérotique
WO2021153858A1 (fr) Dispositif d'aide à l'identification à l'aide de données d'image de maladies cutanées atypiques
Xiao et al. Retinal hemorrhage detection by rule-based and machine learning approach
US20050147304A1 (en) Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
WO2019045385A1 (fr) Procédé d'alignement d'images et dispositif associé
WO2020226455A1 (fr) Dispositif de prédiction de neuropathie optique à l'aide d'une image de fond d'œil et procédé de fourniture d'un résultat de prédiction de neuropathie optique
WO2019221586A1 (fr) Système et procédé de gestion d'image médicale, et support d'enregistrement lisible par ordinateur
WO2021201582A1 (fr) Procédé et dispositif permettant d'analyser des causes d'une lésion cutanée
Kumar et al. Automatic detection of red lesions in digital color retinal images
WO2020116878A1 Dispositif de prédiction d'anévrisme intracrânien à l'aide d'une photo de fond d'oeil, et procédé de fourniture d'un résultat de prédiction d'anévrisme intracrânien
JPH11238129A (ja) 眼科用画像処理装置
CN111743524A (zh) 一种信息处理方法、终端和计算机可读存储介质
Priya et al. A novel approach to the detection of macula in human retinal imagery
WO2022119347A1 (fr) Procédé, appareil et support d'enregistrement pour analyser un tissu de plaque d'athérome par apprentissage profond basé sur une image échographique
WO2022034955A1 (fr) Appareil pour détecter un ulcère cornéen sur la base d'un traitement d'image et procédé associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20802775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20802775

Country of ref document: EP

Kind code of ref document: A1