WO2024072067A1 - Method for predicting the future visual field of a glaucoma patient using a multimodal deep learning model - Google Patents
Method for predicting the future visual field of a glaucoma patient using a multimodal deep learning model
- Publication number
- WO2024072067A1 (PCT/KR2023/014957; KR2023014957W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- visual field
- data
- oct
- test data
- field test
- Prior art date
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/102—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- the technology described below relates to a visual field prediction technique for glaucoma patients using a multimodal deep learning model.
- Glaucoma is a progressive optic neuropathy that can lead to blindness. Glaucoma diagnosis generally uses visual field testing and optical coherence tomography (OCT) images.
- the technology described below seeks to provide a technique for predicting a subject's visual field using OCT data that contains structural information about optic nerve head damage related to glaucoma along with visual field test results.
- a method for predicting the future visual field of a glaucoma patient using an artificial intelligence model may include: extracting features from OCT data to obtain OCT feature data; inputting visual field test data, the OCT feature data, and time information into a prediction model; and using the output of the prediction model to obtain a final visual field prediction result based on the most recent visual field test data and the last time information (Last time info) corresponding to that data.
- a future visual field prediction device may include the following components:
- an input unit that receives OCT data and visual field test data;
- a control unit that extracts features from the OCT data to obtain OCT feature data, inputs the visual field test data, the OCT feature data, and time information into a prediction model, and uses the model output together with the latest visual field test data and the last time information (Last time info) to obtain a final visual field prediction result; and
- an output unit that outputs the visual field prediction result.
- the technology can produce visual field test results that can evaluate the subject's glaucoma at a future time point.
- the technology described below enables individualized treatment by quantifying/visualizing the progression of the glaucoma patient based on his or her current condition.
- Figure 1 shows an AI device 100 according to an embodiment of the present invention.
- Figure 2 shows a flowchart of a field of view prediction method according to an embodiment of the present invention.
- Figure 3 shows a field of view prediction model according to an embodiment of the present invention.
- Figure 4 shows a method of learning a visual field prediction model according to an embodiment of the present invention.
- Figure 5 shows a method of learning a visual field prediction model according to an embodiment of the present invention.
- Figure 6 shows the results of comparing the performance of models predicting the subject's field of view.
- terms such as first, second, A, and B may be used to describe various components, but the components are not limited by these terms; they are used only to distinguish one component from another. For example, a first component may be named a second component without departing from the scope of the technology described below, and similarly, the second component may also be named the first component.
- the term "and/or" includes any one of a plurality of related stated items or a combination thereof.
- the division of components is by function, not necessarily by hardware. That is, two or more components described below may be combined into one component, or one component may be divided into two or more components with more detailed functions.
- each of the components described below may additionally perform some or all of the functions handled by other components, and some of the main functions handled by each component may instead be performed entirely by another component.
- each process forming the method may occur in a different order from the specified order unless a specific order is clearly stated in the context. That is, each process may occur in the specified order, be performed substantially simultaneously, or be performed in the reverse order.
- Tomography data may refer to image data acquired using optical coherence tomography (OCT) (hereinafter used interchangeably with OCT data).
- Optical coherence tomography can refer to tests that provide different types of data about the optic nerve head, such as thickness maps and tomography (transverse and longitudinal).
- OCT's thickness maps provide information about the retinal nerve fiber layer (RNFL) thickness around the optic disc, which is generally directly related to glaucomatous damage
- the tomography data include horizontal tomograms and vertical tomograms and can provide information about the structural features of the optic disc.
- OCT data may include information about structures such as the lamina cribrosa and border tissue.
- Visual field test data may refer to data on the results of a patient's visual field test.
- the visual field test per patient can be performed multiple times at time intervals and stored in correspondence with each timeline.
- the present invention discloses a method of operating a deep learning model capable of predicting future visual field patterns from the shape of the optic nerve using the above data (perimetry data, OCT data, etc.).
- the visual field prediction model is a glaucoma prognosis prediction model developed using image information of glaucoma patients: a deep learning model that predicts the future visual field defect pattern of a glaucoma patient using existing visual field test data and OCT image data such as thickness maps and tomography.
- the field of view prediction model according to an embodiment of the present invention may be composed of a combination of CNN and RNN to extract features from image data and serial data.
- the vision prediction model can apply an efficient training mechanism with weighted loss to manage noisy data.
- the training mechanism will be explained later.
- a visual field prediction model with multiple inputs can help to better learn structure-function relationships, and quantitative results show improved performance for the proposed model.
- the visual field prediction device is a device that consistently processes input data and performs the calculations necessary to predict the visual field pattern of a glaucoma patient according to a specific model or algorithm.
- the field of view prediction device can be implemented in the form of a PC, a server on a network, a smart device, or a chipset with an embedded design program.
- Figure 1 is an example of a configuration diagram of a field of view prediction device 100 according to an embodiment of the present invention.
- the field of view prediction device 100 may be implemented as a mobile device, such as a mobile phone, smartphone, personal digital assistants (PDA), portable multimedia player (PMP), navigation, tablet PC, wearable device, or vehicle.
- the field of view prediction device 100 may include a communication unit 110, an input unit 120, a control unit 130, an interface unit 140, a memory 150, and an output unit 160.
- the communication unit 110 can transmit and receive data with external devices such as other electronic devices or servers using wired or wireless communication technology.
- the communication unit 110 may transmit and receive sensor information, user input, learning models, and control signals with external devices.
- the input unit 120 can acquire various types of data.
- the input unit 120 may include a camera for inputting video signals, a microphone for receiving audio signals, and a user input unit for receiving information from the user.
- the camera or microphone may be treated as a sensor, and the signal obtained from the camera or microphone may be referred to as sensing data or sensor information.
- the output unit 160 may generate output related to vision, hearing, or tactile sensation.
- the output unit 160 may include a display unit that outputs visual information, a speaker that outputs auditory information, and a haptic module that outputs tactile information.
- the memory 150 can store data supporting various functions.
- the memory 150 may store input data obtained from the input unit 120 and various data obtained from a server or connected device.
- the control unit 130 may determine at least one executable operation of the field of view prediction device 100. Additionally, the control unit 130 may control the components of the field of view prediction device 100 to perform the determined operation.
- Figure 2 shows a flowchart of a field of view prediction method according to an embodiment of the present invention.
- Figure 3 shows a field of view prediction model according to an embodiment of the present invention.
- the visual field prediction model may include a preprocessing unit 310 that performs preprocessing to extract features of OCT data, a prediction unit 320 that processes a series of images and visual field test data, and a result generating unit 330 that derives the final prediction result.
- the visual field prediction device 100 may perform image preprocessing before feeding (inputting) the OCT data to the visual field prediction model (S210).
- the visual field prediction model may use input data to perform a data preprocessing process to predict the patient's future visual field pattern corresponding to the data.
- a visual field prediction model can extract features through preprocessing of OCT data.
- OCT data may include thickness maps, horizontal tomograms, and vertical tomograms.
- all images can be normalized (pixel values from -1 to 1) and resized to 224×224.
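As a concrete illustration, the normalization and resizing step might look like the following minimal NumPy sketch; the interpolation method (nearest-neighbour) and the 8-bit input range are assumptions, since the text does not specify them.

```python
import numpy as np

def preprocess_oct(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to (size, size) and scale 8-bit pixels to [-1, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size          # source row for each output row
    cols = np.arange(size) * w // size          # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 127.5 - 1.0

img = np.random.randint(0, 256, (496, 768), dtype=np.uint8)  # toy OCT slice
x = preprocess_oct(img)
print(x.shape)  # (224, 224)
```

In practice a library resize (e.g. with bilinear interpolation) would typically be used; the point is only the shared target shape and the [-1, 1] range.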
- a convolutional neural network (CNN) model may be used to analyze image data when preprocessing for the visual field prediction model.
- each of the thickness map, horizontal tomogram, and vertical tomogram images in the OCT data can be input to a predetermined preprocessing model 310, and features (features from image data) can be extracted from each.
- the feature may be defined as OCT feature data.
- the visual field prediction device can use a pre-trained 'ResNet-50' as the preprocessing model's feature extractor.
- the feature extractor may output a feature vector for each input image.
- the OCT feature data may include a thickness map feature that provides information about the patient's entire field of view, and vertical and horizontal tomography map features that provide information about a specific area of the patient's field of view.
- the visual field prediction model can generate a feature vector that places emphasis on thickness map features among OCT data during preprocessing.
- the output vectors of the thickness map, vertical tomography, and horizontal tomography may consist of a 512-d vector, a 128-d vector, and a 128-d vector, respectively.
- the visual field prediction model can generate OCT feature data based on OCT data through the above preprocessing process.
- visual field test data, OCT feature data, and time information can be input into the prediction model (S220).
- the prediction model may be the prediction unit 320.
- the visual field test data input to the prediction model may be the patient's previous visual field test data (Previous VFs test data); the visual field test data can be reshaped into a vector and normalized to have a mean of 0 and a standard deviation of 1.
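The reshaping and normalization of the visual field test data could be sketched as follows; the 8×9 grid is a hypothetical stand-in for an actual perimetry layout, which the text does not specify.

```python
import numpy as np

def standardize_vf(vf: np.ndarray) -> np.ndarray:
    """Flatten a visual-field sensitivity grid and normalize to mean 0, std 1."""
    v = vf.astype(np.float32).ravel()
    return (v - v.mean()) / (v.std() + 1e-8)   # small epsilon guards against constant input

vf = np.random.uniform(0, 35, (8, 9))  # toy grid of sensitivities in dB
z = standardize_vf(vf)
```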
- the time information may include two types of information: timeline and time interval.
- the timeline information may refer to the time between the selected image and the first image. Additionally, the interval information may mean the time between the selected image and the previous image.
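The two kinds of time information described above can be computed directly from the exam dates; in this sketch, months are used as the unit, which is an assumption.

```python
import numpy as np

def time_info(visit_months):
    """Timeline = time since the first exam; interval = time since the previous exam."""
    t = np.asarray(visit_months, dtype=np.float32)
    timeline = t - t[0]
    interval = np.diff(t, prepend=t[0])   # first interval is defined as 0
    return timeline, interval

timeline, interval = time_info([0, 6, 14, 26])
# timeline: 0, 6, 14, 26   interval: 0, 6, 8, 12
```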
- the visual field prediction model can generate a multidimensional vector by connecting OCT feature data, visual field test data, and time information for each time step.
- the multidimensional vector can be used as an input to a Long Short-Term Memory (LSTM) network.
- the output of the LSTM may be a 128-d vector.
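Assembling the per-time-step input vector for the LSTM might look like this; the OCT feature sizes follow the description above, while the visual field vector length (52 points, as in a 24-2 test) and the 2-d time information are assumptions for illustration.

```python
import numpy as np

# 512-d thickness map, 128-d vertical and 128-d horizontal tomogram features
# come from the text; D_VF and D_TIME are assumed.
D_THICK, D_VERT, D_HORZ, D_VF, D_TIME = 512, 128, 128, 52, 2

def lstm_step_input(thick_f, vert_f, horz_f, vf, time_f):
    """Concatenate all modalities into one per-time-step input vector."""
    return np.concatenate([thick_f, vert_f, horz_f, vf, time_f])

x_t = lstm_step_input(np.zeros(D_THICK), np.zeros(D_VERT), np.zeros(D_HORZ),
                      np.zeros(D_VF), np.zeros(D_TIME))
print(x_t.shape)  # (822,)
```

Stacking one such vector per exam yields the sequence fed to the LSTM.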
- the visual field prediction model ideally takes paired data (visual field test data and OCT data) as input, but if either element of a pair is missing, mask information is used to replace the missing data.
- the mask information may mean a vector indicating whether the OCT data is missing or available. For example, a mask could define 1 as available and 0 as missing.
- to substitute for a missing OCT image, virtual OCT data can be generated.
- the virtual OCT data may be an image filled with black.
- when OCT data is missing and the mask information indicates '0', the OCT data may be replaced with virtual OCT data before being input into the visual field prediction model.
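The mask-based substitution could be sketched as follows; representing "black" as −1 assumes the earlier [-1, 1] pixel scaling.

```python
import numpy as np

def fill_missing_oct(oct_imgs, mask, size=224):
    """Replace time steps whose mask is 0 (OCT missing) with an all-black image."""
    black = np.full((size, size), -1.0, dtype=np.float32)  # -1 == black under [-1, 1] scaling
    return [img if m == 1 else black.copy() for img, m in zip(oct_imgs, mask)]

imgs = [np.zeros((224, 224), np.float32), None, np.zeros((224, 224), np.float32)]
filled = fill_missing_oct(imgs, mask=[1, 0, 1])  # middle exam has no OCT
```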
- the visual field prediction model according to an embodiment of the present invention can obtain LSTM output results (S230).
- the visual field prediction model can obtain a final visual field prediction result based on the LSTM output, the latest visual field test data (Last VF), and the last time information (Last time info) corresponding to that data (S240).
- the present invention can solve the above problem by using OCT data and visual field test data together.
- the most recent visual field test data (only the last visual field test) can be used as reference visual field test data to predict the future visual field.
- Prediction accuracy can be greatly improved based on the LSTM output and the reference visual field test data.
- the visual field prediction model can predict the future visual field by combining LSTM output, previous visual field test data, and time information, and medical staff can diagnose the degree of progression of a glaucoma patient based on this.
- the future visual field may be generated in the form of visual field test data to be used as reference data for future visual field tests.
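A minimal sketch of a result generator in this spirit is shown below: the 128-d LSTM output comes from the text, while the fully connected layer, the 52-point visual field vector, and the 2-d time information are illustrative assumptions (in practice the weights would be learned, not random).

```python
import numpy as np

rng = np.random.default_rng(0)
D_LSTM, D_VF, D_TIME = 128, 52, 2   # D_VF and D_TIME are assumed sizes

# Hypothetical fully connected result generator (stand-in for learned weights).
W = rng.normal(0.0, 0.01, (D_VF, D_LSTM + D_VF + D_TIME))
b = np.zeros(D_VF)

def predict_future_vf(lstm_out, last_vf, last_time):
    """Combine LSTM output, last VF, and last time info into a predicted VF."""
    z = np.concatenate([lstm_out, last_vf, last_time])
    return W @ z + b

pred = predict_future_vf(np.zeros(D_LSTM), np.zeros(D_VF), np.zeros(D_TIME))
print(pred.shape)  # (52,)
```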
- Figures 4 and 5 are diagrams showing a method of training a visual field prediction model according to an embodiment of the present invention.
- noisy visual field test data may be generated depending on the patient's current condition, so there is difficulty in obtaining stable data.
- to handle noisy data, the field of view prediction model re-assigns weights to samples, and the model can be trained with visual field test data samples re-weighted according to their degree of noise.
- because OCT thickness maps are consistent and reliable, this re-weighting process can be omitted for OCT data.
- the training procedure may employ a regression model that predicts visual field test data based on the thickness map.
- the input of the regression model may be a thickness map, and the output may be the visual field test data corresponding to that thickness map.
- the backbone of the regression model may be a pre-trained ResNet-50.
- the loss function is the mean squared error (MSE loss), and data augmentation can be applied as in the main model.
- the visual field prediction model may learn structure-function relationships using the regression model. After training, the model can predict common glaucoma patterns based on the thickness map.
- the mean absolute error (MAE) between the ground-truth and predicted visual field test data can be calculated for each patient. Since the calculated MAE value (D) differs between patients, it can be normalized (D'). D is calculated for each time step t over time, and weights are then computed from the error value of each interval.
- samples whose normalized error D' is less than a threshold (TH) may be regarded as good samples, and the rest as noisy.
- the weight of the good sample may be set higher than the weight of the noisy sample.
- the weight w_i^t for each time step may be set to 1 for samples with a small error value, while the weight for noisy samples may be set using an exponential function.
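The re-weighting rule could be implemented as in the following sketch; the min-max normalization, the threshold choice, and the decay rate `alpha` are illustrative assumptions, since the text specifies only "1 for low-error samples, exponential otherwise".

```python
import numpy as np

def sample_weights(mae, threshold=0.5, alpha=5.0):
    """Weight = 1 for low-error ('good') samples; exponentially decayed otherwise."""
    d = np.asarray(mae, dtype=np.float32)
    d_norm = (d - d.min()) / (d.max() - d.min() + 1e-8)    # D': normalized error
    return np.where(d_norm < threshold, 1.0, np.exp(-alpha * (d_norm - threshold)))

w = sample_weights([0.5, 0.7, 2.4, 6.0])  # toy per-sample MAE values (dB)
# the three low-error samples get weight 1; the noisiest sample is strongly down-weighted
```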
- the field of view prediction model can be trained using training data with a weighted loss that reduces the influence of noisy data; the loss function used to train the model can be the weighted mean squared error (MSE):

  Loss = (1/N) Σ_i w_i (x_{i,pred} − x_{i,GT})²  (Equation 1)

- in Equation 1, w_i is the re-adjusted weight, x_{i,pred} is the predicted visual field test data, and x_{i,GT} is the ground-truth value.
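A weighted MSE in the spirit of Equation 1 can be sketched as follows; averaging over both samples and visual field points is an assumption about how the reduction is performed.

```python
import numpy as np

def weighted_mse(w, x_pred, x_gt):
    """Mean over samples of w_i times the per-sample mean squared error."""
    sq = ((np.asarray(x_pred) - np.asarray(x_gt)) ** 2).mean(axis=1)  # per-sample MSE
    return float((np.asarray(w) * sq).mean())

x_gt = np.zeros((3, 4))      # 3 toy samples, 4 VF points each
x_pred = np.ones((3, 4))     # every point off by 1
loss = weighted_mse([1.0, 1.0, 0.5], x_pred, x_gt)
print(loss)  # ≈ 0.8333
```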
- the future visual field prediction model disclosed herein is a deep learning model that predicts future visual field test data from multiple inputs: previous visual field test data and OCT data (thickness map, vertical and horizontal tomographic images).
- a CNN-RNN model is proposed to analyze both series data (previous visual field test data) and image data (OCT data). Additionally, a mechanism to detect and re-weight samples was introduced to handle noisy data.
- the re-weighting method can improve performance with or without OCT images.
- Figure 6 shows the results of comparing the performance of models predicting the subject's field of view.
- the researchers compared the performance of the aforementioned model with a conventional model predicting visual field (VF).
- Figure 6 shows the results of comparing the performance (MAE) of the researcher's proposed model (denoted by Ours) and the conventional model (denoted by Park's method and Berchuck's method) described in Figure 3.
- Berchuck's method is a VAE (variational autoencoder)-based model that predicts future VF based on past VF (Berchuck, S. I., Mukherjee, S. & Medeiros, F.
- Park's method is an RNN (recurrent neural network)-based model that predicts future VF based on past VF (Park, K., Kim, J. & Lee, J. Visual field prediction using recurrent neural network. Sci. Reports 9, 8385). Looking at Figure 6, it can be seen that the performance of the proposed model is higher than that of the conventional model.
- the method for predicting the visual field of a glaucoma patient as described above may be implemented as a program (or application) including an executable algorithm that can be executed on a computer.
- the program may be stored and provided in a temporary or non-transitory computer readable medium.
- a non-transitory readable medium is a medium that stores data semi-permanently and can be read by a device, as opposed to a medium that stores data for a short period of time, such as a register, cache, or memory.
- the various applications or programs described above may be stored on a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), or electrically erasable PROM (EEPROM).
- temporarily readable media refer to various types of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
Abstract
Embodiments of the present invention disclose a method and apparatus for predicting the future visual field of a glaucoma patient. The method for predicting the future visual field of a glaucoma patient using an artificial intelligence model comprises the steps of: extracting features included in OCT data to obtain OCT feature data; inputting visual field test data, the OCT feature data, and time information into a prediction model; and obtaining, from an output result of the prediction model, a final visual field prediction result based on the latest visual field test data and the last time information corresponding to the latest visual field test data.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2022-0122623 | 2022-09-27 | | |
| KR1020220122623A (KR20240043488A) | 2022-09-27 | 2022-09-27 | Method for predicting the future visual field of a glaucoma patient using a multimodal deep learning model |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024072067A1 (fr) | 2024-04-04 |
Family
ID=90478744
Family Applications (1)
| Application Number | Title | Filing Date |
|---|---|---|
| PCT/KR2023/014957 (WO2024072067A1) | Method for predicting the future visual field of a glaucoma patient using a multimodal deep learning model | 2023-09-27 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20240043488A (fr) |
WO (1) | WO2024072067A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
| KR20200020079A (ko) * | 2018-08-16 | 2020-02-26 | | Two-step ranking convolutional neural network using a CAM-extracted ROI as an intermediate input for glaucoma diagnosis in fundus photographs |
| KR20200114837A (ko) * | 2019-03-29 | 2020-10-07 | | Method and apparatus for generating features for glaucoma diagnosis, and method and apparatus for diagnosing glaucoma using the same |
| US20220058803A1 (en) * | 2019-02-14 | 2022-02-24 | Carl Zeiss Meditec Ag | System for OCT image translation, ophthalmic image denoising, and neural network therefor |
| KR20220053208A (ko) * | 2020-10-22 | 2022-04-29 | | Deep-learning-based fundus image classification apparatus and method for diagnosing ophthalmic diseases |
| KR20220095291A (ko) * | 2020-12-29 | 2022-07-07 | | System and method for predicting visual function change based on big data and artificial intelligence technology |
- 2022-09-27: Korean application KR1020220122623A filed; published as KR20240043488A (status unknown)
- 2023-09-27: PCT application PCT/KR2023/014957 filed; published as WO2024072067A1 (status unknown)
Also Published As
Publication number | Publication date |
---|---|
KR20240043488A (ko) | 2024-04-03 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | | Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 23873178; Country of ref document: EP; Kind code of ref document: A1) |