WO2020117021A1 - Autonomous fundus image capturing device, autonomous fundus image capturing and reading device, and autonomous fundus image capturing and reading system


Info

Publication number
WO2020117021A1
WO2020117021A1 (PCT/KR2019/017258)
Authority
WO
WIPO (PCT)
Prior art keywords
fundus image
image
fundus
examinee
pupil
Prior art date
Application number
PCT/KR2019/017258
Other languages
English (en)
Korean (ko)
Inventor
강욱
신일형
박상민
박기호
오백록
장주영
Original Assignee
인더스마트 주식회사
서울대학교병원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인더스마트 주식회사, 서울대학교병원 filed Critical 인더스마트 주식회사
Publication of WO2020117021A1 publication Critical patent/WO2020117021A1/fr

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113 Objective types for determining or recording eye movement
    • A61B 3/12 Objective types for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14 Arrangements specially adapted for eye photography
    • A61B 3/145 Arrangements specially adapted for eye photography by video means
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/02416 Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/25 Bioelectric electrodes therefor
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B 5/398 Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]

Definitions

  • The present invention relates to a self fundus image capturing device, a self fundus image capturing and reading device, and a self fundus image capturing and reading system; more specifically, to devices with which an examinee can capture a fundus image by himself or herself without the help of medical staff, and which can read lesions from the captured fundus image to support medical diagnosis.
  • Fundus photography is one of the most widely used ophthalmic imaging methods for diagnosis or record-keeping in ophthalmology, and is mainly used for ophthalmic disease screening.
  • Deep learning models, including machine learning models and more specifically Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), are being trained to detect, classify, and characterize medical images.
  • In the fundus field, where image reading is important, reading and diagnosis of an examinee's fundus image is likewise supported using such learning models.
  • The present invention has been devised to solve the above problems. One object of the present invention is to provide a self fundus image capturing device that captures the fundus image when the examinee's pupil is dilated enough for the image to be taken.
  • Another object of the present invention is to provide a self fundus image capturing device that re-captures the fundus image when the captured image is unsuitable as a fundus image for lesion reading, thereby obtaining a clear fundus image suitable for reading.
  • Another object of the present invention is to provide a self fundus image capturing device configured to photograph the left eye and the right eye simultaneously when capturing a fundus image, thereby obtaining equal clarity in both fundus images.
  • Another object of the present invention is to provide a self fundus image capturing and reading device, and a self fundus image capturing and reading system, that read lesions by applying an artificial intelligence algorithm to the fundus image to be read, thereby supporting the medical staff's diagnosis of lesions.
  • The apparatus for capturing a fundus image includes: a display unit displaying a focusing image; a first sensor unit measuring the pupil state of an examinee; a second sensor unit measuring blinking of the examinee's eyelids; a camera unit photographing at least one of a pupil image and a fundus image of the examinee; and a control unit that controls the camera unit to capture the fundus image based on the measured value of the first sensor unit, the measured value of the second sensor unit, and the pupil image of the examinee, and that determines the captured fundus image of the examinee as a fundus image to be read when it is within a preset information range of the fundus image.
  • the pupil state of the subject may be measured by the pulse of the subject.
  • the first sensor unit includes an ECG (Electro-CardioGram) sensor or a PPG (Pulse-PlethysmoGram) sensor.
  • ECG Electro-CardioGram
  • PPG Pulse-PlethysmoGram
  • The second sensor unit may include at least one left-eye sensor and at least one right-eye sensor.
  • the second sensor unit may include an Electro OculoGram (EOG) sensor.
  • EOG Electro OculoGram
  • the focusing image may include a focusing image for the left eye and a focusing image for the right eye.
  • the controller may control the camera unit to first capture one fundus image among the fundus image of the left eye and the fundus image of the right eye, and then photograph the remaining fundus image.
  • The control unit may control the left-eye focusing image and the right-eye focusing image so that they are displayed on the display unit separated from each other.
  • the controller may control the camera unit to simultaneously photograph the fundus image of the left eye and the fundus image of the right eye of the examinee.
  • The camera unit may be controlled to photograph the fundus image of the subject.
  • the controller may determine whether the pupil size in the pupil image of the subject photographed by the camera unit is equal to or greater than a pupil size of a preset pupil image based on a machine learning algorithm or a DSP algorithm.
  • the controller may determine whether the photographed fundus image of the examinee is within a preset information range of the fundus image based on the machine learning algorithm.
  • the controller may control the camera unit to re-photograph the fundus image of the examinee when the photographed fundus image of the examinee is outside the preset range of the fundus image.
  • the controller may determine the photographed fundus image of the examinee as a lesion reading target fundus image.
  • The camera unit is controlled to photograph the fundus image of the examinee, and when the examinee's fundus image is within a preset range of the fundus image, it is determined as the fundus image to be read for lesions; the device may include a control unit for reading the lesion from that fundus image to be read.
  • the controller may determine whether the photographed fundus image of the examinee is within a preset information range of the fundus image based on the machine learning algorithm.
  • the control unit may read the lesion from the fundus image to be read, based on the machine learning algorithm.
  • The device may further include a second display unit displaying, to the test subject, the result of reading the lesion from the fundus image to be read.
  • An auto fundus image capturing and reading system for achieving the above another object includes an auto fundus image capturing apparatus and a lesion reading apparatus for reading a lesion from a fundus image to be read that is received from the auto fundus image capturing apparatus.
  • The self fundus image capturing device includes a display unit for displaying the focusing image, a first sensor unit for measuring the pupil state of the examinee, a second sensor unit for measuring blinking of the examinee's eyelids, a camera unit photographing at least one of the pupil image and the fundus image of the examinee, and a control unit that controls the camera unit based on the measured value of the first sensor unit, the measured value of the second sensor unit, and the pupil image of the examinee, and determines the captured fundus image of the examinee as a fundus image to be read when it is within a preset range of the fundus image.
  • the controller may determine whether the photographed fundus image of the examinee is within a preset information range of the fundus image based on the machine learning algorithm.
  • the lesion reading apparatus may read a lesion from the fundus image to be read based on the machine learning algorithm.
  • Since the fundus image is captured when the examinee's pupil is dilated enough for imaging, the examinee can obtain a clear fundus image by himself or herself without the help of medical staff.
  • When the captured fundus image is unsuitable as a fundus image for lesion reading, the device is configured to re-capture the fundus image, which is advantageous for obtaining a fundus image suitable for reading.
  • The present invention is configured to photograph the left and right eyes simultaneously when capturing the fundus image, and thus has the advantage of obtaining uniform clarity for both fundus images.
  • Since the lesion is read from the fundus image, there is the further advantage of supporting the medical staff's diagnosis of the lesion.
  • FIG. 1 schematically shows a configuration diagram of an auto fundus imaging apparatus according to an embodiment of the present invention.
  • FIG. 2 schematically shows an image for focus displayed on a display unit of an autonomous fundus imaging apparatus according to an embodiment of the present invention.
  • Figure 4 schematically shows the PPG signal measured by the PPG sensor.
  • FIG. 5 is a view schematically illustrating a process of horizontally focusing an examinee according to an embodiment of the present invention.
  • FIG. 6 illustrates a fundus image of the left eye and a fundus image of the right eye photographed by a camera according to an embodiment of the present invention.
  • FIG. 7 schematically shows a configuration diagram of an auto fundus image photographing and reading device according to an embodiment of the present invention.
  • FIG. 8 schematically illustrates a configuration diagram of an autonomous fundus image photographing and reading device according to another embodiment of the present invention.
  • FIG. 9 schematically shows a configuration diagram of an autonomous fundus imaging and reading system according to an embodiment of the present invention.
  • FIG. 10 schematically illustrates the appearance of an auto fundus imaging apparatus according to an embodiment of the present invention.
  • FIG. 11 schematically illustrates a form in which an auto fundus imaging device according to an embodiment of the present invention is connected to another support device.
  • FIG. 1 schematically shows a configuration diagram of an auto fundus imaging apparatus according to an embodiment of the present invention, and FIG. 2 schematically shows the focusing image displayed on the display unit of the auto fundus imaging apparatus according to an embodiment of the present invention.
  • An auto fundus imaging apparatus 100 includes a display unit 110, a first sensor unit 120, a second sensor unit 130, a camera unit 140, and a computing unit C.
  • the computing unit C may include a control unit 150, a memory 160, a storage 170, an input/output interface 184, and the like.
  • the controller 150 may include one or more processors.
  • an image for focus is displayed on the display unit 110 to focus an examinee before taking a fundus image.
  • the display unit 110 may include a first region 112 in which the left focus image L is displayed and a second region 114 in which the right focus image R is displayed.
  • a wall W is provided between the first region 112 and the second region 114 so that both regions are physically separated.
  • the first sensor unit 120 may measure the pupil state of the examinee.
  • the pupil state of the subject may be measured by the pulse of the subject.
  • The first sensor unit 120 may use any suitable sensor technology, including a pressure sensor, a temperature sensor, a mechanical sensor, a motion sensor, an optical sensor, and an electronic sensor, but is not limited thereto. According to various embodiments, for example, at least one of an Electromyography (EMG) sensor, an Electrocardiogram (ECG) sensor, and a Photoplethysmogram (PPG) sensor may be used as the first sensor unit 120.
  • EMG Electro MyoGraphy
  • ECG Electro CardioGram
  • PPG Pulse PlethysmoGram
  • the ECG sensor is also referred to as an EKG sensor, and is a sensor that measures an electrocardiogram, a weak electrical signal that reflects the electrical activity stage of the heart that can be measured on the body surface.
  • FIG. 3 schematically shows the ECG signal measured by the ECG sensor. As illustrated in FIG. 3, the signal can be divided into a regular low-frequency region and an irregular high-frequency region according to how regular or irregular the R-R interval is.
  • When the value is actually extracted quantitatively, the R-R interval changes gradually over time.
  • This change in the R-R interval occurs because the autonomic nervous system exerts antagonistic control over the heart's rhythm generator. Therefore, by analyzing changes in the R-R interval, the activity of the sympathetic and parasympathetic nervous systems that make up the autonomic nervous system can be assessed.
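As a rough illustration of this kind of R-R interval analysis, pulse regularity can be sketched from detected R-peak times as below. The coefficient-of-variation criterion and its threshold are assumptions for the sketch, not values taken from the patent.

```python
# Hypothetical sketch of R-R interval analysis; the coefficient-of-variation
# threshold is an illustrative assumption, not a value from the patent.
def rr_intervals(peak_times):
    """Successive differences between R-peak times (seconds)."""
    return [b - a for a, b in zip(peak_times, peak_times[1:])]

def pulse_is_irregular(peak_times, cv_threshold=0.1):
    """Judge irregularity by the coefficient of variation of the R-R intervals."""
    ivals = rr_intervals(peak_times)
    mean = sum(ivals) / len(ivals)
    sd = (sum((x - mean) ** 2 for x in ivals) / len(ivals)) ** 0.5
    return sd / mean > cv_threshold
```

A perfectly even pulse yields a coefficient of variation of zero, while the irregular high-frequency region described above produces widely varying intervals and a large coefficient.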
  • the PPG sensor is a sensor that measures pulse waves, and measures the pulse by sensing the flow of a radial artery on the wrist.
  • Figure 4 schematically shows the PPG signal measured by the PPG sensor.
  • Like the R-R interval shown in FIG. 3, the inter-beat interval (IBI) shown in FIG. 4 may be judged to indicate a tension state when the interval is irregular.
  • the first analog front end includes a gain stage (Gain, 122a) and an analog-to-digital converter (Analog-Digital Convertor, 122b).
  • The first analog front end 122 is electrically connected to the first sensor unit 120, converts the signal detected by the first sensor unit 120 into a digital signal, and provides it to the processor (control unit 150).
  • The gain stage 122a is composed of a single-ended or differential amplifier, or an instrumentation amplifier, to adjust the output signal of the first sensor unit 120, which has a narrow output signal range.
  • The analog-to-digital converter 122b converts the adjusted output signal of the first sensor unit 120 into a digital signal.
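A minimal numeric model of such a front end is sketched below. The gain value, reference voltage, and bit depth are illustrative assumptions, not parameters specified by the patent.

```python
# Illustrative model of the analog front end described above: a gain stage
# followed by an N-bit ADC. Gain, reference voltage, and resolution are
# assumed values for the sketch, not specified by the patent.
def afe_sample(v_in, gain=100.0, v_ref=3.3, bits=12):
    v = min(max(v_in * gain, 0.0), v_ref)        # amplify, clamp to the rails
    code = round(v / v_ref * ((1 << bits) - 1))  # quantize to a 12-bit code
    return code
```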
  • The processor 150 may be a CPU, an application processor (AP), a microcontroller, a digital signal processor (DSP), or the like, and may also be a processing module that performs processing automatically through a machine learning model, but is not limited thereto.
  • AP application processor
  • DSP digital signal processor
  • the processor 150 may communicate with the display adapter 180 to display the operation and user interface of the self fundus imaging apparatus on a display device (not shown).
  • The auto fundus imaging apparatus may transmit and receive commands, including messages, data, information, and one or more programs (i.e., application code), through the network adapter 182.
  • the network adapter 182 may include separate or integrated antennas to enable transmission and reception over a network link.
  • the network adapter 182 may connect to a network and communicate with a remote computing device (not shown), such as a remote fundus image reading device.
  • the network may include, but is not limited to, at least one of a LAN, WLAN, PSTN, and cellular phone network.
  • the network adapter 182 may include at least one of a network interface and a mobile communication module for accessing the network.
  • the mobile communication module is connectable to a mobile communication network for each generation (for example, 2G to 5G mobile communication networks).
  • the memory 160 may store an operating system, a driver, an application program, data, and a database required for the operation of the self fundus imaging device according to an embodiment of the present invention, but is not limited thereto.
  • The memory 160 may include a computer-readable medium in the form of volatile memory such as Random Access Memory (RAM) and non-volatile memory such as Read Only Memory (ROM) and flash memory, and may further include a disk drive, for example a Hard Disk Drive (HDD) or a Solid State Drive (SSD).
  • the memory 160 may typically include data such as imaging data, program modules such as an operating system and imaging software that can be immediately connected to be operated by a processor.
  • the second sensor unit 130 may include at least one sensor capable of tracking pupil movement, eye blinking, eye vision, and the like, of an examinee who is using a self fundus imaging device.
  • The pupil image described herein means an image of the pupil state that can be obtained through the first sensor unit 120 or the second sensor unit 130, or a pupil image used for overall eye tracking.
  • the second sensor unit 130 may be any sensor technology including a motion sensor, an optical sensor, and an electronic sensor, but is not limited thereto.
  • An Electrooculogram (EOG) sensor may be used as the second sensor unit 130; the EOG sensor measures the eye potential (electrooculogram) of the examinee.
  • the second sensor unit 130 may include a left eye sensor and a right eye sensor.
  • Preferably there is at least one sensor for the left eye, and more preferably at least two; likewise, preferably at least one sensor for the right eye, and more preferably at least two.
  • When the examinee's pupil is dilated, the fundus image must be photographed so that it has sharpness and other qualities appropriate for a fundus image to be read.
  • the fundus image described in this specification means an image photographed to search for lesions of the eye, and may be a still image or a motion image.
  • As described above, whether the test subject's pulse is irregular is determined through the ECG or PPG sensor.
  • The device is configured to photograph the examinee's fundus image after confirming through the EOG sensor that the examinee's eye is not blinking; that is, the fundus image is captured at a moment when there is no blinking.
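The blink-gated capture described above can be sketched as follows. The spike threshold and window length are assumptions for illustration, and real EOG blink detection would be more involved:

```python
# Hypothetical sketch: blinks show up as large-amplitude spikes in an EOG
# trace, so wait for a run of spike-free samples before triggering capture.
# The threshold and window length are illustrative assumptions.
def blink_free_index(eog_samples, spike_threshold=300.0, window=50):
    run = 0
    for i, v in enumerate(eog_samples):
        run = run + 1 if abs(v) < spike_threshold else 0
        if run >= window:
            return i  # enough blink-free samples: safe to capture here
    return None       # no safe window found in this trace
```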
  • the camera unit 140 includes an image sensor (not shown) that captures an image of an object and converts the image into an image signal photoelectrically, and photographs a pupil image and a fundus image of the examinee.
  • the photographed pupil image is provided to a control unit (processor, 150) and processed based on an image processing or machine learning model.
  • the camera unit 140 photographs the pupil image of the examinee for the purpose of determining whether the pupil size of the examinee is suitable for capturing the fundus image before photographing the fundus image of the examinee.
  • a Fundus Camera may be used as the camera unit 140.
  • the controller 150 controls the camera unit 140 to photograph the fundus image of the examinee based on the measured value of the first sensor portion, the measured value of the second sensor portion, and the pupil image of the examinee.
  • The control unit 150 may control the camera unit 140 to photograph the subject's fundus image when the subject's pulse measured by the first sensor unit 120 is irregular and the pupil size in the pupil image photographed by the camera unit 140 is equal to or greater than a preset pupil size.
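Putting the conditions above together, the control unit's capture decision reduces to a sketch like this. The boolean inputs are assumed to come from the first sensor unit, the pupil-image analysis, and the second sensor unit respectively:

```python
# Hedged sketch of the capture decision described above: shoot only when the
# pulse is irregular (first sensor unit), the pupil is at least the preset
# size (pupil-image analysis), and no blink is detected (second sensor unit).
def should_capture_fundus(pulse_irregular, pupil_size_ok, blink_detected):
    return pulse_irregular and pupil_size_ok and not blink_detected
```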
  • Based on a machine learning model or a digital signal processing (DSP) algorithm, the control unit 150 may determine whether the pupil size in the pupil image photographed by the camera unit 140 is equal to or greater than the preset pupil size.
  • DSP digital signal processor
  • various pupil state information may be used in addition to the pupil size described above.
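A DSP-style version of the pupil-size check could be sketched as below. The thresholding approach, dark-pixel cutoff, and minimum diameter are assumptions for illustration, not the patent's exact algorithm:

```python
import numpy as np

# Illustrative DSP-style check (an assumed approach, not the patent's exact
# algorithm): estimate pupil size by counting the darkest pixels in a
# grayscale eye image and comparing the equivalent-circle diameter against
# a preset minimum. Thresholds are invented for the sketch.
def pupil_large_enough(gray, dark_threshold=60, min_diameter_px=50):
    dark = gray < dark_threshold             # pupil pixels are the darkest
    area = int(dark.sum())
    diameter = 2.0 * (area / np.pi) ** 0.5   # equivalent-circle diameter
    return diameter >= min_diameter_px
```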
  • Deep learning algorithms which are one of the machine learning algorithms, have various models such as deep neural networks (DNNs) and convolutional neural networks (CNNs).
  • DNNs deep neural networks
  • CNNs convolutional neural networks
  • Deep neural network is an artificial neural network (ANN) composed of several hidden layers between an input layer and an output layer. Deep neural networks (DNNs) can model complex non-linear relationships, just like a normal artificial neural network.
  • each object may be represented by a hierarchical configuration of basic elements of an image.
  • the additional layers can aggregate features of the gradually collected lower layers. This feature of the deep neural network allows modeling of complex data with fewer units (nodes) than a similarly performed artificial neural network.
  • Convolutional neural networks are a type of multilayer perceptrons designed to use minimal preprocessing.
  • The convolutional neural network (CNN) consists of one or several convolutional layers with common artificial neural network layers on top, and additionally uses tied weights and pooling layers. Thanks to this structure, a CNN can fully exploit input data with a two-dimensional structure. Compared with other deep learning structures, CNNs show good performance in both video and audio fields.
  • Convolutional neural networks (CNNs) can also be trained through standard backpropagation. CNNs are easier to train than other feedforward artificial neural network techniques and have the advantage of using fewer parameters.
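To make the convolution operation concrete, here is a minimal valid 2-D convolution in NumPy, the core operation a convolutional layer applies to an image patch. This illustrates the operation only; the patent does not specify a particular architecture.

```python
import numpy as np

# Minimal valid 2-D convolution (strictly, cross-correlation, as in most
# deep learning frameworks): the core operation of a convolutional layer.
# Illustrative only; not a model prescribed by the patent.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Stacking such layers with nonlinearities and pooling is what lets a CNN build the hierarchical image features described above.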
  • CDBN: Convolutional Deep Belief Network
  • A recurrent neural network (RNN) is a neural network in which the connections between the units form a directed cycle. RNNs can use memory inside the network to process arbitrary input sequences; owing to this property, they are used in fields such as handwriting recognition and achieve high recognition rates.
  • the control unit 150 may control the camera unit 140 to first capture one fundus image among the fundus image of the left eye and the fundus image of the right eye, and then photograph the remaining fundus image.
  • The control unit 150 may control the left-eye focusing image L and the right-eye focusing image R so that they are displayed on the display 110 separated from each other. This will be described in detail with reference to FIG. 5.
  • FIG. 5 is a view schematically illustrating a process of horizontally focusing an examinee according to an embodiment of the present invention.
  • A left focusing image L and a right focusing image R are displayed on the first area 112 and the second area 114, respectively.
  • the focus of the examinee is formed in the region S.
  • the fundus image of the left eye and the fundus image of the right eye can be simultaneously photographed.
  • The control unit 150 may control the camera unit 140 to photograph the fundus image of the left eye and the fundus image of the right eye simultaneously. Through this, clear fundus images can be obtained for both eyes; the fundus images of both eyes are illustrated in FIG. 6.
  • The control unit 150 determines the examinee's fundus image as the fundus image to be read for lesions.
  • The information of the preset fundus image may include the tissues basically seen in a fundus image (vessels, fovea, optic disc, macula) and the contrast or brightness of the fundus image.
  • the information listed above is only an example and is not necessarily limited thereto.
  • the controller 150 may apply a machine learning model to determine whether the fundus image of the examinee is within a predetermined range of fundus images.
  • The machine learning algorithm described above can determine with high accuracy whether the examinee's fundus image is within the preset fundus image information range.
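As a toy stand-in for such a check, a simple preset-range gate on brightness and contrast might look like the following. The thresholds are invented for illustration; the patent's actual criteria would come from the trained model and the tissue checks above.

```python
import numpy as np

# Hypothetical quality gate for a grayscale fundus image: accept it as a
# reading target only if mean brightness and contrast (standard deviation)
# fall in preset ranges. All thresholds are illustrative assumptions.
def within_preset_range(gray, brightness=(40.0, 220.0), min_contrast=20.0):
    mean, std = float(gray.mean()), float(gray.std())
    return brightness[0] <= mean <= brightness[1] and std >= min_contrast
```

An image failing this gate would trigger the re-capture behavior described next.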
  • the controller 150 may control the camera unit 140 to re-photograph the fundus image of the examinee.
  • the controller 150 may determine the photographed fundus image of the examinee as a target image of the lesion to be read.
  • FIG. 7 schematically illustrates a configuration of an autonomous fundus image photographing and reading device according to an embodiment of the present invention.
  • the self fundus image photographing and reading device 200 includes a display unit 210, a first sensor unit 220, a second sensor unit 230, a camera unit 240, and a computing unit C ).
  • the computing unit C may include a control unit 250, a memory 260, a storage 270, an input/output interface 284, and the like.
  • The controller 250 may include one or more processors.
  • Since the display unit 210, the first sensor unit 220, the second sensor unit 230, and the camera unit 240 are the same as those described with reference to FIGS. 1 to 6, their description will be omitted.
  • the control unit 250 performs the same function as the control unit 150 described above with reference to FIGS. 1 to 6 and additionally, reads the lesion from the fundus image determined as the fundus image to be read.
  • control unit 250 may apply a machine learning algorithm to read the lesion from the fundus image to support the diagnosis of the medical staff. Since the content of the machine learning algorithm is the same as described above, the description will be omitted.
  • FIG. 8 schematically illustrates a configuration of an auto fundus image photographing and reading device according to another embodiment of the present invention.
  • the self fundus image capturing and reading device 300 includes a first display unit 310, a first sensor unit 320, a second sensor unit 330, a camera unit 340, and a computing unit ( C) and a second display unit 390 may be included.
  • the computing unit C may include a control unit 350, a memory 360, a storage 370, an input/output interface 384, and the like.
  • the control unit 350 may include one or more processors.
  • the first display unit 310, the first sensor unit 320, the second sensor unit 330, the camera unit 340, and the control unit 350 are the same as those described with reference to FIG. 7. Therefore, the description thereof will be omitted.
  • the second display unit 390 may display the read result when the control unit 350 reads the lesion from the fundus image to be read by the subject by applying a machine learning algorithm.
  • unlike the first display unit 310, which is provided inside the self fundus image photographing and reading device 300, the second display unit 390 may be provided outside the device. In this way, an examinee who has finished taking a self fundus image can easily check the reading result of his or her fundus image.
  • FIG. 9 schematically shows a configuration diagram of an autonomous fundus imaging and reading system according to an embodiment of the present invention.
  • an auto fundus image capturing and reading system 1000 may include an auto fundus image capturing apparatus 1100 and a lesion reading apparatus 1200.
  • the self fundus image photographing apparatus 1100 includes a display unit 1110 displaying a focus image, a first sensor unit 1120 measuring the pupil state of the examinee, a second sensor unit 1130 measuring the blinking of the examinee's eyelids, a camera unit 1140 photographing at least one of a pupil image of the examinee and a fundus image of the examinee, and a control unit 1150.
  • the control unit 1150 controls the camera unit 1140 to photograph the fundus image of the examinee based on the measured value of the first sensor unit 1120, the measured value of the second sensor unit 1130, and the pupil image of the examinee, and if the photographed fundus image of the examinee is within a preset range of the fundus image, the control unit 1150 may determine the photographed fundus image as a fundus image to be read.
  • the lesion reading apparatus 1200 is connected to the self fundus imaging apparatus 1100 through a wired or wireless communication network.
  • the wired/wireless communication network includes, for example, Wireless LAN (WLAN), Wireless Fidelity (WiFi), WiFi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), World Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, and Near Field Communication (NFC).
  • the lesion reading apparatus 1200 may receive the lesion-reading target fundus image transmitted from the self fundus imaging apparatus 1100 and read a lesion from the lesion-reading target fundus image.
  • the reading result may be displayed on a display unit (not shown) provided in the lesion reading apparatus 1200.
  • the reading result may be transmitted to another remote device (not shown) or to the self fundus imaging apparatus 1100.
  • FIG. 10 schematically illustrates the appearance of an auto fundus imaging apparatus according to an embodiment of the present invention.
  • the self fundus imaging apparatus 400 may include a main body 410 and a handle 420 rotatably attached to the main body 410.
  • the handle part 420 may be provided with the first sensor part described above, but is not limited thereto.
  • the tension state of the testee, that is, the irregularity of the pulse, may be measured by the first sensor part.
  • FIG. 11 schematically illustrates a form in which an auto fundus imaging device according to an embodiment of the present invention is connected to another support device.
  • the self fundus imaging apparatus 600 is attached to a support device 700 having a folding arm 710.
  • the self fundus imaging device 600 attached to the support device 700 may be fixed to the position of the subject's eye, and then fundus imaging may be performed.
  • since the support device 700 is provided with a folding arm 710, the testee can place the self fundus imaging device 600 at the proper position relative to the eye.
  • a fundus image with high clarity may be captured when the examinee's pupil size is enlarged, without the assistance of medical staff.
  • whether the photographed fundus image can be used as a fundus image for lesion diagnosis is determined based on a machine learning algorithm, and reading of an ophthalmic disease from the lesion-diagnosis image can also be performed based on a machine learning algorithm.
  • the EOG sensor of the self fundus imaging device 600 tracks the pupil. Thereafter, the ECG sensor or the PPG sensor tracks the pupil image to check whether the pupil is dilated, and when the examinee's pupil gazes at the focus image displayed on the display unit, the controller (DSP) controls the focus image and determines whether to shoot in single mode or dual mode; in this state, the EOG sensor detects blinking of the eye.
  • then, the fundus image of the subject is photographed, and the captured fundus image is compared with a preset fundus image using a DSP algorithm, or evaluated through a machine learning model, to determine its goodness of fit (Good or Poor) as a fundus image for lesion diagnosis. If the image is determined to be Poor, the fundus image is re-photographed.
  • the method for photographing the fundus image using the self fundus image capturing apparatus described above is only an example, and the present invention is not limited thereto. As described above, the optimal conditions for fundus imaging may be satisfied simultaneously or sequentially.
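The capture-then-grade-then-retry loop described above can be sketched as a small helper. The function and callback names (`capture_until_good`, `capture_fn`, `grade_fn`) are hypothetical; the patent only specifies that a Poor-graded image triggers re-photographing.

```python
# Minimal sketch of the re-photograph loop, assuming a capture callback
# and a grading callback that returns "Good" or "Poor". Names and the
# attempt limit are illustrative assumptions.

def capture_until_good(capture_fn, grade_fn, max_attempts=3):
    """Photograph the fundus repeatedly until an image is graded Good,
    or give up after `max_attempts`. Returns (image, attempts_used);
    `image` is None if no Good image was obtained."""
    for attempt in range(1, max_attempts + 1):
        image = capture_fn()
        if grade_fn(image) == "Good":
            return image, attempt
    return None, max_attempts


# Demonstration: the first two shots are graded Poor, the third Good.
shots = iter(["shot_1", "shot_2", "shot_3"])
grades = {"shot_1": "Poor", "shot_2": "Poor", "shot_3": "Good"}
image, attempts = capture_until_good(lambda: next(shots), lambda im: grades[im])
```

In the device, `grade_fn` would correspond to the DSP comparison against the preset fundus image or to the machine learning quality model, either of which yields the Good/Poor decision.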

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Ophthalmology & Optometry (AREA)
  • Pathology (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Disclosed is an autonomous fundus image capture device, comprising: a display unit on which an image for a focal point is displayed; a first sensing unit that determines the state of the pupils of an examinee; a second sensing unit that measures the blinking of the examinee's eyes; a camera that captures an image of the examinee's pupils and/or an image of the examinee's fundus; and a control unit which, if, on the basis of the measurement value of the first sensing unit, the measurement value of the second sensing unit, and the image of the examinee's pupils, the fundus image of the examinee falls within a predefined range of fundus image information, determines the captured fundus image as a fundus image enabling the interpretation of a lesion.
PCT/KR2019/017258 2018-12-07 2019-12-09 Dispositif de capture d'image de fond d'oeil autonome, dispositif de capture et d'interprétation d'image de fond d'oeil autonome et système de capture et d'interprétation d'image de fond d'oeil autonome WO2020117021A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180156711A KR102184761B1 (ko) 2018-12-07 2018-12-07 자가 안저 영상 촬영 장치, 자가 안저 영상 촬영 및 판독 장치, 및 자가 안저 영상 촬영 및 판독 시스템
KR10-2018-0156711 2018-12-07

Publications (1)

Publication Number Publication Date
WO2020117021A1 true WO2020117021A1 (fr) 2020-06-11

Family

ID=70974951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/017258 WO2020117021A1 (fr) 2018-12-07 2019-12-09 Dispositif de capture d'image de fond d'oeil autonome, dispositif de capture et d'interprétation d'image de fond d'oeil autonome et système de capture et d'interprétation d'image de fond d'oeil autonome

Country Status (2)

Country Link
KR (1) KR102184761B1 (fr)
WO (1) WO2020117021A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0984764A (ja) * 1995-09-20 1997-03-31 Terumo Corp 眼底カメラの対物アダプタ、眼底カメラ装置、眼底カメラ用画像処理装置、および眼底カメラの制御方法
JP5397656B2 (ja) * 2007-11-08 2014-01-22 株式会社ニデック 眼底カメラ
JP2016140636A (ja) * 2015-02-04 2016-08-08 株式会社ニデック 眼科装置、眼科システム、および眼科撮影プログラム
WO2017057631A1 (fr) * 2015-10-01 2017-04-06 株式会社夏目綜合研究所 Appareil de détermination de l'émotion d'un spectateur, qui élimine l'influence de la luminosité, de la respiration et du pouls, système de détermination de l'émotion d'un spectateur, et programme
JP2018121886A (ja) * 2017-01-31 2018-08-09 株式会社ニデック 画像処理装置、および画像処理プログラム
KR20180095180A (ko) * 2017-02-17 2018-08-27 주식회사 씨엠랩 안과용 촬영장치

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101848322B1 (ko) 2017-10-27 2018-04-20 주식회사 뷰노 피검체에 대한 안저 영상의 소견 및 진단 정보의 생성을 위하여 판독을 지원하는 방법 및 이를 이용한 장치


Also Published As

Publication number Publication date
KR102184761B1 (ko) 2020-11-30
KR20200069547A (ko) 2020-06-17

Similar Documents

Publication Publication Date Title
US10004410B2 (en) System and methods for measuring physiological parameters
US20090216092A1 (en) System for analyzing eye responses to accurately detect deception
CN109431452B (zh) 无人眼健康筛查仪
KR101637314B1 (ko) 안구 촬영 장치 및 방법
KR102416878B1 (ko) 심박수 측정을 위한 헬스케어 장치
KR102435808B1 (ko) 스트레스 지수 측정을 위한 헬스케어 장치
CN112294282A (zh) 基于rppg的情绪检测装置的自标定方法
US10835120B2 (en) Extended medical test system
CN106725295A (zh) 一种微型体检设备、装置及其使用方法
US20210390692A1 (en) Detecting and tracking macular degeneration
WO2020117021A1 (fr) Dispositif de capture d'image de fond d'oeil autonome, dispositif de capture et d'interprétation d'image de fond d'oeil autonome et système de capture et d'interprétation d'image de fond d'oeil autonome
US10631727B2 (en) Method and system for detecting time domain cardiac parameters by using pupillary response
CN114795125A (zh) 基于多模态生理信号处理的中老年体检系统及装置
WO2022258560A1 (fr) Système d'aide pour fournir une information de diagnostic
CN116407096A (zh) 生命体征检测装置、系统及数据处理方法
EP3695775B1 (fr) Dispositif optique portatif basé sur un téléphone intelligent et procédé de capture d'images rétiniennes non mydriatiques
WO2021141186A1 (fr) Système et procédé d'apprentissage de réadaptation cognitive de types multiples
WO2023128454A1 (fr) Dispositif de soins de santé numérique permettant de mesurer la fréquence cardiaque à l'aide d'une photopléthysmographie à distance
WO2023128455A1 (fr) Dispositif de soins de santé numériques pour mesurer un indice de stress à l'aide d'un procédé de ppg à distance
TWI689895B (zh) 非穩定光源下之皮膚顏色變化監測系統及方法
WO2020209401A1 (fr) Appareil d'imagerie oculaire
CN218484554U (zh) 多功能融合检测装置
RU2531132C1 (ru) Способ определения скорости сложной зрительно-моторной реакции испытуемого и устройство для его осуществления
TWI839124B (zh) 光學斷層掃描自測系統、光學斷層掃描方法及眼部病變監控系統
EP4331477A1 (fr) Appareil et système de détection des signes vitaux et procédé de traitement des données

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892668

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19892668

Country of ref document: EP

Kind code of ref document: A1