WO2022157838A1 - Image processing method, program, image processing device, and ophthalmic system - Google Patents

Image processing method, program, image processing device, and ophthalmic system

Info

Publication number
WO2022157838A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
abnormal
partial
wide
processing method
Prior art date
Application number
PCT/JP2021/001731
Other languages
English (en)
Japanese (ja)
Inventor
泰士 田邉
真梨子 向井
媛テイ 吉
仁志 田淵
Original Assignee
株式会社ニコン
株式会社シンクアウト
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニコン, 株式会社シンクアウト filed Critical 株式会社ニコン
Priority to PCT/JP2021/001731 priority Critical patent/WO2022157838A1/fr
Priority to JP2022576259A priority patent/JPWO2022157838A1/ja
Publication of WO2022157838A1 publication Critical patent/WO2022157838A1/fr

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • the present invention relates to an image processing method, program, image processing apparatus, and ophthalmic system.
  • Conventionally, there has been known a method of displaying an enlarged image in response to a user's operation in order to observe details in a fundus image (see, for example, Patent Document 1).
  • the technology of the present disclosure provides a novel image processing method.
  • One embodiment of the present invention is an image processing method including: a process of acquiring a wide-angle fundus image obtained by photographing with an ophthalmologic apparatus; a process of extracting a plurality of partial images from the wide-angle fundus image to which an abnormal finding has been added; a selection process of selecting, from the plurality of partial images, a first partial image showing an abnormal region that is the basis of the abnormal finding; and a display process of displaying the first partial image.
  • FIG. 1 is an overall configuration diagram of an information processing system according to a first embodiment.
  • FIG. 2 is a diagram showing the hardware configuration of an information processing apparatus according to the first embodiment.
  • FIG. 3 is a diagram showing the configuration of an ophthalmologic apparatus according to the first embodiment.
  • FIG. 4 is a diagram showing the functional configuration of a server according to the first embodiment.
  • FIG. 5 is a diagram showing the configuration of a detection unit according to the first embodiment.
  • FIG. 10 is a diagram showing an example of partial area setting of a wide-angle fundus image and an example of a GUI using the generated partial images.
  • FIG. 11 is a diagram showing another example of partial area setting of a wide-angle fundus image and an example of a GUI using the generated partial images.
  • FIG. 12 is a diagram showing an example of the GUI.
  • FIG. 13 is a diagram showing an example of the GUI.
  • FIG. 14 is a diagram showing an example of the GUI.
  • FIG. 15 is a diagram showing the functional configuration of a server according to a second embodiment.
  • A flowchart shows the estimation processing according to the second embodiment.
  • FIG. 1 shows the configuration of an information processing system 1 according to one embodiment of the present invention.
  • the information processing system 1 includes a server 10 , a terminal 20 and an ophthalmologic apparatus 30 .
  • the server 10, the terminal 20, and the ophthalmologic apparatus 30 are connected via the network 5 so as to be able to transmit and receive data to each other.
  • the network 5 is a wireless or wired communication means, such as the Internet, WAN (Wide Area Network), LAN (Local Area Network), public communication network, dedicated line, and the like.
  • Although the information processing system 1 is composed of a plurality of information processing devices, the present invention does not limit the number of these devices. The information processing system 1 can therefore be configured with one or more devices as long as they provide the functions described below.
  • The server 10 and the terminal 20 are information processing devices installed in a medical institution such as a hospital or clinic, and are operated mainly by staff members of the medical institution to acquire, edit, and analyze images captured by the ophthalmologic apparatus 30.
  • the ophthalmic device 30 is a device that performs SLO (Scanning Laser Ophthalmoscope) and OCT (Optical Coherence Tomography) (Fig. 3).
  • the ophthalmologic apparatus 30 has a control device 31 and an imaging device 32 .
  • FIG. 2 shows an example of hardware (hereinafter referred to as "information processing apparatus 100") used for realizing the server 10, the terminal 20, and the control device 31 of the ophthalmologic apparatus 30.
  • the information processing apparatus 100 includes a processor 101 , a main memory device 102 , an auxiliary memory device 103 , an input device 104 , an output device 105 and a communication device 106 . These are communicably connected to each other via communication means such as a bus (not shown).
  • The information processing apparatus 100 does not necessarily have to be implemented entirely by physical hardware; all or part of it may be realized by virtual resources such as those provided by a cloud service.
  • the processor 101 is configured using a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and the like.
  • the functions of the server 10, the terminal 20, and the control device 31 are implemented by the processor 101 reading out and executing the programs stored in the main storage device 102.
  • the main memory device 102 is a device that stores programs and data, and includes ROM (Read Only Memory), RAM (Random Access Memory), nonvolatile semiconductor memory (NVRAM (Non Volatile RAM)), and the like.
  • the auxiliary storage device 103 is, for example, SSD (Solid State Drive), various non-volatile memories (NVRAM: Non-volatile memory) such as SD memory cards, hard disk drives, optical storage devices (CD (Compact Disc), DVD (Digital Versatile Disc), etc.), cloud server storage area, etc. Programs and data stored in the auxiliary storage device 103 are read into the main storage device 102 at any time.
  • the input device 104 is an interface that accepts input of information, and includes, for example, a keyboard, mouse, touch panel, card reader, voice input device (microphone, etc.), voice recognition device, and the like.
  • the information processing device 100 may be configured to receive input of information from another device via the communication device 106 .
  • The output device 105 is an interface that outputs various types of information, such as a display device, a speech synthesizer, and the like.
  • the information processing device 100 may be configured to output information to another device via the communication device 106 .
  • the output device 105 corresponds to the display section in the present invention.
  • The communication device 106 is a wired or wireless communication interface that realizes communication with other devices via the network 5, and includes, for example, a NIC (Network Interface Card), a wireless communication module, a USB (Universal Serial Bus) module, a serial communication module, and the like.
  • the configuration of the ophthalmologic apparatus 30 is shown in FIG.
  • the ophthalmologic apparatus 30 includes an imaging device 32 and a control device 31 .
  • the control device 31 may be provided in the same housing as the imaging device 32 or may be provided separately from the imaging device 32 .
  • the imaging device 32 operates under the control of the control device 31.
  • the imaging device 32 includes an SLO unit 33 , an imaging optical system 34 and an OCT unit 35 .
  • the imaging optical system 34 includes an optical scanner 341 and a wide-angle optical system 342 .
  • the photographing device 32 photographs an image of the subject's eye.
  • the imaging device 32 captures, for example, the fundus of the subject's eye, and obtains a fundus image and a tomographic image (OCT image), which will be described later.
  • The SLO unit 33 acquires an image of the fundus of the subject's eye.
  • The OCT unit 35 acquires a tomographic image of the subject's eye.
  • A front view image of the retina created based on the SLO data acquired by the SLO unit 33 is referred to as an SLO image.
  • A tomographic image or a front view image (en-face image) of the retina created based on the OCT data acquired by the OCT unit 35 is referred to as an OCT image.
  • the SLO image is sometimes referred to as a two-dimensional fundus image.
  • An OCT image may also be referred to as a fundus tomographic image, a posterior segment tomographic image, or an anterior segment tomographic image, depending on the imaging region of the subject's eye.
  • the image captured by the ophthalmologic apparatus 30 may also be referred to as the image of the eye to be examined.
  • When the image of the subject's eye is an image of the fundus captured using the wide-angle optical system described later, it may be referred to as a wide-angle fundus image P.
  • the optical scanner 341 scans the light emitted from the SLO unit 33 in the X direction and the Y direction.
  • the optical scanner 341 may be an optical element capable of deflecting a light beam, and for example, a polygon mirror, a galvanomirror, or the like can be used.
  • the wide-angle optical system 342 includes an objective optical system. Wide-angle optics 342 provide a wide-angle field of view at the fundus.
  • The SLO system is realized by the control device 31, the SLO unit 33, and the imaging optical system 34 shown in FIG. 3. Since the SLO system includes the wide-angle optical system 342, observation in a wide field of view (FOV) is realized at the fundus.
  • FOV indicates a range that can be photographed by the photographing device 32 .
  • FOV can be expressed as a viewing angle.
  • a viewing angle can be defined by an internal illumination angle and an external illumination angle in this embodiment.
  • The external illumination angle is an illumination angle defined with the pupil as the reference for the angle of the light flux with which the ophthalmologic apparatus 30 irradiates the subject's eye.
  • The internal illumination angle is an illumination angle defined with the center O of the eyeball as the reference for the angle of the light flux with which the fundus is irradiated.
  • the external illumination angle and the internal illumination angle are in correspondence.
  • an external illumination angle of 120 degrees corresponds to an internal illumination angle of approximately 160 degrees.
  • the internal illumination angle is 200 degrees.
  • an SLO fundus image obtained by photographing at an internal illumination angle of 160 degrees or more is referred to as a UWF-SLO fundus image.
  • the wide-angle optical system 342 may be a reflective optical system using a concave mirror such as an elliptical mirror, a refractive optical system using a wide-angle lens, or a catadioptric system combining concave mirrors and lenses.
  • With a wide-angle optical system using an elliptical mirror, a wide-angle lens, or the like, it is possible to photograph not only the central part of the fundus but also the peripheral part of the retina.
  • a configuration using a system using an elliptical mirror described in International Publication WO2016/103484 or International Publication WO2016/103489 may be used.
  • the disclosure of International Publication WO2016/103484 and the disclosure of International Publication WO2016/103489 are each incorporated herein by reference in its entirety.
  • The SLO unit 33 includes a blue light (B light) light source 331B, a green light (G light) light source 331G, a red light (R light) light source 331R, an infrared light (IR light, such as near-infrared light) light source 331IR, and an optical system 335 that reflects or transmits the light from these light sources and guides it to a single optical path.
  • the SLO unit 33 also includes a beam splitter 332 and detection elements 333B, 333G, 333R, and 333IR that detect B light, G light, R light, and IR light, respectively.
  • The SLO unit 33 can switch the light source, or the combination of light sources, that emits light, for example between a mode that emits B light, G light, and R light and a mode that emits IR light.
  • the beam splitter 332 has the function of splitting the reflected light from the fundus into B light, R light, G light, and IR light, and reflecting each light toward the detection elements 333B, 333G, 333R, and 333IR. .
  • the detection elements 333B, 333G, 333R, and 333IR can detect B light, R light, G light, and IR light, respectively.
  • the light incident on the imaging optical system 34 from the SLO unit 33 is scanned in the X direction and the Y direction by the optical scanner 341 .
  • the scanning light passes through the wide-angle optical system 342 and irradiates the fundus. Reflected light reflected by the fundus enters the SLO unit 33 via the wide-angle optical system 342 and the optical scanner 341 .
  • the reflected light reflected by the fundus enters the SLO unit 33 via the wide-angle optical system 342 and the optical scanner 341, and is decomposed into B light, R light, G light, and IR light by the beam splitter 332. These lights are detected by detection elements 333B, 333G, 333R, and 333IR, respectively.
  • the processor 101 of the control device 31 can generate an SLO fundus image.
  • the OCT system is composed of the control device 31, the OCT unit 35, and the imaging optical system 34 shown in FIG.
  • OCT unit 35 comprises light source 351 , sensor 352 , first optical coupler 353 , reference optics 354 , collimating lens 355 and second optical coupler 356 .
  • the light emitted from the light source 351 is split by the first optical coupler 353 .
  • One of the split lights is collimated by the collimating lens 355 and enters the imaging optical system 34 as measurement light.
  • Light incident on the imaging optical system 34 is scanned in the X and Y directions by an optical scanner 341 .
  • the scanning light passes through the wide-angle optical system 342 and irradiates the fundus.
  • the measurement light reflected by the fundus enters the OCT unit 35 via the wide-angle optical system 342 and enters the second optical coupler 356 via the collimating lens 355 and the first optical coupler 353 .
  • the other light emitted by the light source 351 and branched by the first optical coupler 353 enters the second optical coupler 356 via the reference optical system 354 as reference light.
  • the reference light and the measurement light reflected by the fundus are interfered by the second optical coupler 356 to generate interference light.
  • the interfering light is received by sensor 352 .
  • the control device 31 receives signals from the sensor 352 and generates a tomographic image. Imaging using an OCT system and an image obtained by the imaging may hereinafter be simply referred to as OCT imaging and OCT image, respectively.
  • FIG. 4 shows main functions (functional configuration) of the server 10 .
  • the server 10 has functions of a database 114 and a management unit 120 .
  • the management unit 120 particularly has the functions of the image processing unit 116 and the detection unit 118 .
  • the database 114 is stored in the main storage device 102 of the server 10 .
  • Each function of the management unit 120 is implemented by the processor 101 of the server 10 reading and executing a program stored in the main storage device 102 of the server 10 .
  • the server 10 also has functions such as an operating system, a file system, a device driver, and a DBMS (DataBase Management System).
  • the management unit 120 performs processing executed by the server 10, such as acquisition and management of images. Images acquired and managed by the management unit include images captured by the ophthalmologic apparatus 30 .
  • the image processing unit 116 mainly performs GUI generation and processing of images captured by the ophthalmologic apparatus 30 .
  • The detection unit 118 has a function of estimating, from an image captured by the ophthalmologic apparatus 30, the presence or absence of findings such as bleeding or retinal detachment (hereinafter referred to as "abnormal findings") in the fundus including the retina or choroid, and the details of those findings. The detection unit 118 estimates the presence or absence of an abnormal finding and its details based on the abnormal region in the image of the subject's eye. In this embodiment, the detection unit 118 is a trained model generated by machine learning.
  • the detection unit 118 is a model that performs deep learning to learn the image feature amount of the abnormal region in the image of the eye to be inspected.
  • the detection unit 118 constructs a neural network that outputs information indicating the result of estimating the presence or absence of an abnormal finding for the input image of the eye to be inspected.
  • the neural network is a deep neural network (DNN).
  • the detection unit 118 has an input layer that receives the input of the image of the subject's eye, an output layer that outputs the estimation result of the presence or absence of an abnormal finding, and an intermediate layer that extracts the image feature amount of the image of the subject's eye (Fig. 5).
  • Each of the input layer, the output layer, and the intermediate layer has nodes (indicated by white circles in the figure), and the nodes of these layers are connected by edges (indicated by arrows in the figure). Note that the configuration of the detection unit 118 shown in FIG. 5 is an example, and the number of nodes and edges, the number of intermediate layers, and the like can be changed as appropriate.
  • the intermediate layer has a convolution layer for convolving the pixel value of each pixel in the image of the eye to be inspected input from the input layer, and a pooling layer for mapping the pixel value. , and these layers are used to extract the feature amount of the image of the subject's eye.
  • the output layer has one or more neurons that output results of estimating abnormal findings of the input image of the subject's eye.
  • the detection unit 118 can also output the likelihood of the estimation result together with the estimation result.
  • The likelihood is, for example, a probability value output from the output layer of the detection unit 118, and indicates the reliability of the estimated abnormal finding as a value from "0" to "1". By notifying the user of the likelihood, the user can know how reliable the estimation result is.
  • the detection unit 118 also outputs the severity of the abnormal finding.
  • the severity can be explained as the seriousness of the symptoms, the grade of the symptoms, the speed of progression, the magnitude of the effects of the symptoms on the human body, and the like. For example, if the symptom or the like is bleeding, the detection unit 118 estimates the severity of the bleeding from the size, amount, and the like of the bleeding. Similarly, the detection unit 118 outputs the degree of severity for other abnormal findings such as retinal detachment and neovascularization.
  • In this embodiment, the detection unit 118 is described as being a CNN, but the detection unit 118 is not limited to a CNN, and may be a neural network other than a CNN, or a trained model constructed by another learning algorithm.
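  • As a rough illustration of a model of this kind (an input layer, convolution and pooling layers as the intermediate layers, and an output layer that yields a likelihood per finding type), a minimal PyTorch sketch is shown below. The class name, layer sizes, and the set of finding types are assumptions made for illustration only, not the configuration of the detection unit 118 itself.

```python
# Minimal sketch of a CNN-style finding estimator, assuming PyTorch.
# Layer sizes and the number of finding types are illustrative assumptions.
import torch
import torch.nn as nn

class FundusFindingNet(nn.Module):
    """Outputs a per-finding likelihood in [0, 1] for a fundus image."""
    def __init__(self, num_findings: int = 4):   # e.g. bleeding, detachment, ...
        super().__init__()
        self.features = nn.Sequential(            # intermediate layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),  # convolution layer
            nn.MaxPool2d(2),                                         # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(                 # output layer
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_findings), nn.Sigmoid(),               # likelihood 0..1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Usage: a batch of RGB fundus images scaled to [0, 1].
model = FundusFindingNet()
likelihoods = model(torch.rand(1, 3, 512, 512))    # shape (1, num_findings)
```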
  • The database 114 stores images such as wide-angle fundus images P and OCT images obtained by photographing fundus tissue including the retina and choroid. Annotations can be added to the image of the subject's eye by a medical professional or the like. The database 114 can store, in association with the image of the subject's eye and with images showing parts of that image, the locations marked in the image and the contents of the annotations. The saved data is used for learning and re-learning of the detection unit 118.
  • the annotation includes information indicating the abnormal region of the fundus and details of abnormal findings such as macular degeneration attached to the abnormal region.
  • Database 114 also stores patient medical records, such as electronic medical records, patient IDs, and other data, including historical records of images of the eye being examined.
  • The database 114 also stores information about findings obtained by the detection unit 118, which will be described later, in association with the image.
  • the deep learning of the detection unit 118 uses a dataset containing a large number of images containing abnormal areas such as bleeding and neovascularization, and annotations attached to these images. Deep learning of the detection unit 118 is performed by having the detection unit 118 learn or re-learn this data set.
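  • For orientation only, the re-learning step could look like the sketch below: a generic fine-tuning loop over annotated fundus images with a multi-label target per finding type. The dataset layout, loss, and hyperparameters are assumptions, not the training procedure actually used for the detection unit 118.

```python
# Hedged sketch of (re-)learning on annotated data; all details are assumptions.
import torch
import torch.nn as nn

def relearn(model: nn.Module, loader, epochs: int = 5, lr: float = 1e-4) -> nn.Module:
    """loader yields (images, labels): images of shape (B, 3, H, W) in [0, 1] and
    labels of shape (B, num_findings) with 1 where an annotation marks a finding."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()   # model is assumed to output likelihoods in [0, 1]
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)   # compare likelihoods to annotations
            loss.backward()
            optimizer.step()
    return model
```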
  • A program stored in the main storage device 102 of the server 10 is started, and the processing of the information processing system 1 is executed by the management unit 120 of the server 10 as follows.
  • Hereinafter, processing performed by the management unit 120 of the server 10 may be described simply as being performed by "the server 10".
  • The outline of the processing performed by the server 10 is roughly composed of four steps.
  • the server 10 acquires an image of the subject's eye obtained by photographing the retina with the ophthalmologic apparatus 30 (S1).
  • the wide-angle fundus image P is used as the image captured by the ophthalmologic apparatus 30 .
  • the wide-angle fundus image P is captured by the SLO system of the ophthalmologic apparatus 30 as described above.
  • the management unit 120 acquires the wide-angle fundus image P stored in the ophthalmologic apparatus 30 via the network 5 .
  • the management unit 120 estimates whether there is an abnormal finding in the acquired wide-angle fundus image P (S3). After that, the management unit 120 performs output processing of the image of the subject's eye in which the presence or absence of an abnormal finding is estimated (S5).
  • the management unit 120 performs additional processing (S7).
  • the additional processing includes filling in an electronic medical record, re-learning, etc., and is mainly executed according to instructions from the user.
  • In step S31, the management unit 120 inputs the wide-angle fundus image P to the detection unit 118.
  • the management unit 120 acquires an estimation result regarding the presence or absence of an abnormality in the detection unit 118 (S33).
  • the output of the detection unit 118 includes the type of abnormal finding and information specifying the abnormal region.
  • An abnormal region is a region in which there is a difference from a normal eye. Examples of abnormal regions captured in images of the subject's eye include bleeding points, neovascular regions, retinal detachment regions, non-perfused regions, etc. in the retina or choroid.
  • a finding based on an abnormal region is an abnormal finding, and examples of types of abnormal findings include the presence and degree of hemorrhage, neovascularization, retinal detachment, and non-perfused region in the retina or choroid.
  • FIG. 10 shows, as an example for the estimation of an abnormal finding, a wide-angle fundus image P that includes a macula M. As the estimation result, an image area estimated to include an abnormal area and an identification result identifying the type of abnormal finding are obtained.
  • The detection unit 118 also outputs the likelihood of the estimation result and the severity of the abnormal finding. The management unit 120 then generates abnormal finding information including the image area estimated to include an abnormal area, the identification result identifying the type of abnormal finding, the likelihood of the estimation result, the severity of the abnormal finding, and the like.
  • the wide-angle fundus image P is stored in association with the abnormal finding information.
  • the estimation process generates a wide-angle fundus image P associated with abnormal finding information.
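  • As a purely hypothetical sketch, the abnormal finding information associated with the wide-angle fundus image P could be organized as below; the field names and value ranges are assumptions for illustration, not a format defined by the embodiment.

```python
# Hypothetical sketch of "abnormal finding information"; names and types are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AbnormalFinding:
    region: Tuple[int, int, int, int]   # bounding box (x, y, width, height) in pixels
    finding_type: str                   # e.g. "bleeding", "retinal detachment"
    likelihood: float                   # reliability of the estimation, 0.0 to 1.0
    severity: int                       # e.g. graded 1 (mild) to 3 (severe)

@dataclass
class AnnotatedFundusImage:
    image_path: str                                     # wide-angle fundus image P
    findings: List[AbnormalFinding] = field(default_factory=list)

# Example: a bleeding point detected with high likelihood and moderate severity.
record = AnnotatedFundusImage(
    image_path="P_20210119.png",
    findings=[AbnormalFinding(region=(1200, 800, 64, 64),
                              finding_type="bleeding", likelihood=0.93, severity=2)],
)
```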
  • FIG. 10 shows an example of a wide-angle fundus image P to which an abnormal finding is added by the estimation process (S3).
  • bleeding is used as an example of an abnormal finding.
  • a bleeding point B is displayed in the wide-angle fundus image P as an abnormal area.
  • the management unit 120 uses the image processing unit 116 to extract a partial image indicating a partial area in the wide-angle fundus image P (S51).
  • the management unit 120 sets a partial area of the wide-angle fundus image P and generates a partial image.
  • A plurality of partial images are extracted, and the plurality of partial images may or may not overlap with each other. If an abnormal finding has been added, the regions of the partial images are determined so that the abnormal area is included in at least one of the plurality of partial images.
  • The regions of the partial images may be determined so that one partial image includes all of the abnormal regions, or they may be determined so that the abnormal regions are distributed over a plurality of partial images.
  • Frames F1, F2, F3, and F4 superimposed on the wide-angle fundus image P indicate four partial regions extracted from the wide-angle fundus image P.
  • In addition, the four partial images D1, D2, D3, and D4 generated by extracting each partial area of the wide-angle fundus image P are shown on the right side of each figure. Partial images D1, D2, D3, and D4 correspond to the areas indicated by frames F1, F2, F3, and F4, respectively.
  • the image processing unit 116 extracts partial images from the wide-angle fundus image P such that one of the partial areas includes the bleeding point B, which is an abnormal area.
  • the region of the frame F2 includes the bleeding point B, and the other regions (F1, F3, F4) do not include the bleeding point B.
  • In FIG. 10, the image processing unit 116 determines the size and arrangement of the regions so that the partial images D1, D2, D3, and D4 all include the entire macula M.
  • The image processing unit 116 also determines the size and arrangement of the regions so that the bleeding point B is included in one of them, in other words, so that the bleeding point B is displayed in one of the partial images.
  • the macula M is displayed in each of the partial images D1, D2, D3, and D4.
  • a bleeding point B is also displayed in the partial image D2.
  • In FIG. 11, the image processing unit 116 determines the division positions so that the areas corresponding to the frames F1 to F4, that is, the partial images D1 to D4, do not overlap the macula M at the center. The image processing unit 116 also adjusts the arrangement and size of the partial images so that the bleeding point B is displayed in one of the partial images. As a result, the bleeding point B is displayed in the partial image D1.
  • the method of setting partial images is determined according to a predetermined method.
  • the placement and size of each image may be determined so that the frames F1 to F4, that is, the partial images D1 to D4 all include the entire macula M (FIG. 10).
  • the division positions may be determined so that the frames F1 to F4, that is, the regions of the partial images D1 to D4 do not overlap with the macula M as the center (FIG. 11).
  • the image processing unit 116 extracts partial images from the wide-angle fundus image P as described above, and creates partial images D1, D2, D3, and D4.
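  • A simplified sketch of the partial-image extraction of step S51 follows, assuming a 2x2 grid of regions whose division lines are shifted so that the abnormal region falls entirely inside one region; the embodiment's actual placement rules (for example, keeping the macula M in every partial image) may differ.

```python
# Simplified sketch of step S51, assuming a 2x2 grid; not the embodiment's exact rule.
import numpy as np

def extract_partial_images(image: np.ndarray, abnormal_box=None):
    """Split the image into four partial images; if an abnormal region
    (x, y, w, h) is given, shift the grid so the region is not cut in half."""
    h, w = image.shape[:2]
    cx, cy = w // 2, h // 2                      # default division lines
    if abnormal_box is not None:
        x, y, bw, bh = abnormal_box
        # Move a division line off the abnormal region so one frame contains it fully.
        if x < cx < x + bw:
            cx = x if x > w - (x + bw) else x + bw
        if y < cy < y + bh:
            cy = y if y > h - (y + bh) else y + bh
    frames = [(0, 0, cx, cy), (cx, 0, w - cx, cy),
              (0, cy, cx, h - cy), (cx, cy, w - cx, h - cy)]
    partials = [image[fy:fy + fh, fx:fx + fw] for fx, fy, fw, fh in frames]
    return frames, partials

# Example: a bleeding point near the centre forces the frame boundaries to avoid it.
frames, partials = extract_partial_images(np.zeros((2000, 2000, 3), np.uint8),
                                           abnormal_box=(950, 400, 100, 80))
```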
  • the image processing unit 116 performs image processing for sharpening the abnormal area (S53).
  • the image processing unit 116 removes reflection of eyelashes and other artifacts from the partial images D1, D2, D3, D4 and the wide-angle fundus image P to make the images easy to view.
  • the image processing unit 116 further performs processing for setting the method of emphasizing the abnormal region (S55).
  • the emphasis method is appropriately selected according to the type, severity, and certainty of the abnormal findings. For example, when abnormal findings are recognized at multiple locations, a partial image including an abnormal region with a high degree of severity is set to be preferentially displayed in the subsequent display processing (S59). Further, the image processing unit 116 performs processing for setting a display method for displaying a partial image in which an abnormal region is displayed among the plurality of partial images in a manner to distinguish it from a partial image in which an abnormal region is not displayed. Specifically, the enlargement ratio and display order of the partial images in which the abnormal region is displayed are selected according to the type of abnormal finding or likelihood.
  • The image processing unit 116 selects an image such as a frame or an icon to be superimposed on the portion to be emphasized, according to the type or likelihood of the abnormal finding, and sets it to be displayed as highlighting in the subsequent display processing (S59). Note that these emphasis methods may also be set by accepting user input.
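  • Under simple assumed rules, the display-setting step could be sketched as below: partial images that contain abnormal regions are ordered by severity and likelihood and given a larger magnification. This is an illustration only; the actual selection logic of S55 may weigh the finding type and other factors differently.

```python
# Sketch of display-order and magnification selection (S55) under assumed rules.
def set_display_order(partial_ids, findings_per_partial):
    """findings_per_partial maps a partial-image id to a list of findings,
    each a dict with 'severity' and 'likelihood'. Returns (id, magnification)
    tuples in display order."""
    def priority(pid):
        fs = findings_per_partial.get(pid, [])
        # Higher severity first, then higher likelihood.
        return max(((f["severity"], f["likelihood"]) for f in fs), default=(0, 0.0))
    ordered = sorted(partial_ids, key=priority, reverse=True)
    return [(pid, 2.0 if findings_per_partial.get(pid) else 1.0) for pid in ordered]

# Example: D2 contains the bleeding point B, so it comes first and is enlarged.
print(set_display_order(["D1", "D2", "D3", "D4"],
                        {"D2": [{"severity": 2, "likelihood": 0.9}]}))
```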
  • the image processing unit 116 creates a GUI (Graphical User Interface) according to the partial image extraction method, abnormal region enhancement method, and highlighting selected up to step S55 (S57).
  • This GUI is transmitted to the terminal 20 via the network 5 and displayed on the output device 105 of the terminal 20 (S59).
  • the GUI receives instructions from the user who operates the terminal 20, changes the display according to the instructions, and further displays images such as icons and frames.
  • GUIs are shown in Figures 10 to 14.
  • a wide-angle fundus image P is arranged on the left side, and four partial images D1 to D4 are arranged on the right side thereof.
  • Frames F1, F2, F3, and F4 corresponding to the partial images D1, D2, D3, and D4 are superimposed on the wide-angle fundus image P and displayed.
  • By displaying the frames F1 to F4 it is easy to understand which region of the wide-angle fundus image P each of the partial images D1 to D4 indicates.
  • The partial image D2, which displays the bleeding point B that is the abnormal area, is preferentially displayed in the largest size.
  • FIG. 12 shows the GUI when the enhancement method (see S55) for further enlarging the partial image including the abnormal region is selected. Moreover, all of the partial images D1 to D4 are set so as to display the macula M (see S51). As shown, the partial image D2 in which the bleeding point B is displayed has a larger magnification than the other partial images D1, D3, and D4. Therefore, the user can easily confirm the bleeding point B.
  • the bleeding point B displayed in the partial image D2 is highlighted by a small frame S (see S55).
  • In response to a user's instruction, the image processing unit 116 further enlarges the image within the small frame S and displays it as an enlarged image L. Thereby, the user can easily confirm the bleeding point B.
  • the user's instruction is performed by, for example, clicking the image of the small frame S with a mouse or tapping on the display screen.
  • the magnification or angle of view of the enlarged image L is arbitrarily set.
  • the magnification is set to 5 times and the angle of view is set to 30 degrees, which is easy for the doctor to visually recognize.
  • the enlargement ratio or the angle of view preset by the doctor may be used.
  • the user can change the positions of the partial images D1 to D4 and change the range of enlargement by operating the positions and sizes of the frames F1 to F4.
  • icons IC1 to IC4 are highlighted. These highlights differ depending on the type of abnormal finding.
  • Icons IC1 to IC4 with different appearances are used for four types of areas: an area judged by the doctor to be abnormal and added to the display, an abnormal area output by the AI, an area judged to be abnormal by both the doctor and the AI, and an area judged by the doctor not to be abnormal and for which a change of display has been instructed on the GUI. In this way, the method of highlighting is differentiated according to the type of abnormal finding.
  • the icons IC1 to IC4 are displayed differently from each other by, for example, changing line types, shapes, colors, and the like.
  • the display order of the partial images D1 to D4 or the display mode of the enlarged image L is changed according to the severity.
  • the mode of highlighting will also change according to the severity. For example, brighter or more saturated color highlighting is used for more severe anomalous regions.
  • Any type of emphasis can be selected, regardless of the image division method or GUI display mode shown in FIGS. 10 to 14. It is also possible to combine multiple enhancement methods. For example, in the GUI display in which the partial image D2 is enlarged as shown in FIG. 12, the small frame S and the enlarged image L may be superimposed and displayed on the partial image D2.
  • the image processing unit 116 can cooperate with the ophthalmologic apparatus 30 to perform OCT imaging using the OCT system and acquire a tomographic image (S61).
  • An instruction for OCT imaging can also be accepted via the GUI in the display processing (S59).
  • the user can specify an abnormal region to be subjected to OCT imaging by operating a mouse click or the like.
  • the image processing unit 116 identifies the position of the fundus where the abnormal area is located, and instructs the ophthalmologic apparatus 30 via the network 5 to take an image. Based on this instruction, the ophthalmologic apparatus 30 uses the OCT system to perform OCT imaging on the abnormal region.
  • A tomographic image of the retina obtained by OCT imaging is acquired by the server 10 and displayed on the output device 105 of the terminal 20. At that time, it may be displayed on the GUI together with the wide-angle fundus image P or the partial images D1 to D4.
  • OCT imaging may be performed without user instructions.
  • the image processing unit 116 identifies the position of the fundus where the abnormal region exists, and instructs the ophthalmologic apparatus 30 to take an image via the network 5 . Based on this instruction, the ophthalmologic apparatus 30 uses the OCT system to perform OCT imaging on the abnormal region.
  • Imaging may also be performed according to the patient's information. For example, if a diagnosis of macular degeneration or the like is recorded in the electronic medical record stored in the database 114, the server 10 instructs the ophthalmologic apparatus 30 to perform OCT imaging of the vicinity of the macula M. Based on this instruction, the ophthalmologic apparatus 30 performs OCT imaging on the abnormal region.
  • the addition process will be explained using the flow of FIG. 9 .
  • the user can check the abnormal findings and their regions in the display process (S59), and then add annotations to the estimation results and save them (S71).
  • the saved annotations are saved in the database 114 and saved in association with the wide-angle fundus image P and the estimation result.
  • the saved annotations and wide-angle fundus image P are used for re-learning of the detection unit 118 (S73).
  • the detection unit 118 can improve the ability to estimate an abnormal finding by learning the wide-angle fundus image P together with annotations indicating the types of abnormal findings and correct abnormal regions.
  • the detection unit 118 uses a trained model with CNN.
  • an image processing algorithm may be used for estimating an abnormal finding instead of a trained model that has undergone machine learning.
  • a second embodiment will be described below.
  • FIG. 15 shows the software configuration of the server 10 according to the second embodiment.
  • the second embodiment includes a detection unit 119 configured by an image processing algorithm.
  • the configuration of the server 10, the terminal 20, and the ophthalmologic apparatus 30 is the same as in the first embodiment, except for the detection unit 119.
  • the same reference numerals as used in the first embodiment are assigned to the same configurations as in the first embodiment, and the description thereof is omitted.
  • the estimation process (S3) by the detection unit 119 in the second embodiment will be described below.
  • an algorithm for estimating the presence or absence of abnormal findings such as bleeding in the wide-angle fundus image P will be described.
  • In step S35, the detection unit 119 extracts a blood region.
  • the blood region is a region in the wide-angle fundus image P that includes a blood vessel region showing blood vessels and a bleeding region formed by bleeding from blood vessels.
  • an image obtained by extracting the blood region from the wide-angle fundus image P is obtained.
  • In step S36, the detection unit 119 extracts only the blood vessel region.
  • a general image analysis method is applied in the region extraction in steps S35 and S36.
  • the wide-angle fundus image P is displayed in 256 gradations or the like, and is converted into a binarized image by comparing the gradation value of each pixel with the threshold value.
  • a threshold value is set so as to distinguish between the blood region and other regions.
  • a threshold value is set so as to distinguish between the blood vessel region and other regions.
  • Various methods such as the mode method can be adopted as the method of setting the threshold.
  • In step S37, the detection unit 119 removes noise in the binarized image, particularly around the blood vessel region.
  • For the noise removal processing, for example, dilation processing (Dilation) and erosion processing (Erosion) are used.
  • Through step S37, an image in which the blood vessel region is extracted from the wide-angle fundus image P is obtained.
  • In step S38, the detection unit 119 obtains an image showing the bleeding area by taking the difference between the image showing the blood area and the image showing the blood vessel area.
  • the detection unit 119 estimates the presence or absence of bleeding based on the image acquired in this way (S39). When determining that there is bleeding, the detection unit 119 outputs an abnormal finding of bleeding and an image showing the bleeding region.
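  • A rough OpenCV sketch of this rule-based estimation (steps S35 to S39) is given below. The channel choice, the fixed threshold values (which could instead be set by the mode method mentioned above), the kernel size, and the final pixel-count criterion are all assumptions for illustration, not the parameters of the embodiment.

```python
# Hedged sketch of the second embodiment's bleeding estimation (S35-S39) using OpenCV.
# Thresholds, channel choice, kernel size, and the decision criterion are assumptions.
import cv2
import numpy as np

def estimate_bleeding(fundus_bgr: np.ndarray, blood_thresh: int = 120,
                      vessel_thresh: int = 80, min_bleed_pixels: int = 50):
    green = fundus_bgr[:, :, 1]          # blood appears dark in the green channel
    # S35: binarize to extract the blood region (vessels plus bleeding).
    _, blood = cv2.threshold(green, blood_thresh, 255, cv2.THRESH_BINARY_INV)
    # S36: a stricter threshold intended to keep mainly the darker blood vessel region.
    _, vessels = cv2.threshold(green, vessel_thresh, 255, cv2.THRESH_BINARY_INV)
    # S37: dilation and erosion to remove noise around the blood vessel region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    vessels = cv2.erode(cv2.dilate(vessels, kernel), kernel)
    # S38: the difference of the two masks leaves bleeding-region candidates.
    bleeding = cv2.subtract(blood, vessels)
    # S39: judge that bleeding is present if enough candidate pixels remain.
    return cv2.countNonZero(bleeding) > min_bleed_pixels, bleeding

# Usage: present, mask = estimate_bleeding(cv2.imread("P_20210119.png"))
```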
  • As described above, the image processing in the above embodiment is an image processing method including: a process (S1) of acquiring a wide-angle fundus image (P) obtained by photographing with an ophthalmologic apparatus; a process (S51) of extracting a plurality of partial images (D1 to D4) from the wide-angle fundus image to which an abnormal finding has been added; a selection process (S51) of selecting, from the plurality of partial images, a first partial image showing the abnormal region (B) that is the basis of the abnormal finding; and a display process (S59) of displaying the first partial image.
  • the partial areas where abnormal findings are observed are displayed on the wide-angle fundus image P so that they can be distinguished. Further, a partial image having an abnormal finding among the partial images D1 to D4 can be enlarged and displayed.
  • the above embodiment includes an estimation process (S3) for estimating an abnormal finding in the wide-angle fundus image P.
  • the estimation processing includes processing (S31) of inputting the wide-angle fundus image P to the detection units 118 and 119 for estimating abnormal findings, and processing for causing the detection units 118 and 119 to estimate abnormal findings.
  • the above embodiment includes a process of accepting addition of annotations related to abnormal findings for the wide-angle fundus image P (S71), and a process of re-learning the detection unit 118 with the annotations and the wide-angle fundus image P (S73).
  • Since the detection unit 118 re-learns using the annotated data, the accuracy of the estimation processing of the detection unit 118 can be improved. That is, it is possible to reduce the probability of outputting false positives or false negatives.
  • the severity of the abnormal finding is estimated. Also, the display method of the first area is changed according to the estimated severity.
  • The above-described embodiment further includes a process (S61) of specifying the position of the abnormal region on the fundus, and a process (S61) of acquiring a tomographic image at the specified position by performing OCT imaging of the specified position with the OCT system.
  • Although the partial area setting of the wide-angle fundus image P is performed using rectangular areas in the above embodiment, the present invention is not limited to this.
  • the division may be performed by extending the division lines radially, or by dividing into slits.
  • Annotations can also include the shape of partial area settings.
  • the partial area setting shape determined by the user to be optimal may be included in the annotation, and the detection unit 119 may learn and re-learn.
  • the optimum partial area shape and partial area position are output in accordance with the abnormal findings estimated by the detection unit 119 .
  • partial image extraction of the wide-angle fundus image P may be performed according to the partial image extraction method output by the detection unit 119 .
  • In the above embodiment, the display mode in the display processing (S59) and the like distinguished abnormal areas using images differing in color and shape, but the present invention is not limited to such a visual mode.
  • a mode of setting display variations by voice may be used.
  • Although the ophthalmologic apparatus 30 described above includes an SLO system and an OCT system, the present invention is not limited to this, and ophthalmologic apparatuses that do not have an OCT system or that use other systems may be used.
  • In the above embodiment, a plurality of terminals are connected to one server 10 to provide the functions described above.
  • the present invention does not limit the number of servers or the number of terminals, and for example, the functions described above may be realized by only one device. Also, the number of terminals and the number of servers may be further increased. Also, each function does not necessarily have to be implemented by the server 10 or the like, and may be shared by a plurality of devices to implement the function. That is, the present invention does not limit the number of controllers or devices, or the sharing of functions among devices.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

This image processing method includes: a process of acquiring a wide-angle fundus image obtained by imaging with an ophthalmologic apparatus; a process of extracting a plurality of partial images from the wide-angle fundus image to which an abnormal finding has been added; a selection process of selecting, from the plurality of partial images, a first partial image showing an abnormal region that is the basis of the abnormal finding; and a display process of displaying the first partial image.
PCT/JP2021/001731 2021-01-19 2021-01-19 Image processing method, program, image processing device, and ophthalmic system WO2022157838A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2021/001731 WO2022157838A1 (fr) 2021-01-19 2021-01-19 Image processing method, program, image processing device, and ophthalmic system
JP2022576259A JPWO2022157838A1 (fr) 2021-01-19 2021-01-19

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/001731 WO2022157838A1 (fr) 2021-01-19 2021-01-19 Image processing method, program, image processing device, and ophthalmic system

Publications (1)

Publication Number Publication Date
WO2022157838A1 true WO2022157838A1 (fr) 2022-07-28

Family

ID=82549624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/001731 WO2022157838A1 (fr) 2021-01-19 2021-01-19 Image processing method, program, image processing device, and ophthalmic system

Country Status (2)

Country Link
JP (1) JPWO2022157838A1 (fr)
WO (1) WO2022157838A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014110884A (ja) * 2012-10-30 2014-06-19 Canon Inc Image processing apparatus and image processing method
US8879813B1 (en) * 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
JP2018020024A (ja) * 2016-08-05 2018-02-08 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2019202229A (ja) * 2019-09-06 2019-11-28 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2020058647A (ja) * 2018-10-11 2020-04-16 株式会社ニコン Image processing method, image processing apparatus, and image processing program


Also Published As

Publication number Publication date
JPWO2022157838A1 (fr) 2022-07-28

Similar Documents

Publication Publication Date Title
JP7229881B2 (ja) Medical image processing apparatus, trained model, medical image processing method, and program
US20210104313A1 (en) Medical image processing apparatus, medical image processing method and computer-readable medium
JP2021154159A (ja) Machine-learning-guided imaging system
Abràmoff et al. Retinal imaging and image analysis
US20210390696A1 (en) Medical image processing apparatus, medical image processing method and computer-readable storage medium
Niemeijer et al. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis
JP7269413B2 (ja) Medical image processing apparatus, medical image processing system, medical image processing method, and program
JP7341874B2 (ja) Image processing apparatus, image processing method, and program
JP6996682B2 (ja) Detection of lesions in images of the eye
US11284791B2 (en) Image processing method, program, and image processing device
US11922601B2 (en) Medical image processing apparatus, medical image processing method and computer-readable medium
US11941788B2 (en) Image processing method, program, opthalmic device, and choroidal blood vessel image generation method
JP7258354B2 (ja) Method and system for detecting abnormalities in living tissue
JP2020166813A (ja) Medical image processing apparatus, medical image processing method, and program
JP7270686B2 (ja) Image processing system and image processing method
WO2020202680A1 (fr) Information processing device and method
JP2007097634A (ja) Image analysis system and image analysis program
WO2017020045A1 (fr) Systems and methods for malarial retinopathy screening
US20230320584A1 (en) Image processing method, image processing program, image processing device, image display device, and image display method
Majumdar et al. An automated graphical user interface based system for the extraction of retinal blood vessels using kirsch‘s template
WO2022157838A1 (fr) Image processing method, program, image processing device, and ophthalmic system
JP7494855B2 (ja) Image processing method, image processing apparatus, and image processing program
CN111954485A (zh) Image processing method, program, image processing device, and ophthalmic system
Raga A smartphone based application for early detection of diabetic retinopathy using normal eye extraction
WO2022208903A1 (fr) OCT device, OCT data processing method, program, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21920952

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022576259

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21920952

Country of ref document: EP

Kind code of ref document: A1