WO2022157838A1 - Image processing method, program, image processing device and ophthalmic system - Google Patents

Image processing method, program, image processing device and ophthalmic system Download PDF

Info

Publication number
WO2022157838A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
abnormal
partial
wide
processing method
Prior art date
Application number
PCT/JP2021/001731
Other languages
French (fr)
Japanese (ja)
Inventor
泰士 田邉
真梨子 向井
媛テイ 吉
仁志 田淵
Original Assignee
株式会社ニコン
株式会社シンクアウト
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニコン, 株式会社シンクアウト filed Critical 株式会社ニコン
Priority to PCT/JP2021/001731 priority Critical patent/WO2022157838A1/en
Priority to JP2022576259A priority patent/JPWO2022157838A1/ja
Publication of WO2022157838A1 publication Critical patent/WO2022157838A1/en

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 — Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 — Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • The present invention relates to an image processing method, a program, an image processing apparatus, and an ophthalmic system.
  • Conventionally, a method of displaying an enlarged image in response to a user's operation in order to observe details in a fundus image has been known (see, for example, Patent Document 1).
  • The technology of the present disclosure provides a novel image processing method.
  • One embodiment of the present invention is an image processing method including: processing for acquiring a wide-angle fundus image obtained by photographing with an ophthalmologic apparatus; processing for extracting a plurality of partial images from the wide-angle fundus image to which an abnormal finding has been added; a selection process for selecting, from the plurality of partial images, a first partial image showing the abnormal region on which the abnormal finding is based; and a display process for displaying the first partial image.
  • FIG. 1 is an overall configuration diagram of an information processing system according to a first embodiment.
  • FIG. 2 is a diagram showing the hardware configuration of an information processing apparatus according to the first embodiment.
  • FIG. 3 is a diagram showing the configuration of an ophthalmologic apparatus according to the first embodiment.
  • FIG. 4 is a diagram showing the functional configuration of a server according to the first embodiment.
  • FIG. 5 is a diagram showing the configuration of a detection unit according to the first embodiment.
  • FIG. 6 is a flowchart showing an overview of processing according to the first embodiment.
  • FIG. 7 is a flowchart showing estimation processing according to the first embodiment.
  • FIG. 8 is a flowchart showing output processing according to the first embodiment.
  • FIG. 9 is a flowchart showing addition processing according to the first embodiment.
  • FIG. 10 is a diagram showing an example of partial area setting of a wide-angle fundus image and an example of a GUI using the generated partial images.
  • FIG. 11 is a diagram showing an example of partial area setting of a wide-angle fundus image and an example of a GUI using the generated partial images.
  • FIG. 12 is a diagram showing an example of a GUI.
  • FIG. 13 is a diagram showing an example of a GUI.
  • FIG. 14 is a diagram showing an example of a GUI.
  • FIG. 15 is a diagram showing the functional configuration of a server according to a second embodiment.
  • FIG. 16 is a flowchart showing estimation processing according to the second embodiment.
  • FIG. 1 shows the configuration of an information processing system 1 according to one embodiment of the present invention.
  • the information processing system 1 includes a server 10 , a terminal 20 and an ophthalmologic apparatus 30 .
  • the server 10, the terminal 20, and the ophthalmologic apparatus 30 are connected via the network 5 so as to be able to transmit and receive data to each other.
  • the network 5 is a wireless or wired communication means, such as the Internet, WAN (Wide Area Network), LAN (Local Area Network), public communication network, dedicated line, and the like.
  • Although the information processing system 1 according to this embodiment is composed of a plurality of information management devices, the present invention does not limit the number of these devices. Therefore, the information processing system 1 can be configured with one or more devices as long as they provide the functions described below.
  • The server 10 and the terminal 20 are information processing devices installed in medical institutions such as hospitals and clinics, and are mainly operated by staff of the medical institutions to acquire images captured by the ophthalmologic apparatus 30 and to edit and analyze those images.
  • the ophthalmic device 30 is a device that performs SLO (Scanning Laser Ophthalmoscope) and OCT (Optical Coherence Tomography) (Fig. 3).
  • the ophthalmologic apparatus 30 has a control device 31 and an imaging device 32 .
  • FIG. 2 shows an example of hardware (hereinafter referred to as "information processing apparatus 100") used for realizing the server 10, the terminal 20, and the control device 31 of the ophthalmologic apparatus 30.
  • The information processing apparatus 100 includes a processor 101, a main storage device 102, an auxiliary storage device 103, an input device 104, an output device 105, and a communication device 106. These are communicably connected to each other via communication means such as a bus (not shown).
  • The information processing apparatus 100 does not necessarily have to be implemented entirely by hardware; all or part of its configuration may be realized by virtual resources such as a cloud server of a cloud system.
  • the processor 101 is configured using a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and the like.
  • the functions of the server 10, the terminal 20, and the control device 31 are implemented by the processor 101 reading out and executing the programs stored in the main storage device 102.
  • the main memory device 102 is a device that stores programs and data, and includes ROM (Read Only Memory), RAM (Random Access Memory), nonvolatile semiconductor memory (NVRAM (Non Volatile RAM)), and the like.
  • The auxiliary storage device 103 is, for example, an SSD (Solid State Drive), various non-volatile memories such as SD memory cards, a hard disk drive, an optical storage device (CD (Compact Disc), DVD (Digital Versatile Disc), etc.), a storage area of a cloud server, or the like. Programs and data stored in the auxiliary storage device 103 are read into the main storage device 102 as needed.
  • the input device 104 is an interface that accepts input of information, and includes, for example, a keyboard, mouse, touch panel, card reader, voice input device (microphone, etc.), voice recognition device, and the like.
  • the information processing device 100 may be configured to receive input of information from another device via the communication device 106 .
  • The output device 105 is an interface that outputs various types of information, for example, a screen display device (liquid crystal monitor, LCD (Liquid Crystal Display), graphics card, etc.), a printing device, an audio output device (speaker, etc.), a speech synthesizer, and the like.
  • the information processing device 100 may be configured to output information to another device via the communication device 106 .
  • the output device 105 corresponds to the display section in the present invention.
  • The communication device 106 is a wired or wireless communication interface that realizes communication with other devices via the network 5, and includes, for example, a NIC (Network Interface Card), a wireless communication module, a USB (Universal Serial Bus) module, a serial communication module, and the like.
  • the configuration of the ophthalmologic apparatus 30 is shown in FIG.
  • the ophthalmologic apparatus 30 includes an imaging device 32 and a control device 31 .
  • the control device 31 may be provided in the same housing as the imaging device 32 or may be provided separately from the imaging device 32 .
  • the imaging device 32 operates under the control of the control device 31.
  • the imaging device 32 includes an SLO unit 33 , an imaging optical system 34 and an OCT unit 35 .
  • the imaging optical system 34 includes an optical scanner 341 and a wide-angle optical system 342 .
  • the photographing device 32 photographs an image of the subject's eye.
  • the imaging device 32 captures, for example, the fundus of the subject's eye, and obtains a fundus image and a tomographic image (OCT image), which will be described later.
  • the SLO unit 18 acquires an image of the fundus 12A of the eye 12 to be examined.
  • the OCT unit 20 acquires a tomographic image of the eye 12 to be examined.
  • Hereinafter, a front-view image of the retina created based on the SLO data acquired by the SLO unit 18 is referred to as an SLO image, and a tomographic image or a front-view image (en-face image) of the retina created based on the OCT data acquired by the OCT unit 20 is referred to as an OCT image.
  • the SLO image is sometimes referred to as a two-dimensional fundus image.
  • An OCT image may also be referred to as a fundus tomographic image, a posterior segment tomographic image, or an anterior segment tomographic image, depending on the imaging region of the subject's eye 12 .
  • the image captured by the ophthalmologic apparatus 30 may also be referred to as the image of the eye to be examined.
  • When the image of the subject's eye is an image of the fundus captured using a wide-angle optical system as described later, it may also be referred to as a wide-angle fundus image P.
  • the optical scanner 341 scans the light emitted from the SLO unit 33 in the X direction and the Y direction.
  • the optical scanner 341 may be an optical element capable of deflecting a light beam, and for example, a polygon mirror, a galvanomirror, or the like can be used.
  • the wide-angle optical system 342 includes an objective optical system. Wide-angle optics 342 provide a wide-angle field of view at the fundus.
  • The SLO system is realized by the control device 31, the SLO unit 33, and the imaging optical system 34 shown in FIG. 3. Since the SLO system includes the wide-angle optical system 342, observation in a wide field of view (FOV) is realized at the fundus.
  • FOV indicates a range that can be photographed by the photographing device 32 .
  • FOV can be expressed as a viewing angle.
  • a viewing angle can be defined by an internal illumination angle and an external illumination angle in this embodiment.
  • the external irradiation angle is an irradiation angle defined by using the pupil as a reference for the irradiation angle of the light flux irradiated from the UWF ophthalmic apparatus 110 to the eye to be examined.
  • the internal illumination angle is an illumination angle defined with the center O of the eyeball as a reference for the illumination angle of the luminous flux with which the fundus 12A is illuminated.
  • the external illumination angle and the internal illumination angle are in correspondence.
  • an external illumination angle of 120 degrees corresponds to an internal illumination angle of approximately 160 degrees.
  • the internal illumination angle is 200 degrees.
  • an SLO fundus image obtained by photographing at an internal illumination angle of 160 degrees or more is referred to as a UWF-SLO fundus image.
  • the wide-angle optical system 342 may be a reflective optical system using a concave mirror such as an elliptical mirror, a refractive optical system using a wide-angle lens, or a catadioptric system combining concave mirrors and lenses.
  • By using a wide-angle optical system employing an elliptical mirror, a wide-angle lens, or the like, it is possible to photograph not only the central part of the fundus but also the peripheral part of the retina.
  • When using a system including an elliptical mirror, a configuration using the system with an elliptical mirror described in International Publication WO2016/103484 or International Publication WO2016/103489 may be used.
  • the disclosure of International Publication WO2016/103484 and the disclosure of International Publication WO2016/103489 are each incorporated herein by reference in its entirety.
  • The SLO unit 33 includes a light source 331B for blue light (B light), a light source 331G for green light (G light), a light source 331R for red light (R light), a light source 331IR for infrared light (IR light) such as near-infrared light, and an optical system 335 that reflects or transmits the light from these light sources and guides it into one optical path.
  • the SLO unit 33 also includes a beam splitter 332 and detection elements 333B, 333G, 333R, and 333IR that detect B light, G light, R light, and IR light, respectively.
  • The SLO unit 33 can switch which light source, or combination of light sources, is turned on, for example between a mode that emits B light, R light, and G light and a mode that emits IR light.
  • the beam splitter 332 has the function of splitting the reflected light from the fundus into B light, R light, G light, and IR light, and reflecting each light toward the detection elements 333B, 333G, 333R, and 333IR. .
  • the detection elements 333B, 333G, 333R, and 333IR can detect B light, R light, G light, and IR light, respectively.
  • the light incident on the imaging optical system 34 from the SLO unit 33 is scanned in the X direction and the Y direction by the optical scanner 341 .
  • the scanning light passes through the wide-angle optical system 342 and irradiates the fundus. Reflected light reflected by the fundus enters the SLO unit 33 via the wide-angle optical system 342 and the optical scanner 341 .
  • The reflected light that has entered the SLO unit 33 is decomposed into B light, R light, G light, and IR light by the beam splitter 332, and these lights are detected by the detection elements 333B, 333G, 333R, and 333IR, respectively.
  • the processor 101 of the control device 31 can generate an SLO fundus image.
  • the OCT system is composed of the control device 31, the OCT unit 35, and the imaging optical system 34 shown in FIG.
  • OCT unit 35 comprises light source 351 , sensor 352 , first optical coupler 353 , reference optics 354 , collimating lens 355 and second optical coupler 356 .
  • the light emitted from the light source 351 is split by the first optical coupler 353 .
  • One of the split lights is collimated by the collimating lens 355 and enters the imaging optical system 34 as measurement light.
  • Light incident on the imaging optical system 34 is scanned in the X and Y directions by an optical scanner 341 .
  • the scanning light passes through the wide-angle optical system 342 and irradiates the fundus.
  • the measurement light reflected by the fundus enters the OCT unit 35 via the wide-angle optical system 342 and enters the second optical coupler 356 via the collimating lens 355 and the first optical coupler 353 .
  • the other light emitted by the light source 351 and branched by the first optical coupler 353 enters the second optical coupler 356 via the reference optical system 354 as reference light.
  • the reference light and the measurement light reflected by the fundus are interfered by the second optical coupler 356 to generate interference light.
  • the interfering light is received by sensor 352 .
  • the control device 31 receives signals from the sensor 352 and generates a tomographic image. Imaging using an OCT system and an image obtained by the imaging may hereinafter be simply referred to as OCT imaging and OCT image, respectively.
  • FIG. 4 shows main functions (functional configuration) of the server 10 .
  • the server 10 has functions of a database 114 and a management unit 120 .
  • the management unit 120 particularly has the functions of the image processing unit 116 and the detection unit 118 .
  • the database 114 is stored in the main storage device 102 of the server 10 .
  • Each function of the management unit 120 is implemented by the processor 101 of the server 10 reading and executing a program stored in the main storage device 102 of the server 10 .
  • the server 10 also has functions such as an operating system, a file system, a device driver, and a DBMS (DataBase Management System).
  • the management unit 120 performs processing executed by the server 10, such as acquisition and management of images. Images acquired and managed by the management unit include images captured by the ophthalmologic apparatus 30 .
  • the image processing unit 116 mainly performs GUI generation and processing of images captured by the ophthalmologic apparatus 30 .
  • The detection unit 118 has a function of estimating, from an image captured by the ophthalmologic apparatus 30, the presence or absence of abnormal findings such as bleeding or retinal detachment (hereinafter referred to as "abnormal findings") in the fundus, including the retina and choroid, as well as the details of those findings. The detection unit 118 estimates the presence or absence of an abnormal finding and its details based on the abnormal region in the image of the subject's eye. In this embodiment, the detection unit 118 is a trained model generated by machine learning.
  • the detection unit 118 is a model that performs deep learning to learn the image feature amount of the abnormal region in the image of the eye to be inspected.
  • the detection unit 118 constructs a neural network that outputs information indicating the result of estimating the presence or absence of an abnormal finding for the input image of the eye to be inspected.
  • the neural network is a deep neural network (DNN).
  • the detection unit 118 has an input layer that receives the input of the image of the subject's eye, an output layer that outputs the estimation result of the presence or absence of an abnormal finding, and an intermediate layer that extracts the image feature amount of the image of the subject's eye (Fig. 5).
  • Each of the input layer, the output layer, and the intermediate layer has nodes (indicated by white circles in the figure), and the nodes of these layers are connected by edges (indicated by arrows in the figure). Note that the configuration of the detection unit 118 shown in FIG. 5 is an example, and the number of nodes and edges, the number of intermediate layers, and the like can be changed as appropriate.
  • The intermediate layer has a convolution layer that convolves the pixel values of each pixel in the image of the eye to be inspected input from the input layer, and a pooling layer that maps the pixel values; these layers are used to extract the feature amount of the image of the subject's eye.
  • the output layer has one or more neurons that output results of estimating abnormal findings of the input image of the subject's eye.
  • the detection unit 118 can also output the likelihood of the estimation result together with the estimation result.
  • The likelihood is, for example, a probability value output from the output layer of the detection unit 118; for example, the degree of reliability of the estimated abnormal finding is indicated by a value from "0" to "1". By notifying the user of the likelihood, the user can know how accurate the estimation result is.
  • the detection unit 118 also outputs the severity of the abnormal finding.
  • the severity can be explained as the seriousness of the symptoms, the grade of the symptoms, the speed of progression, the magnitude of the effects of the symptoms on the human body, and the like. For example, if the symptom or the like is bleeding, the detection unit 118 estimates the severity of the bleeding from the size, amount, and the like of the bleeding. Similarly, the detection unit 118 outputs the degree of severity for other abnormal findings such as retinal detachment and neovascularization.
  • In the present embodiment, the detection unit 118 is described as being a CNN, but the detection unit 118 is not limited to a CNN and may be a neural network other than a CNN or a trained model constructed by another learning algorithm.
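  • For reference, a minimal sketch of such a CNN-based detection unit is shown below. It assumes a PyTorch implementation; the architecture, class names, and the set of abnormal-finding classes are illustrative assumptions and are not specified by the present disclosure.

```python
import torch
import torch.nn as nn

class DetectionUnit(nn.Module):
    """Toy CNN: fundus image in, per-finding likelihood (0 to 1) out."""
    def __init__(self, num_findings: int = 4):
        super().__init__()
        # Intermediate layers: convolution to extract image features, pooling to map them.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Output layer: one neuron per abnormal-finding type (e.g. bleeding, retinal detachment).
        self.classifier = nn.Linear(32 * 8 * 8, num_findings)

    def forward(self, fundus_image: torch.Tensor) -> torch.Tensor:
        x = self.features(fundus_image)
        x = torch.flatten(x, 1)
        # Sigmoid output can be read as the likelihood (reliability) of each finding.
        return torch.sigmoid(self.classifier(x))

# Example: likelihoods = DetectionUnit()(torch.rand(1, 3, 512, 512))  # shape (1, num_findings)
```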
  • The database 114 stores images, such as wide-angle fundus images P and OCT images, obtained by photographing fundus tissue including the retina and choroid. Annotations can also be added to an image of the eye to be examined by a medical professional or the like. The database 114 can store, in association with the image of the eye to be examined and an image showing a part of it, the annotated locations in the image together with the contents of the annotations. The saved data is used for learning and re-learning of the detection unit 118.
  • the annotation includes information indicating the abnormal region of the fundus and details of abnormal findings such as macular degeneration attached to the abnormal region.
  • Database 114 also stores patient medical records, such as electronic medical records, patient IDs, and other data, including historical records of images of the eye being examined.
  • The database 114 also stores information about findings obtained by the detection unit 118, which will be described later, in association with the image.
  • the deep learning of the detection unit 118 uses a dataset containing a large number of images containing abnormal areas such as bleeding and neovascularization, and annotations attached to these images. Deep learning of the detection unit 118 is performed by having the detection unit 118 learn or re-learn this data set.
  • A program stored in the main storage device 102 of the server 10 is started, and the processing of the information processing system 1 is executed by the management unit 120 of the server 10 as follows.
  • Hereinafter, processing performed by the management unit 120 of the server 10 may be described simply as being performed by "the server 10".
  • As shown in FIG. 6, the processing performed by the server 10 is roughly composed of four steps.
  • the server 10 acquires an image of the subject's eye obtained by photographing the retina with the ophthalmologic apparatus 30 (S1).
  • the wide-angle fundus image P is used as the image captured by the ophthalmologic apparatus 30 .
  • the wide-angle fundus image P is captured by the SLO system of the ophthalmologic apparatus 30 as described above.
  • the management unit 120 acquires the wide-angle fundus image P stored in the ophthalmologic apparatus 30 via the network 5 .
  • the management unit 120 estimates whether there is an abnormal finding in the acquired wide-angle fundus image P (S3). After that, the management unit 120 performs output processing of the image of the subject's eye in which the presence or absence of an abnormal finding is estimated (S5).
  • the management unit 120 performs additional processing (S7).
  • the additional processing includes filling in an electronic medical record, re-learning, etc., and is mainly executed according to instructions from the user.
  • In step S31, the management unit 120 inputs the wide-angle fundus image P to the detection unit 118.
  • the management unit 120 acquires an estimation result regarding the presence or absence of an abnormality in the detection unit 118 (S33).
  • the output of the detection unit 118 includes the type of abnormal finding and information specifying the abnormal region.
  • An abnormal region is a region in which there is a difference from a normal eye. Examples of abnormal regions captured in images of the subject's eye include bleeding points, neovascular regions, retinal detachment regions, non-perfused regions, etc. in the retina or choroid.
  • a finding based on an abnormal region is an abnormal finding, and examples of types of abnormal findings include the presence and degree of hemorrhage, neovascularization, retinal detachment, and non-perfused region in the retina or choroid.
  • FIG. 10 shows a wide-angle fundus image P including a macula M, with a bleeding point shown as an example of an abnormal finding. As the estimation result, an image area estimated to include an abnormal area and an identification result identifying the type of abnormal finding are obtained.
  • The detection unit 118 also outputs the likelihood of the estimation result and the severity of the abnormal finding. The management unit 120 then generates abnormal finding information including the image area estimated to include an abnormal area, the identification result identifying the type of abnormal finding, the likelihood of the estimation result, the severity of the abnormal finding, and the like.
  • the wide-angle fundus image P is stored in association with the abnormal finding information.
  • the estimation process generates a wide-angle fundus image P associated with abnormal finding information.
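  • As an illustration only, the abnormal finding information associated with the wide-angle fundus image P could be represented by a record like the following; the field names and types are hypothetical and are not taken from the present disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AbnormalFinding:
    finding_type: str                  # e.g. "bleeding", "retinal detachment", "neovascularization"
    region: Tuple[int, int, int, int]  # image area estimated to include the abnormal area (x, y, width, height)
    likelihood: float                  # reliability of the estimation result, from 0.0 to 1.0
    severity: int                      # estimated severity grade of the abnormal finding

@dataclass
class EstimationResult:
    image_id: str                               # identifier of the wide-angle fundus image P
    findings: Tuple[AbnormalFinding, ...] = ()  # empty when no abnormal finding is estimated
```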
  • FIG. 10 shows an example of a wide-angle fundus image P to which an abnormal finding is added by the estimation process (S3).
  • bleeding is used as an example of an abnormal finding.
  • a bleeding point B is displayed in the wide-angle fundus image P as an abnormal area.
  • The management unit 120 uses the image processing unit 116 to extract partial images, each indicating a partial area in the wide-angle fundus image P (S51).
  • Specifically, the management unit 120 sets partial areas of the wide-angle fundus image P and generates the partial images.
  • A plurality of partial images are extracted, and the plurality of partial images may or may not overlap with each other. If an abnormal finding has been given, the areas of the partial images are determined such that the abnormal area is included in at least one of the plurality of partial images.
  • The regions of the partial images may be determined so that one partial image includes all of the abnormal regions, or they may be determined so that the abnormal regions are included across a plurality of partial images.
  • In FIGS. 10 and 11, frames F1, F2, F3, and F4 superimposed on the wide-angle fundus image P indicate four partial regions extracted from the wide-angle fundus image P. In addition, four partial images D1, D2, D3, and D4, generated by extracting the respective partial areas of the wide-angle fundus image P, are shown on the right side of each figure. The partial images D1, D2, D3, and D4 correspond to the areas indicated by the frames F1, F2, F3, and F4, respectively.
  • the image processing unit 116 extracts partial images from the wide-angle fundus image P such that one of the partial areas includes the bleeding point B, which is an abnormal area.
  • the region of the frame F2 includes the bleeding point B, and the other regions (F1, F3, F4) do not include the bleeding point B.
  • In the example of FIG. 10, the image processing unit 116 determines the size and arrangement of the regions so that the partial images D1, D2, D3, and D4 all include the entire macula M.
  • In addition, the image processing unit 116 determines the size and arrangement of the regions so that the bleeding point B is included in one of the regions, in other words, so that the bleeding point B is displayed in one of the partial images.
  • the macula M is displayed in each of the partial images D1, D2, D3, and D4.
  • a bleeding point B is also displayed in the partial image D2.
  • In the example of FIG. 11, the image processing unit 116 determines the division positions so that the areas corresponding to the frames F1 to F4, that is, the partial images D1 to D4, do not overlap, with the macula M at the center. The image processing unit 116 also adjusts the arrangement and size of the partial images so that the bleeding point B is displayed in one of the partial images. As a result, the bleeding point B is displayed in the partial image D1.
  • The partial images are set according to a predetermined method.
  • the placement and size of each image may be determined so that the frames F1 to F4, that is, the partial images D1 to D4 all include the entire macula M (FIG. 10).
  • the division positions may be determined so that the frames F1 to F4, that is, the regions of the partial images D1 to D4 do not overlap with the macula M as the center (FIG. 11).
  • the image processing unit 116 extracts partial images from the wide-angle fundus image P as described above, and creates partial images D1, D2, D3, and D4.
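  • The following is a hedged sketch of one possible way to extract four partial images so that each includes the macula, as in the example of FIG. 10; the coordinates, sizes, and overlap margins are assumptions for illustration and are not the patented extraction method itself.

```python
import numpy as np

def extract_partial_images(fundus: np.ndarray, macula_xy, size=(512, 512)):
    """Extract four overlapping partial images whose areas all contain the macula."""
    h, w = fundus.shape[:2]
    ph, pw = size
    mx, my = macula_xy
    # Four frames F1-F4 whose inner corners meet near the macula M, so that each
    # partial image D1-D4 includes the macula (margin of 1/8 of the frame size).
    offsets = [(-pw + pw // 8, -ph + ph // 8), (-pw // 8, -ph + ph // 8),
               (-pw + pw // 8, -ph // 8), (-pw // 8, -ph // 8)]
    frames, partials = [], []
    for dx, dy in offsets:
        x0 = int(np.clip(mx + dx, 0, w - pw))
        y0 = int(np.clip(my + dy, 0, h - ph))
        frames.append((x0, y0, pw, ph))
        partials.append(fundus[y0:y0 + ph, x0:x0 + pw].copy())
    return frames, partials

def frame_containing(frames, point_xy):
    """Return the index of the first partial area containing a point such as bleeding point B."""
    px, py = point_xy
    for i, (x0, y0, pw, ph) in enumerate(frames):
        if x0 <= px < x0 + pw and y0 <= py < y0 + ph:
            return i
    return None
```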
  • Next, the image processing unit 116 performs image processing for sharpening the abnormal area (S53).
  • For example, the image processing unit 116 removes reflections of eyelashes and other artifacts from the partial images D1, D2, D3, and D4 and the wide-angle fundus image P to make the images easy to view.
  • the image processing unit 116 further performs processing for setting the method of emphasizing the abnormal region (S55).
  • the emphasis method is appropriately selected according to the type, severity, and certainty of the abnormal findings. For example, when abnormal findings are recognized at multiple locations, a partial image including an abnormal region with a high degree of severity is set to be preferentially displayed in the subsequent display processing (S59). Further, the image processing unit 116 performs processing for setting a display method for displaying a partial image in which an abnormal region is displayed among the plurality of partial images in a manner to distinguish it from a partial image in which an abnormal region is not displayed. Specifically, the enlargement ratio and display order of the partial images in which the abnormal region is displayed are selected according to the type of abnormal finding or likelihood.
  • The image processing unit 116 also selects an image, such as a frame or an icon, to be superimposed on the portion to be emphasized according to the type or likelihood of the abnormal finding, and sets it to be shown as a highlight in the subsequent display processing (S59). Note that these emphasis methods may be set by accepting user input.
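  • As one hedged illustration of how the display priority set in step S55 could be expressed, the sketch below orders the partial images so that those containing more severe (or more likely) abnormal findings are displayed first and at a larger size. It reuses the AbnormalFinding record sketched earlier; the weighting and the 2x scale factor are assumptions, not values taken from the present disclosure.

```python
def order_partial_images(partials, findings_per_partial):
    """partials: list of partial images; findings_per_partial: list of lists of AbnormalFinding."""
    def priority(idx):
        findings = findings_per_partial[idx]
        if not findings:
            return (0, 0.0)  # partial images without abnormal regions come last
        return (max(f.severity for f in findings),
                max(f.likelihood for f in findings))

    order = sorted(range(len(partials)), key=priority, reverse=True)
    # The highest-priority partial image is displayed at a larger magnification (e.g. 2x).
    scales = [2.0 if rank == 0 else 1.0 for rank in range(len(order))]
    return order, scales
```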
  • the image processing unit 116 creates a GUI (Graphical User Interface) according to the partial image extraction method, abnormal region enhancement method, and highlighting selected up to step S55 (S57).
  • This GUI is transmitted to the terminal 20 via the network 5 and displayed on the output device 105 of the terminal 20 (S59).
  • the GUI receives instructions from the user who operates the terminal 20, changes the display according to the instructions, and further displays images such as icons and frames.
  • Examples of the GUI are shown in FIGS. 10 to 14.
  • a wide-angle fundus image P is arranged on the left side, and four partial images D1 to D4 are arranged on the right side thereof.
  • Frames F1, F2, F3, and F4 corresponding to the partial images D1, D2, D3, and D4 are superimposed on the wide-angle fundus image P and displayed.
  • By displaying the frames F1 to F4 it is easy to understand which region of the wide-angle fundus image P each of the partial images D1 to D4 indicates.
  • the partial image D2 displaying the bleeding point B which is the abnormal area, is preferentially displayed in the largest size.
  • FIG. 12 shows the GUI when the enhancement method (see S55) for further enlarging the partial image including the abnormal region is selected. Moreover, all of the partial images D1 to D4 are set so as to display the macula M (see S51). As shown, the partial image D2 in which the bleeding point B is displayed has a larger magnification than the other partial images D1, D3, and D4. Therefore, the user can easily confirm the bleeding point B.
  • the bleeding point B displayed in the partial image D2 is highlighted by a small frame S (see S55).
  • The image processing unit 116 further enlarges the image within the small frame S and displays it as an enlarged image L. Thereby, the user can easily confirm the bleeding point B.
  • the user's instruction is performed by, for example, clicking the image of the small frame S with a mouse or tapping on the display screen.
  • the magnification or angle of view of the enlarged image L is arbitrarily set.
  • the magnification is set to 5 times and the angle of view is set to 30 degrees, which is easy for the doctor to visually recognize.
  • the enlargement ratio or the angle of view preset by the doctor may be used.
  • the user can change the positions of the partial images D1 to D4 and change the range of enlargement by operating the positions and sizes of the frames F1 to F4.
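  • A minimal sketch of producing the enlarged image L from the region inside the small frame S is shown below; it assumes OpenCV, and the 5x magnification is simply the example value mentioned above.

```python
import cv2

def enlarge_region(image, frame_xywh, magnification=5.0):
    """Crop the small frame S from the image and enlarge it to produce the enlarged image L."""
    x, y, w, h = frame_xywh
    roi = image[y:y + h, x:x + w]
    return cv2.resize(roi, None, fx=magnification, fy=magnification,
                      interpolation=cv2.INTER_LINEAR)
```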
  • Icons IC1 to IC4 are displayed as highlights. These highlights differ depending on the type of abnormal finding.
  • For example, icons IC1 to IC4 with different appearances are used for four types of areas: an area judged to be abnormal by the doctor and added to the display, an abnormal area output by the AI, an area judged to be abnormal by both the doctor and the AI, and an area judged not to be abnormal by the doctor for which an instruction to change the display was given on the GUI. In this way, the method of highlighting is differentiated according to the type of abnormal finding.
  • the icons IC1 to IC4 are displayed differently from each other by, for example, changing line types, shapes, colors, and the like.
  • the display order of the partial images D1 to D4 or the display mode of the enlarged image L is changed according to the severity.
  • the mode of highlighting will also change according to the severity. For example, brighter or more saturated color highlighting is used for more severe anomalous regions.
  • Any type of emphasis can be selected regardless of the image division method or GUI display mode shown in FIGS. 10 to 14. It is also possible to combine multiple enhancement methods. For example, in the GUI display in which the partial image D2 is enlarged as shown in FIG. 11, the small frame S and the enlarged image L may be superimposed and displayed on the partial image D2.
  • the image processing unit 116 can cooperate with the ophthalmologic apparatus 30 to perform OCT imaging using the OCT system and acquire a tomographic image (S61).
  • An instruction for OCT imaging can also be accepted via the GUI in the display processing (S59).
  • the user can specify an abnormal region to be subjected to OCT imaging by operating a mouse click or the like.
  • the image processing unit 116 identifies the position of the fundus where the abnormal area is located, and instructs the ophthalmologic apparatus 30 via the network 5 to take an image. Based on this instruction, the ophthalmologic apparatus 30 uses the OCT system to perform OCT imaging on the abnormal region.
  • a tomographic image of the retina obtained by OCT imaging is obtained by the server 10 and displayed on the output device 105 of the terminal 20 . At that time, it may be displayed together with the wide-angle fundus image P or the partial images D1 to D4 on the GUI.
  • OCT imaging may be performed without user instructions.
  • the image processing unit 116 identifies the position of the fundus where the abnormal region exists, and instructs the ophthalmologic apparatus 30 to take an image via the network 5 . Based on this instruction, the ophthalmologic apparatus 30 uses the OCT system to perform OCT imaging on the abnormal region.
  • OCT imaging may also be performed according to the patient's information. For example, if a diagnosis of macular degeneration or the like is recorded in the electronic medical record stored in the database 114, the server 10 instructs the ophthalmologic apparatus 30 to perform OCT imaging of the vicinity of the macula M. Based on this instruction, the ophthalmologic apparatus 30 performs OCT imaging on the abnormal region.
  • the addition process will be explained using the flow of FIG. 9 .
  • the user can check the abnormal findings and their regions in the display process (S59), and then add annotations to the estimation results and save them (S71).
  • the saved annotations are saved in the database 114 and saved in association with the wide-angle fundus image P and the estimation result.
  • the saved annotations and wide-angle fundus image P are used for re-learning of the detection unit 118 (S73).
  • the detection unit 118 can improve the ability to estimate an abnormal finding by learning the wide-angle fundus image P together with annotations indicating the types of abnormal findings and correct abnormal regions.
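  • A hedged sketch of this re-learning step (S73) is shown below; it assumes the PyTorch-style DetectionUnit sketched earlier and a dataset yielding (image, label) pairs built from the saved annotations, with illustrative hyperparameters.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def relearn(model: nn.Module, annotated_dataset, epochs: int = 5):
    """Fine-tune the detection unit on wide-angle fundus images and their annotations."""
    loader = DataLoader(annotated_dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.BCELoss()  # the model outputs per-finding likelihoods in [0, 1]
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels.float())
            loss.backward()
            optimizer.step()
    return model
```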
  • In the first embodiment described above, the detection unit 118 uses a trained model based on a CNN.
  • However, instead of a trained model obtained by machine learning, an image processing algorithm may be used for estimating an abnormal finding.
  • a second embodiment will be described below.
  • FIG. 15 shows the software configuration of the server 10 according to the second embodiment.
  • the second embodiment includes a detection unit 119 configured by an image processing algorithm.
  • the configuration of the server 10, the terminal 20, and the ophthalmologic apparatus 30 is the same as in the first embodiment, except for the detection unit 119.
  • the same reference numerals as used in the first embodiment are assigned to the same configurations as in the first embodiment, and the description thereof is omitted.
  • the estimation process (S3) by the detection unit 119 in the second embodiment will be described below.
  • an algorithm for estimating the presence or absence of abnormal findings such as bleeding in the wide-angle fundus image P will be described.
  • In step S35, the detection unit 119 extracts a blood region.
  • The blood region is a region in the wide-angle fundus image P that includes a blood vessel region showing blood vessels and a bleeding region formed by bleeding from blood vessels.
  • Through step S35, an image obtained by extracting the blood region from the wide-angle fundus image P is obtained.
  • In step S36, the detection unit 119 extracts only the blood vessel region.
  • a general image analysis method is applied in the region extraction in steps S35 and S36.
  • The wide-angle fundus image P is expressed in 256 gradations or the like, and is converted into a binarized image by comparing the gradation value of each pixel with a threshold value.
  • In step S35, the threshold value is set so as to distinguish the blood region from other regions.
  • In step S36, the threshold value is set so as to distinguish the blood vessel region from other regions.
  • Various methods, such as the mode method, can be adopted for setting the threshold value.
  • In step S37, the detection unit 119 removes noise in the binarized image, particularly around the blood vessel region.
  • For the noise removal processing, for example, dilation processing and erosion processing are used.
  • Through step S37, an image in which the blood vessel region has been extracted from the wide-angle fundus image P is obtained.
  • In step S38, the detection unit 119 can obtain an image showing the bleeding region by taking the difference between the image showing the blood region and the image showing the blood vessel region.
  • The detection unit 119 estimates the presence or absence of bleeding based on the image acquired in this way (S39). When determining that there is bleeding, the detection unit 119 outputs an abnormal finding of bleeding and an image showing the bleeding region.
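  • The following is an illustrative OpenCV sketch of this second-embodiment flow (S35 to S39): binarize to obtain the blood region, binarize and clean to obtain the blood vessel region, and take their difference to isolate a candidate bleeding region. The thresholds, kernel size, use of the green channel, and the minimum-area check are assumptions made for the sketch, not values from the present disclosure.

```python
import cv2
import numpy as np

def estimate_bleeding(fundus_bgr: np.ndarray, blood_thr=70, vessel_thr=50, min_area=100):
    gray = fundus_bgr[:, :, 1]  # green channel often gives good blood/vessel contrast

    # S35: blood region (vessels plus hemorrhages) appears dark -> inverse binarization.
    _, blood = cv2.threshold(gray, blood_thr, 255, cv2.THRESH_BINARY_INV)

    # S36: blood vessel region, using a stricter threshold.
    _, vessels = cv2.threshold(gray, vessel_thr, 255, cv2.THRESH_BINARY_INV)

    # S37: noise removal around the blood vessel region using dilation and erosion.
    kernel = np.ones((3, 3), np.uint8)
    vessels = cv2.dilate(vessels, kernel, iterations=1)
    vessels = cv2.erode(vessels, kernel, iterations=1)

    # S38: difference between the blood region and the blood vessel region -> bleeding region.
    bleeding = cv2.subtract(blood, vessels)

    # S39: estimate that bleeding is present if the remaining region is large enough.
    has_bleeding = cv2.countNonZero(bleeding) > min_area
    return has_bleeding, bleeding
```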
  • As described above, the image processing in the above embodiments is an image processing method including: a process (S1) of acquiring a wide-angle fundus image (P) obtained by photographing with an ophthalmologic apparatus; a process of extracting a plurality of partial images (D1 to D4) from the wide-angle fundus image to which an abnormal finding has been added; a selection process (S51) of selecting, from the plurality of partial images, a first partial image showing the abnormal region (B) that is the basis of the abnormal finding; and a display process (S59) of displaying the first partial image.
  • the partial areas where abnormal findings are observed are displayed on the wide-angle fundus image P so that they can be distinguished. Further, a partial image having an abnormal finding among the partial images D1 to D4 can be enlarged and displayed.
  • the above embodiment includes an estimation process (S3) for estimating an abnormal finding in the wide-angle fundus image P.
  • the estimation processing includes processing (S31) of inputting the wide-angle fundus image P to the detection units 118 and 119 for estimating abnormal findings, and processing for causing the detection units 118 and 119 to estimate abnormal findings.
  • the above embodiment includes a process of accepting addition of annotations related to abnormal findings for the wide-angle fundus image P (S71), and a process of re-learning the detection unit 118 with the annotations and the wide-angle fundus image P (S73).
  • Since the detection unit 118 re-learns using the annotated data, the accuracy of the estimation processing of the detection unit 118 can be improved. That is, it is possible to reduce the probability of outputting false positives or false negatives.
  • In the above embodiments, the severity of the abnormal finding is estimated. Also, the display method of the first partial image is changed according to the estimated severity.
  • The above-described embodiments further include a process (S61) of specifying the position of the abnormal region on the fundus and a process (S61) of acquiring a tomographic image at the specified position by performing OCT imaging of the specified position with the OCT system.
  • Although the partial area setting of the wide-angle fundus image P is performed using rectangular areas in the above embodiments, the present invention is not limited to this.
  • the division may be performed by extending the division lines radially, or by dividing into slits.
  • Annotations can also include the shape of partial area settings.
  • the partial area setting shape determined by the user to be optimal may be included in the annotation, and the detection unit 119 may learn and re-learn.
  • the optimum partial area shape and partial area position are output in accordance with the abnormal findings estimated by the detection unit 119 .
  • partial image extraction of the wide-angle fundus image P may be performed according to the partial image extraction method output by the detection unit 119 .
  • In the above embodiments, the display mode in the display processing (S59) and the like is a mode in which images, colors, and shapes are used to distinguish and display abnormal areas, but the present invention is not limited to such a visual mode.
  • a mode of setting display variations by voice may be used.
  • Although the ophthalmologic apparatus 30 described above includes an OCT system, the present invention is not limited to this, and an ophthalmologic apparatus that does not have an OCT system or that uses another system may be used.
  • a plurality of terminals are connected to one server 10, and the above functions are exhibited.
  • the present invention does not limit the number of servers or the number of terminals, and for example, the functions described above may be realized by only one device. Also, the number of terminals and the number of servers may be further increased. Also, each function does not necessarily have to be implemented by the server 10 or the like, and may be shared by a plurality of devices to implement the function. That is, the present invention does not limit the number of controllers or devices, or the sharing of functions among devices.

Abstract

This image processing method comprises: a process of acquiring a wide-angle fundus image obtained by imaging with an ophthalmic apparatus; a process of extracting a plurality of partial images from the wide-angle fundus image to which an abnormal finding has been added; a selection process of selecting, from among the plurality of partial images, a first partial image showing an abnormal region that provides the basis for the abnormal finding; and a display process of displaying the first partial image.

Description

Image processing method, program, image processing apparatus, and ophthalmic system
 The present invention relates to an image processing method, a program, an image processing apparatus, and an ophthalmic system.
 Conventionally, a method of displaying an enlarged image in response to a user's operation in order to observe details in a fundus image has been known (see, for example, Patent Document 1).
U.S. Patent Application Publication No. 2020/0069175
 The technology of the present disclosure provides a novel image processing method.
 One embodiment of the present invention is an image processing method including: processing for acquiring a wide-angle fundus image obtained by photographing with an ophthalmologic apparatus; processing for extracting a plurality of partial images from the wide-angle fundus image to which an abnormal finding has been added; a selection process for selecting, from the plurality of partial images, a first partial image showing the abnormal region on which the abnormal finding is based; and a display process for displaying the first partial image.
FIG. 1 is an overall configuration diagram of an information processing system according to a first embodiment. FIG. 2 is a diagram showing the hardware configuration of an information processing apparatus according to the first embodiment. FIG. 3 is a diagram showing the configuration of an ophthalmologic apparatus according to the first embodiment. FIG. 4 is a diagram showing the functional configuration of a server according to the first embodiment. FIG. 5 is a diagram showing the configuration of a detection unit according to the first embodiment. FIG. 6 is a flowchart showing an overview of processing according to the first embodiment. FIG. 7 is a flowchart showing estimation processing according to the first embodiment. FIG. 8 is a flowchart showing output processing according to the first embodiment. FIG. 9 is a flowchart showing addition processing according to the first embodiment. FIG. 10 is a diagram showing an example of partial area setting of a wide-angle fundus image and an example of a GUI using the generated partial images. FIG. 11 is a diagram showing an example of partial area setting of a wide-angle fundus image and an example of a GUI using the generated partial images. FIG. 12 is a diagram showing an example of a GUI. FIG. 13 is a diagram showing an example of a GUI. FIG. 14 is a diagram showing an example of a GUI. FIG. 15 is a diagram showing the functional configuration of a server according to a second embodiment. FIG. 16 is a flowchart showing estimation processing according to the second embodiment.
<First Embodiment>
 Hereinafter, the present invention will be described with reference to the drawings in accordance with a first embodiment, which is one embodiment of the invention.
〔Configuration〕
 FIG. 1 shows the configuration of an information processing system 1 according to one embodiment of the present invention. The information processing system 1 includes a server 10, a terminal 20, and an ophthalmologic apparatus 30. The server 10, the terminal 20, and the ophthalmologic apparatus 30 are connected via the network 5 so as to be able to transmit and receive data to and from each other.
 The network 5 is a wireless or wired communication means such as the Internet, a WAN (Wide Area Network), a LAN (Local Area Network), a public communication network, or a dedicated line. Although the information processing system 1 according to this embodiment is composed of a plurality of information management devices, the present invention does not limit the number of these devices. Therefore, the information processing system 1 can be configured with one or more devices as long as they provide the functions described below.
 The server 10 and the terminal 20 are information processing devices installed in medical institutions such as hospitals and clinics, and are mainly operated by staff of the medical institutions to acquire images captured by the ophthalmologic apparatus 30 and to edit and analyze those images.
 The ophthalmologic apparatus 30 is a device that performs SLO (Scanning Laser Ophthalmoscope) and OCT (Optical Coherence Tomography) imaging (FIG. 3). The ophthalmologic apparatus 30 has a control device 31 and an imaging device 32.
 FIG. 2 shows an example of the hardware (hereinafter referred to as the "information processing apparatus 100") used to realize the server 10, the terminal 20, and the control device 31 of the ophthalmologic apparatus 30. As shown in the figure, the information processing apparatus 100 includes a processor 101, a main storage device 102, an auxiliary storage device 103, an input device 104, an output device 105, and a communication device 106. These are communicably connected to each other via communication means such as a bus (not shown).
 Note that the information processing apparatus 100 does not necessarily have to be implemented entirely by hardware; all or part of its configuration may be realized by virtual resources such as a cloud server of a cloud system.
 The processor 101 is configured using a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like. The functions of the server 10, the terminal 20, and the control device 31 are implemented by the processor 101 reading out and executing programs stored in the main storage device 102.
 The main storage device 102 is a device that stores programs and data, such as a ROM (Read Only Memory), a RAM (Random Access Memory), or a nonvolatile semiconductor memory (NVRAM (Non Volatile RAM)).
 The auxiliary storage device 103 is, for example, an SSD (Solid State Drive), various non-volatile memories such as SD memory cards, a hard disk drive, an optical storage device (CD (Compact Disc), DVD (Digital Versatile Disc), etc.), a storage area of a cloud server, or the like. Programs and data stored in the auxiliary storage device 103 are read into the main storage device 102 as needed.
 The input device 104 is an interface that accepts input of information, such as a keyboard, a mouse, a touch panel, a card reader, a voice input device (microphone, etc.), or a voice recognition device. The information processing apparatus 100 may be configured to receive input of information from another device via the communication device 106.
 The output device 105 is an interface that outputs various types of information, for example, a screen display device (liquid crystal monitor, LCD (Liquid Crystal Display), graphics card, etc.), a printing device, an audio output device (speaker, etc.), or a speech synthesizer. The information processing apparatus 100 may be configured to output information to another device via the communication device 106. The output device 105 corresponds to the display unit in the present invention.
 The communication device 106 is a wired or wireless communication interface that realizes communication with other devices via the network 5, such as a NIC (Network Interface Card), a wireless communication module, a USB (Universal Serial Bus) module, or a serial communication module.
 The configuration of the ophthalmologic apparatus 30 is shown in FIG. 3. The ophthalmologic apparatus 30 includes an imaging device 32 and a control device 31. Note that the control device 31 may be provided in the same housing as the imaging device 32 or may be provided separately from the imaging device 32.
 The imaging device 32 operates under the control of the control device 31. The imaging device 32 includes an SLO unit 33, an imaging optical system 34, and an OCT unit 35. The imaging optical system 34 includes an optical scanner 341 and a wide-angle optical system 342. The imaging device 32 captures an image of the subject's eye. For example, the imaging device 32 captures the fundus of the subject's eye and acquires a fundus image and a tomographic image (OCT image), which will be described later.
 The SLO unit 18 acquires an image of the fundus 12A of the subject's eye 12. The OCT unit 20 acquires a tomographic image of the subject's eye 12. Hereinafter, a front-view image of the retina created based on SLO data acquired by the SLO unit 18 is referred to as an SLO image, and a tomographic image or a front-view image (en-face image) of the retina created based on OCT data acquired by the OCT unit 20 is referred to as an OCT image. Note that the SLO image is sometimes referred to as a two-dimensional fundus image. An OCT image may also be referred to as a fundus tomographic image, a posterior segment tomographic image, or an anterior segment tomographic image, depending on the imaging region of the subject's eye 12.
 Hereinafter, an image captured by the ophthalmologic apparatus 30 may also be referred to as an image of the eye to be examined. When the image of the subject's eye is an image of the fundus captured using a wide-angle optical system as described later, it may also be referred to as a wide-angle fundus image P.
 光学スキャナ341は、SLOユニット33から射出された光をX方向、及びY方向に走査する。光学スキャナ341は、光束を偏向できる光学素子であればよく、例えば、ポリゴンミラーやガルバノミラー等を用いることができる。 The optical scanner 341 scans the light emitted from the SLO unit 33 in the X direction and the Y direction. The optical scanner 341 may be an optical element capable of deflecting a light beam, and for example, a polygon mirror, a galvanomirror, or the like can be used.
 広角光学系342は、対物光学系を含む。広角光学系342によって、眼底において広角の視野が得られる。SLOシステムは、図3に示す制御装置31、SLOユニット33、及び撮影光学系34によって実現される。SLOシステムは、広角光学系342を備えるため、眼底において広い視野(FOV:Field of View)での観察が実現される。FOVは、撮影装置32によって撮影可能な範囲を示している。FOVは、視野角として表現され得る。視野角は、本実施の形態において、内部照射角と外部照射角とで規定され得る。外部照射角とは、UWF眼科装置110から被検眼へ照射される光束の照射角を、瞳孔を基準として規定した照射角である。また、内部照射角とは、眼底12Aへ照射される光束の照射角を、眼球中心Oを基準として規定した照射角である。外部照射角と内部照射角とは、対応関係にある。例えば、外部照射角が120度の場合、内部照射角は約160度に相当する。本実施の形態では、内部照射角は200度としている。ここで、内部照射角で160度以上の撮影画角で撮影されて得られたSLO眼底画像をUWF-SLO眼底画像と称する。 The wide-angle optical system 342 includes an objective optical system. Wide-angle optics 342 provide a wide-angle field of view at the fundus. The SLO system is realized by the control device 31, the SLO unit 33, and the imaging optical system 34 shown in FIG. Since the SLO system includes the wide-angle optical system 342, observation in a wide field of view (FOV) is realized at the fundus. FOV indicates a range that can be photographed by the photographing device 32 . FOV can be expressed as a viewing angle. A viewing angle can be defined by an internal illumination angle and an external illumination angle in this embodiment. The external irradiation angle is an irradiation angle defined by using the pupil as a reference for the irradiation angle of the light flux irradiated from the UWF ophthalmic apparatus 110 to the eye to be examined. Further, the internal illumination angle is an illumination angle defined with the center O of the eyeball as a reference for the illumination angle of the luminous flux with which the fundus 12A is illuminated. The external illumination angle and the internal illumination angle are in correspondence. For example, an external illumination angle of 120 degrees corresponds to an internal illumination angle of approximately 160 degrees. In this embodiment, the internal illumination angle is 200 degrees. Here, an SLO fundus image obtained by photographing at an internal illumination angle of 160 degrees or more is referred to as a UWF-SLO fundus image.
 The wide-angle optical system 342 may be a reflective optical system using a concave mirror such as an elliptical mirror, a refractive optical system using a wide-angle lens, or a catadioptric optical system combining a concave mirror and lenses. By using a wide-angle optical system employing an elliptical mirror, a wide-angle lens, or the like, it is possible to image not only the central part of the fundus but also the retina in the peripheral part of the fundus. When a system including an elliptical mirror is used, the system using an elliptical mirror described in International Publication WO2016/103484 or WO2016/103489 may be used. The disclosures of International Publication WO2016/103484 and International Publication WO2016/103489 are each incorporated herein by reference in their entirety.
 The SLO unit 33 includes a light source 331B for blue light (B light), a light source 331G for green light (G light), a light source 331R for red light (R light), a light source 331IR for infrared light (IR light) such as near-infrared light, and an optical system 335 that reflects or transmits the light from these light sources to guide it into a single optical path. The SLO unit 33 also includes a beam splitter 332 and detection elements 333B, 333G, 333R, and 333IR that detect B light, G light, R light, and IR light, respectively.
 The SLO unit 33 can switch the light source, or combination of light sources, to be emitted, for example between a mode that emits B light, G light, and R light and a mode that emits IR light.
 The beam splitter 332 has the function of separating the light reflected from the fundus into B light, G light, R light, and IR light and reflecting each component toward the detection elements 333B, 333G, 333R, and 333IR.
 The detection elements 333B, 333G, 333R, and 333IR detect B light, G light, R light, and IR light, respectively.
 The light that enters the imaging optical system 34 from the SLO unit 33 is scanned in the X direction and the Y direction by the optical scanner 341. The scanning light passes through the wide-angle optical system 342 and illuminates the fundus. The light reflected by the fundus enters the SLO unit 33 via the wide-angle optical system 342 and the optical scanner 341.
 The reflected light that enters the SLO unit 33 is separated by the beam splitter 332 into B light, G light, R light, and IR light, which are detected by the detection elements 333B, 333G, 333R, and 333IR, respectively.
 By collecting the information on the B light, G light, R light, and IR light detected by the detection elements 333B, 333G, 333R, and 333IR, the processor 101 of the control device 31 can generate an SLO fundus image.
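 Purely as an illustrative sketch of this last step, composing a displayable color SLO fundus image from the separately detected channels could look like the following Python code; the array names, normalization, and channel ordering are assumptions and are not part of the disclosed apparatus.

```python
import numpy as np

def compose_slo_image(det_b, det_g, det_r):
    """Assemble a color SLO fundus image from per-channel detector frames.

    det_b, det_g, det_r: 2-D arrays of detector intensities collected while
    the scanner sweeps the fundus in the X and Y directions (assumed inputs).
    """
    channels = []
    for frame in (det_r, det_g, det_b):      # RGB order for display
        frame = frame.astype(np.float64)
        frame -= frame.min()                  # normalize each channel to 0..255
        if frame.max() > 0:
            frame *= 255.0 / frame.max()
        channels.append(frame.astype(np.uint8))
    return np.stack(channels, axis=-1)        # H x W x 3 color fundus image
```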
 The OCT system is constituted by the control device 31, the OCT unit 35, and the imaging optical system 34 shown in FIG. 3. The OCT unit 35 includes a light source 351, a sensor 352, a first optical coupler 353, a reference optical system 354, a collimating lens 355, and a second optical coupler 356.
 The light emitted from the light source 351 is split by the first optical coupler 353. One of the split beams is collimated by the collimating lens 355 as measurement light and enters the imaging optical system 34. The light that enters the imaging optical system 34 is scanned in the X and Y directions by the optical scanner 341, passes through the wide-angle optical system 342, and illuminates the fundus. The measurement light reflected by the fundus enters the OCT unit 35 via the wide-angle optical system 342 and, via the collimating lens 355 and the first optical coupler 353, enters the second optical coupler 356.
 The other beam emitted from the light source 351 and split by the first optical coupler 353 enters the second optical coupler 356 as reference light via the reference optical system 354.
 The reference light and the measurement light reflected by the fundus interfere at the second optical coupler 356 to generate interference light, which is received by the sensor 352. The control device 31 receives the signal from the sensor 352 and generates a tomographic image. Imaging using the OCT system and the image obtained by such imaging may hereinafter simply be referred to as OCT imaging and an OCT image, respectively.
 [Software configuration]
 FIG. 4 shows the main functions (functional configuration) of the server 10. As shown in the figure, the server 10 has the functions of a database 114 and a management unit 120. The management unit 120 in particular has the functions of an image processing unit 116 and a detection unit 118. The database 114 is stored in the main storage device 102 of the server 10. Each function of the management unit 120 is implemented by the processor 101 of the server 10 reading and executing a program stored in the main storage device 102 of the server 10.
 In addition to the above functions, the server 10 also has functions such as an operating system, a file system, device drivers, and a DBMS (DataBase Management System).
 The management unit 120 performs the processing executed by the server 10, such as acquisition and management of images. The images acquired and managed by the management unit include images captured by the ophthalmologic apparatus 30. The image processing unit 116 mainly generates the GUI and processes the images captured by the ophthalmologic apparatus 30.
 The detection unit 118 has a function of estimating, from an image captured by the ophthalmologic apparatus 30, the presence or absence of a finding that there is an abnormality such as bleeding or retinal detachment in the fundus including the retina or choroid (hereinafter referred to as an "abnormal finding"), together with its details. In the detection unit 118, the presence or absence of an abnormal finding and its details are estimated on the basis of an abnormal region in the image of the subject's eye. In the present embodiment, the detection unit 118 is a trained model generated by machine learning.
 Specifically, the detection unit 118 is a model that performs deep learning to learn the image feature amounts of abnormal regions in images of the subject's eye. The detection unit 118 constructs a neural network that outputs, for an input image of the subject's eye, information indicating the result of estimating the presence or absence of an abnormal finding. For example, the neural network is a deep neural network (DNN).
 The detection unit 118 has an input layer that receives the image of the subject's eye, an output layer that outputs the estimation result regarding the presence or absence of an abnormal finding, and intermediate layers that extract the image feature amounts of the image of the subject's eye (FIG. 5). The input layer, the output layer, and the intermediate layers each have nodes (indicated by white circles in the figure), and the nodes of these layers are connected by edges (indicated by arrows in the figure). Note that the configuration of the detection unit 118 shown in FIG. 5 is an example, and the numbers of nodes and edges, the number of intermediate layers, and the like can be changed as appropriate.
 When the detection unit 118 is a CNN (Convolutional Neural Network), the intermediate layers have convolution layers that convolve the pixel values of the pixels in the image of the subject's eye input from the input layer and pooling layers that map the pixel values, and these layers are used to extract the feature amounts of the image of the subject's eye. The output layer has one or more neurons that output the result of estimating the abnormal findings of the input image of the subject's eye.
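 A minimal sketch of a CNN of this kind, written in Python with PyTorch, is shown below purely for illustration; the layer sizes, the number of finding classes, and the use of PyTorch itself are assumptions and not the configuration actually used by the detection unit 118.

```python
import torch
import torch.nn as nn

class AbnormalityCNN(nn.Module):
    """Toy CNN: convolution and pooling layers extract image features,
    a final linear layer outputs a per-finding probability."""

    def __init__(self, num_findings: int = 4):  # e.g. hemorrhage, neovascularization, ...
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling layer maps (downsamples) features
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_findings)

    def forward(self, fundus_image: torch.Tensor) -> torch.Tensor:
        x = self.features(fundus_image)           # input layer -> intermediate layers
        x = torch.flatten(x, 1)
        logits = self.classifier(x)               # output layer
        return torch.sigmoid(logits)              # likelihood per finding in 0..1
```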
 The detection unit 118 can also output, together with the estimation result, the likelihood of that result. The likelihood is, for example, a probability value output from the output layer of the detection unit 118; for example, the reliability of the estimated abnormal finding is expressed as a value from 0 to 1. By notifying the user of the likelihood, the user can know how accurate the estimation result is.
 The detection unit 118 also outputs the severity of the abnormal finding. The severity can be described as the seriousness of the symptom, the grade of the symptom, the speed of progression, the magnitude of the effect of the symptom on the human body, and so on. If the symptom is, for example, bleeding, the detection unit 118 estimates the severity of the bleeding from its size, amount, and the like. The detection unit 118 likewise outputs a severity for other abnormal findings such as retinal detachment and neovascularization.
 In the present embodiment, the detection unit 118 is described as being a CNN; however, the detection unit 118 is not limited to a CNN and may be a neural network other than a CNN or a trained model constructed with another learning algorithm.
 The database 114 stores images obtained by imaging fundus tissue including the retina and choroid, such as wide-angle fundus images P and OCT images. Annotations can also be added to an image of the subject's eye by a medical professional or the like, and the database 114 can store, in association with the image of the subject's eye and images showing parts of it, the annotated locations in the image and the contents of the annotations. The stored data are used for training and retraining of the detection unit 118. An annotation includes information indicating an abnormal region of the fundus and the details of the abnormal finding, such as macular degeneration, attached to that region. Therefore, when used for training the detection unit 118, the annotations function as data indicating the correct answers for the abnormal findings and abnormal regions of the input image. The database 114 also stores patients' medical records such as electronic medical records, including past records of images of the subject's eye, patient IDs, and other data. The database 114 further stores information on the findings obtained by the detection unit 118, described later, in association with the images.
 The deep learning of the detection unit 118 uses a data set containing a large number of images including abnormal regions such as bleeding and neovascularization, together with the annotations attached to these images. The deep learning of the detection unit 118 is carried out by having the detection unit 118 learn or relearn this data set.
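 As a hypothetical illustration only, one record of such an annotated data set could be represented as follows; the field names, the bounding-box format, and the file name are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AnnotatedFundusRecord:
    """Hypothetical training record: a wide-angle fundus image plus the
    ground-truth abnormal regions annotated by a medical professional."""
    image_path: str
    # Each annotation: (finding type, bounding box (x, y, w, h) of the abnormal region)
    annotations: List[Tuple[str, Tuple[int, int, int, int]]] = field(default_factory=list)

# Example entry in a training data set (values are illustrative only)
record = AnnotatedFundusRecord(
    image_path="uwf_slo_0001.png",
    annotations=[("hemorrhage", (1210, 830, 64, 48))],
)
```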
 [Processing]
 An example of the processing executed by the information processing system 1 will be described below with reference to the flowcharts of FIGS. 6 to 9. A program stored in the main storage device 102 of the server 10 is started, and the processing of the information processing system 1 is executed by the management unit 120 of the server 10 as follows. In the following, processing executed by the management unit 120 of the server 10 may be described simply as being executed by "the server 10".
 As shown in FIG. 6, the processing performed by the server 10 is broadly composed of four steps. First, the server 10 acquires an image of the subject's eye obtained by the ophthalmologic apparatus 30 imaging the retina (S1). In this example, a wide-angle fundus image P is used as the image captured by the ophthalmologic apparatus 30. As described above, the wide-angle fundus image P is captured by the SLO system of the ophthalmologic apparatus 30. The management unit 120 acquires the wide-angle fundus image P stored in the ophthalmologic apparatus 30 via the network 5.
 Next, the management unit 120 estimates the presence or absence of an abnormal finding in the acquired wide-angle fundus image P (S3). After that, the management unit 120 performs output processing of the image of the subject's eye for which the presence or absence of an abnormal finding has been estimated (S5).
 Next, the management unit 120 performs additional processing (S7). The additional processing includes entry into an electronic medical record, retraining, and the like, and is executed mainly in response to instructions from the user.
 (Estimation processing)
 Details of the estimation processing (S3) are shown in FIG. 7 and described below. In step S31, the management unit 120 inputs the wide-angle fundus image P to the detection unit 118.
 Next, the management unit 120 acquires the estimation result of the detection unit 118 regarding the presence or absence of an abnormality (S33). When an abnormality is estimated to be present, the output of the detection unit 118 includes the type of the abnormal finding and information specifying the abnormal region. An abnormal region is a region that differs from a normal eye. Examples of abnormal regions captured in an image of the subject's eye include bleeding points, neovascular regions, retinal detachment regions, and non-perfused regions in the retina or choroid. A finding made on the basis of an abnormal region is an abnormal finding, and examples of types of abnormal findings include the presence or absence and the degree of bleeding, neovascularization, retinal detachment, and non-perfused regions in the retina or choroid. FIG. 10 shows, in connection with an example of an abnormal finding, a wide-angle fundus image P including the macula M. As the estimation result, the image region estimated to include the abnormal region and an identification result identifying the type of the abnormal finding are obtained.
 The detection unit 118 also outputs the likelihood of the estimation result and the severity of the abnormal finding. The management unit 120 then generates abnormal finding information including the image region estimated to include the abnormal region, the identification result identifying the type of the abnormal finding, the likelihood of the estimation result, the severity of the abnormal finding, and the like.
 As a result of the above, when an abnormal finding with sufficient likelihood is output, the wide-angle fundus image P is stored in a state in which the abnormal finding information is associated with it. The estimation processing thus generates a wide-angle fundus image P associated with abnormal finding information.
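 Conceptually, the abnormal finding information bundles the estimated region, the type, the likelihood, and the severity. One hypothetical representation is sketched below; the field names and example values are assumptions, not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AbnormalFinding:
    """Hypothetical container for one estimated abnormal finding."""
    finding_type: str                      # e.g. "hemorrhage", "retinal_detachment"
    region: Tuple[int, int, int, int]      # (x, y, w, h) of the abnormal region in image P
    likelihood: float                      # certainty of the estimate, 0.0 .. 1.0
    severity: int                          # estimated grade of the finding

finding = AbnormalFinding("hemorrhage", (1210, 830, 64, 48), likelihood=0.92, severity=2)
```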
 (Output processing)
 Details of the output processing (S5) will be described below with reference to FIGS. 8, 10, and 11. FIG. 10 shows an example of a wide-angle fundus image P to which an abnormal finding has been given by the estimation processing (S3). In the following description, bleeding is used as an example of an abnormal finding. In the wide-angle fundus image P, a bleeding point B is shown as an abnormal region.
 Using the image processing unit 116, the management unit 120 extracts partial images showing partial regions in the wide-angle fundus image P (S51). The management unit 120 sets partial regions of the wide-angle fundus image P and generates the partial images. A plurality of partial images are extracted, and the plurality of partial images may or may not have regions that overlap one another. When an abnormal finding has been given, the regions of the partial images are determined so that the abnormal region is included in at least one of the plurality of partial images. When a plurality of abnormal regions are located apart from one another, the regions of the partial images may be determined so that a single partial image includes all of the abnormal regions, or so that the abnormal regions are distributed over a plurality of partial images.
 In FIGS. 10 and 11, the frames F1, F2, F3, and F4 superimposed on the wide-angle fundus image P indicate four partial regions extracted from the wide-angle fundus image P. On the right side of each figure, the four partial images D1, D2, D3, and D4 generated by extracting the respective partial regions from the wide-angle fundus image P are shown. The partial images D1, D2, D3, and D4 correspond to the regions indicated by the frames F1, F2, F3, and F4, respectively.
 The image processing unit 116 extracts the partial images from the wide-angle fundus image P so that one of the partial regions includes the bleeding point B, which is the abnormal region. In the examples of FIGS. 10 and 11, the region of the frame F2 includes the bleeding point B, and the other regions (F1, F3, F4) do not. As shown as one example in FIG. 10, the image processing unit 116 determines the size and placement of the regions so that the partial images D1, D2, D3, and D4 all include the entire macula M. At the same time, the image processing unit 116 determines the size and placement of the regions so that the bleeding point B is included in one of the regions, in other words, so that the bleeding point B is displayed in one of the partial images. As a result, as shown on the right side of FIG. 10, the macula M is displayed in each of the partial images D1, D2, D3, and D4, and the bleeding point B is displayed in the partial image D2.
 When characteristic structures of the fundus such as the macula and the optic disc are displayed in each partial image, as in the present embodiment, the user can easily understand the position of each partial image and of the abnormal region within the wide-angle fundus image P.
 In the partial image extraction example of FIG. 11, the image processing unit 116 determines the division positions, with the macula M at the center, so that the regions corresponding to the frames F1 to F4, that is, to the partial images D1 to D4, do not overlap one another. The image processing unit 116 also adjusts the placement and size of the partial images so that the bleeding point B is displayed in one of them. As a result, the bleeding point B is displayed in the partial image D1.
 When there is no abnormal region, the partial images are set according to a predetermined method. For example, the placement and size of each image may be determined so that the frames F1 to F4, that is, the partial images D1 to D4, all include the entire macula M (FIG. 10). Alternatively, the division positions may be determined, with the macula M at the center, so that the regions of the frames F1 to F4, that is, of the partial images D1 to D4, do not overlap one another (FIG. 11).
 The image processing unit 116 extracts the partial images from the wide-angle fundus image P as described above and creates the partial images D1, D2, D3, and D4.
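 As a hedged sketch only, one way to place four partial regions so that each contains the macula while the abnormal region falls inside at least one of them is shown below; the layout rule, overlap factor, and function names are assumptions for illustration and not the procedure of the embodiment.

```python
def quadrant_regions(img_w, img_h, macula_xy, overlap=0.15):
    """Return four (x0, y0, x1, y1) regions, one per corner of the image.

    Each region spans from one image corner to a point past the macula, so
    every partial image D1..D4 contains the macula; 'overlap' widens the
    regions so that nearby abnormal areas are not cut off at the macula.
    """
    mx, my = macula_xy
    dx, dy = int(img_w * overlap), int(img_h * overlap)
    return [
        (0, 0, min(img_w, mx + dx), min(img_h, my + dy)),         # upper-left  (F1)
        (max(0, mx - dx), 0, img_w, min(img_h, my + dy)),         # upper-right (F2)
        (0, max(0, my - dy), min(img_w, mx + dx), img_h),         # lower-left  (F3)
        (max(0, mx - dx), max(0, my - dy), img_w, img_h),         # lower-right (F4)
    ]

def region_containing(regions, point):
    """Index of the first region containing an abnormal point, or None."""
    px, py = point
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= px < x1 and y0 <= py < y1:
            return i
    return None
```

 Because the four regions together cover the whole image, any abnormal point is guaranteed to lie in at least one partial image under this assumed layout.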
 Next, the image processing unit 116 performs image processing for sharpening the abnormal region (S53). In the examples of FIGS. 10 and 11, in the partial image D2 displaying the bleeding point B, processing is performed to change the pixel values of the bleeding point B and of the other areas so that the bleeding point B is easier to see. The image processing unit 116 also removes reflections of eyelashes and other artifacts from the partial images D1, D2, D3, and D4 and from the wide-angle fundus image P so that the images are easy to view.
 The image processing unit 116 further performs processing for setting a method of emphasizing the abnormal region (S55). The emphasis method is selected as appropriate according to the type, severity, and likelihood of the abnormal finding. For example, when abnormal findings are recognized at multiple locations, the partial image including the abnormal region with the highest severity is set to be displayed preferentially in the subsequent display processing (S59). The image processing unit 116 also performs processing for setting a display method that displays a partial image in which an abnormal region appears in a manner distinguished from the partial images in which no abnormal region appears. Specifically, the magnification and display order of the partial images in which the abnormal region appears are selected according to the type or likelihood of the abnormal finding. Furthermore, the image processing unit 116 selects an image such as a frame or icon to be superimposed on the portion to be emphasized, according to the type or likelihood of the abnormal finding, and sets it to be shown as highlighting in the subsequent display processing (S59). These emphasis methods may also be set by accepting input from the user.
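 A simplified, assumed mapping from the finding attributes to an emphasis policy might look like the following; the thresholds, magnifications, and marker names are hypothetical and are not values taken from the embodiment.

```python
def choose_emphasis(finding_type, severity, likelihood):
    """Pick display priority, magnification, and marker style for one finding."""
    priority = severity * 10 + int(likelihood * 10)   # more severe / certain -> shown first
    magnification = 2.0 if severity >= 2 else 1.5      # enlarge serious regions more
    marker = {
        "hemorrhage": "red_frame",
        "neovascularization": "orange_frame",
        "retinal_detachment": "blue_frame",
    }.get(finding_type, "default_frame")
    return {"priority": priority, "magnification": magnification, "marker": marker}
```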
 The image processing unit 116 creates a GUI (Graphical User Interface) according to the partial image extraction method, the abnormal region emphasis method, and the highlighting selected up to step S55 (S57). This GUI is transmitted to the terminal 20 via the network 5 and displayed on the output device 105 of the terminal 20 (S59). The GUI accepts instructions from the user operating the terminal 20, changes the display in response to those instructions, and additionally displays images such as icons and frames.
 Examples of the GUI are shown in FIGS. 10 to 14. As shown in each figure, in the GUI the wide-angle fundus image P is arranged on the left, and the four partial images D1 to D4 are arranged to its right. The frames F1, F2, F3, and F4 corresponding to the partial images D1, D2, D3, and D4 are displayed superimposed on the wide-angle fundus image P. The display of the frames F1 to F4 makes it easy to understand which region of the wide-angle fundus image P each of the partial images D1 to D4 shows.
 In FIG. 10, the partial image D2, in which the bleeding point B as the abnormal region is displayed, is given priority and displayed largest.
 FIG. 12 shows the GUI when the emphasis method of further enlarging the partial image including the abnormal region (see S55) is selected. The partial images D1 to D4 are all set so as to display the macula M (see S51). As illustrated, the partial image D2 in which the bleeding point B is displayed has a larger magnification than the other partial images D1, D3, and D4, which makes it easy for the user to confirm the bleeding point B.
 In the GUI of FIG. 13, the bleeding point B displayed in the partial image D2 is highlighted with a small frame S (see S55). In response to an instruction from the user, the image processing unit 116 further enlarges the image within the small frame S and displays it as an enlarged image L, which allows the user to confirm the bleeding point B easily. The user's instruction is given by, for example, clicking the image of the small frame S with a mouse or tapping it on the display screen.
 Not just one but a plurality of enlarged images L can be displayed at the same time. When there are a plurality of abnormal regions, the enlarged images L may be displayed in sequence in response to the user's instructions. All of the enlarged images L may also be arranged in order and displayed as thumbnails. In this case, it is desirable to arrange the enlarged images L on the basis of the type, severity, or likelihood of the abnormal findings.
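 If the enlarged images are shown as thumbnails, one assumed way to realize this ordering is a simple sort over the finding attributes, reusing the hypothetical AbnormalFinding fields sketched earlier:

```python
def order_for_thumbnails(findings):
    """Sort findings so the most severe, then the most certain, come first."""
    return sorted(findings, key=lambda f: (f.severity, f.likelihood), reverse=True)
```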
 The magnification or angle of view of the enlarged image L can be set arbitrarily; for example, it is set to a magnification of 5x and an angle of view of 30 degrees, which are easy for a doctor to view. The magnification or angle of view may be changed in response to the user's instructions, or a magnification or angle of view preset by a doctor may be used. The user can also manipulate the positions and sizes of the frames F1 to F4 to change the positions of the partial images D1 to D4 or the range of enlargement.
 In the GUI of FIG. 14, highlighting with icons IC1 to IC4 is applied. The highlighting differs according to the type of the abnormal finding. In FIG. 14, icons IC1 to IC4 with mutually different appearances are used for four types of region: a region the doctor judged to be abnormal and marked on the display, an abnormal region output by the AI, a region judged to be abnormal by both the doctor and the AI, and a region the doctor judged not to be abnormal and instructed to have its display changed on the GUI. In this way, the highlighting method is differentiated according to the type of the abnormal finding. The icons IC1 to IC4 are made to look different from one another by, for example, varying their line types, shapes, and colors.
 When there are a plurality of abnormal regions, the enlarged images can be displayed in order of the severity of the abnormal findings. The display order of the partial images D1 to D4 or the display mode of the enlarged image L is also changed according to the severity.
 The mode of highlighting is likewise changed according to the severity. For example, highlighting in a color with higher brightness or saturation is used for abnormal regions of higher severity.
 The emphasis shown in FIGS. 10 to 14 does not depend on the image division method or the GUI display mode, and any type can be selected. A plurality of emphasis methods can also be combined. For example, in a GUI display in which the partial image D2 is enlarged as in FIG. 11, the small frame S and the enlarged image L may additionally be displayed superimposed on the partial image D2.
 The image processing unit 116 can also cooperate with the ophthalmologic apparatus 30 to perform OCT imaging with the OCT system and acquire a tomographic image (S61). An instruction for OCT imaging can also be accepted via the GUI in the display processing (S59); for example, the user can designate the abnormal region to be imaged with OCT by an operation such as a mouse click.
 When instructed by the user, the image processing unit 116 identifies the fundus position where the abnormal region is located and instructs the ophthalmologic apparatus 30 via the network 5 to perform imaging. Based on this instruction, the ophthalmologic apparatus 30 uses the OCT system to perform OCT imaging of the abnormal region. The tomographic image of the retina acquired by the OCT imaging is acquired by the server 10 and displayed on the output device 105 of the terminal 20; it may be displayed on the GUI together with the wide-angle fundus image P or the partial images D1 to D4.
 OCT imaging may also be performed without a user instruction. For example, when there is an abnormal finding, the image processing unit 116 identifies the fundus position where the abnormal region is located and instructs the ophthalmologic apparatus 30 via the network 5 to perform imaging. Based on this instruction, the ophthalmologic apparatus 30 uses the OCT system to perform OCT imaging of the abnormal region.
 OCT imaging may also be performed in accordance with the patient's information. For example, when a diagnosis such as macular degeneration has been obtained in the electronic medical record stored in the database 114, the server 10 instructs the ophthalmologic apparatus 30 to perform OCT imaging of the vicinity of the macula M. Based on this instruction, the ophthalmologic apparatus 30 performs OCT imaging of the abnormal region.
 (Additional processing)
 The additional processing will be described with reference to the flow of FIG. 9. The user can check the abnormal finding and its region in the display processing (S59) and then add an annotation to the estimation result and save it (S71). The saved annotation is stored in the database 114 in association with the wide-angle fundus image P and the estimation result.
 The saved annotation and wide-angle fundus image P are used for retraining the detection unit 118 (S73). For example, by learning the wide-angle fundus image P together with an annotation indicating the type of the abnormal finding and the correct abnormal region, the detection unit 118 can improve its ability to estimate abnormal findings.
 <Second embodiment>
 In the first embodiment described above, the detection unit 118 is a trained model having a CNN. As an embodiment different from the first embodiment, an image processing algorithm may be used for estimating an abnormal finding instead of a trained model produced by machine learning. This is described below as a second embodiment.
 FIG. 15 shows the software configuration of the server 10 according to the second embodiment. The second embodiment includes a detection unit 119 configured by an image processing algorithm.
 In the second embodiment, the configurations of the server 10, the terminal 20, and the ophthalmologic apparatus 30 other than the detection unit 119 are the same as in the first embodiment. Components identical to those of the first embodiment are given the same reference numerals as in the first embodiment, and their descriptions are omitted.
 The estimation processing (S3) by the detection unit 119 in the second embodiment is described below. In this example, an algorithm for estimating the presence or absence of an abnormal finding such as bleeding in the wide-angle fundus image P is described.
 In step S35, the detection unit 119 extracts a blood region. Here, the blood region is a region of the wide-angle fundus image P that includes the blood vessel region showing blood vessels and the bleeding region formed by bleeding from blood vessels. As a result of the processing in step S35, an image in which the blood region has been extracted from the wide-angle fundus image P is obtained.
 In step S36, the detection unit 119 extracts only the blood vessel region.
 A general image analysis technique is applied to the region extraction in steps S35 and S36. For example, the wide-angle fundus image P is represented in, say, 256 gradations and converted into a binarized image by comparing the gradation value of each pixel with a threshold.
 When binarizing, the threshold is set in step S35 so as to distinguish the blood region from the other regions, and in step S36 so as to distinguish the blood vessel region from the other regions. Various methods, such as the mode method, can be adopted for setting the thresholds.
 In step S37, the detection unit 119 removes noise in the binarized image, particularly around the blood vessel region. For the noise removal, for example, dilation and erosion are used. Through step S37, an image in which the blood vessel region has been extracted from the wide-angle fundus image P is obtained.
 In step S38, the detection unit 119 can obtain an image showing the bleeding region by taking the difference between the image showing the blood region and the image showing the blood vessel region.
 Based on the image obtained in this way, the detection unit 119 estimates the presence or absence of bleeding (S39). When it determines that there is bleeding, the detection unit 119 outputs the abnormal finding of bleeding and an image showing the bleeding region.
 The above shows an example of the estimation processing for bleeding, but the same processing is also executed for other abnormal findings such as neovascularization. In that case, the parameters used for region extraction, such as the thresholds for binarization, are set appropriately according to the type of abnormal finding.
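 As a hedged sketch of a pipeline of this kind, written in Python with OpenCV and NumPy, the steps S35 to S38 could be arranged as below. The fixed threshold values and the kernel size are assumptions standing in for the mode-method thresholds of the embodiment, and the function and parameter names are introduced here for illustration only.

```python
import cv2
import numpy as np

def estimate_hemorrhage(gray_fundus: np.ndarray,
                        blood_thresh: int = 70,
                        vessel_thresh: int = 50) -> np.ndarray:
    """Return a binary mask of suspected hemorrhage in a grayscale fundus image."""
    # S35: blood region (vessels + hemorrhage) appears dark -> inverse threshold
    _, blood = cv2.threshold(gray_fundus, blood_thresh, 255, cv2.THRESH_BINARY_INV)

    # S36: blood vessel region only, with a tighter threshold
    _, vessels = cv2.threshold(gray_fundus, vessel_thresh, 255, cv2.THRESH_BINARY_INV)

    # S37: remove noise around the vessel region (dilation followed by erosion)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    vessels = cv2.dilate(vessels, kernel, iterations=1)
    vessels = cv2.erode(vessels, kernel, iterations=1)

    # S38: difference between the blood region and the vessel region -> hemorrhage candidates
    bleeding = cv2.subtract(blood, vessels)
    return bleeding

# S39: the presence of bleeding could then be judged from the remaining area,
# e.g. bool(np.count_nonzero(mask) > area_threshold) for some assumed area_threshold.
```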
 <Effects>
 The image processing in the above embodiments is an image processing method including: processing (S1) for acquiring a wide-angle fundus image (P) obtained by imaging with an ophthalmologic apparatus; processing (S51) for extracting a plurality of partial images (D1-D4) from the wide-angle fundus image to which an abnormal finding has been given; selection processing (S51) for selecting, from the plurality of partial images, a first partial image in which the abnormal region (B) serving as the basis of the abnormal finding is shown; and display processing (S59) for displaying the first partial image.
 With the above configuration, it is possible to obtain a display in which the abnormal region can easily be viewed while abnormal findings over a wide range of the fundus are covered.
 In the display processing (S59), the partial region in which the abnormal finding is observed is displayed on the wide-angle fundus image P so that it can be distinguished. Of the partial images D1 to D4, the partial image with the abnormal finding can also be displayed enlarged.
 With such a configuration, the user can easily view the details of the abnormal finding and the abnormal region.
 In the display processing (S59), the abnormal region can also be highlighted, and this highlighting is changed according to the type of the abnormal finding.
 With the above configuration, the details of the abnormal finding and the abnormal region can easily be viewed. Even when there are multiple types of abnormal findings, the user can check them organized by type.
 The above embodiments include estimation processing (S3) for estimating an abnormal finding in the wide-angle fundus image P. The estimation processing includes processing (S31) for inputting the wide-angle fundus image P to the detection unit 118 or 119, which estimates abnormal findings, and processing for causing the detection unit 118 or 119 to estimate the abnormal finding.
 Such estimation processing makes it possible to assist the diagnoses and findings of doctors and other medical staff, and also makes it possible to prevent abnormalities from being overlooked.
 The above embodiments include processing (S71) for accepting the addition of an annotation regarding an abnormal finding to the wide-angle fundus image P, and processing (S73) for retraining the detection unit 118 using the annotation and the wide-angle fundus image P.
 Because the detection unit 118 is retrained using the annotated data, the accuracy of the estimation processing of the detection unit 118 can be improved; that is, the probability of outputting false positives, false negatives, and the like can be reduced.
 In the estimation processing (S3), the severity of the abnormal finding is estimated, and the display method for the first region is changed according to the estimated severity.
 The above embodiments further include processing (S61) for identifying the position of the abnormal region on the fundus, and processing (S61) for performing OCT imaging of the identified position with the OCT system and acquiring a tomographic image at that position.
 With the above configuration, when the user wants not only to check the wide-angle fundus image P but also to examine the abnormal finding further, a tomographic image can be acquired accurately and easily.
 <Modifications>
 In the above embodiments, the partial regions of the wide-angle fundus image P are set as rectangular regions, but the present invention is not limited to this. For example, the image may be divided by extending division lines radially or by dividing it into slit-shaped regions; any shape may be adopted for the division lines and the partial regions.
 The annotation may also include the shape of the partial region setting. For example, the partial region setting shape that the user judges to be optimal may be included in the annotation, and the detection unit 119 may be trained or retrained with it. In this case, the detection unit 119 outputs the optimal partial region shape and partial region position corresponding to the estimated abnormal finding. In the partial image extraction processing (S51), the partial images of the wide-angle fundus image P may be extracted according to the partial image extraction method output by the detection unit 119.
 In the display processing (S59) and elsewhere, the presentation has been described as one in which images are used and abnormal regions and the like are distinguished by color and shape, but the present invention is not limited to such a visual presentation. For example, a mode in which variations of the presentation are conveyed by sound may be used.
 In the above embodiments, the ophthalmologic apparatus 30 having an OCT system is used, but the present invention is not limited to this; an ophthalmologic apparatus without an OCT system, or an ophthalmologic apparatus using another system, may be used.
 In the above embodiments, a plurality of terminals are connected to one server 10, which provides the functions described above. The present invention does not limit the number of servers or terminals; for example, the above functions may be realized by a single apparatus alone, or the numbers of terminals and servers may be increased further. Each function also does not necessarily have to be realized by the server 10 or the like, and the functions may be shared among a plurality of apparatuses. In other words, the present invention does not limit the number of control units or apparatuses, or the allocation of functions among apparatuses.
1 Information processing system
10 Server
20 Terminal
30 Ophthalmologic apparatus

Claims (16)

  1.  An image processing method comprising:
     a process of acquiring a wide-angle fundus image obtained by imaging with an ophthalmologic apparatus;
     a process of extracting a plurality of partial images from the wide-angle fundus image to which an abnormal finding has been given;
     a selection process of selecting, from the plurality of partial images, a first partial image in which an abnormal region serving as a basis for the abnormal finding is shown; and
     a display process of displaying the first partial image.
  2.  The image processing method according to claim 1, wherein the display process is a process of displaying, simultaneously with the first partial image, a second partial image among the plurality of partial images in which the abnormal region is not shown.
  3.  The image processing method according to claim 2, wherein the display process is a process of displaying the first partial image and the second partial image in different display forms.
  4.  The image processing method according to claim 3, wherein the display process displays the first partial image in a larger size than the second partial image.
  5.  The image processing method according to claim 3 or 4, wherein the display process extracts the abnormal region from the first partial image and displays the abnormal region at a higher display magnification than the second partial image.
  6.  The image processing method according to any one of claims 1 to 5, wherein the display process is a process of displaying, simultaneously with the first partial image, a position display image that indicates the position of the first partial image in the wide-angle fundus image.
  7.  The image processing method according to any one of claims 1 to 6, further comprising a process of highlighting the abnormal region in the first partial image.
  8.  The image processing method according to claim 7, wherein the highlighting is changed according to a type of the abnormal finding.
  9.  The image processing method according to any one of claims 1 to 8, further comprising an estimation process of estimating the abnormal finding in the wide-angle fundus image.
  10.  The image processing method according to claim 9, wherein the estimation process includes:
     a process of inputting the wide-angle fundus image into a trained model that estimates abnormal findings; and
     a process of causing the trained model to estimate the abnormal finding.
  11.  The image processing method according to claim 10, further comprising:
     a process of accepting the addition of an annotation regarding an abnormal finding to the wide-angle fundus image; and
     a process of retraining the trained model using the annotation and the wide-angle fundus image.
  12.  The image processing method according to any one of claims 9 to 12, wherein the estimation process further includes a process of estimating a severity of the abnormal finding, and the display process includes a process of changing a display method of the first partial image according to the severity.
  13.  The image processing method according to any one of claims 1 to 12, further comprising:
     a process of identifying a position of the abnormal region in the wide-angle fundus image; and
     a process of performing imaging of the identified position with an OCT system and acquiring a tomographic image at that position.
  14.  A program that causes a computer to execute the image processing method according to any one of claims 1 to 13.
  15.  An image processing apparatus comprising a processing unit that executes the image processing method according to any one of claims 1 to 13.
  16.  An ophthalmic system comprising a processing unit that executes the image processing method according to any one of claims 1 to 13.
PCT/JP2021/001731 2021-01-19 2021-01-19 Image processing method, program, image processing device and ophthalmic system WO2022157838A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2021/001731 WO2022157838A1 (en) 2021-01-19 2021-01-19 Image processing method, program, image processing device and ophthalmic system
JP2022576259A JPWO2022157838A1 (en) 2021-01-19 2021-01-19

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/001731 WO2022157838A1 (en) 2021-01-19 2021-01-19 Image processing method, program, image processing device and ophthalmic system

Publications (1)

Publication Number Publication Date
WO2022157838A1 true WO2022157838A1 (en) 2022-07-28

Family

ID=82549624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/001731 WO2022157838A1 (en) 2021-01-19 2021-01-19 Image processing method, program, image processing device and ophthalmic system

Country Status (2)

Country Link
JP (1) JPWO2022157838A1 (en)
WO (1) WO2022157838A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014110884A (en) * 2012-10-30 2014-06-19 Canon Inc Image processor and image processing method
US8879813B1 (en) * 2013-10-22 2014-11-04 Eyenuk, Inc. Systems and methods for automated interest region detection in retinal images
JP2018020024A (en) * 2016-08-05 2018-02-08 キヤノン株式会社 Image processing device, image processing method, and program
JP2019202229A (en) * 2019-09-06 2019-11-28 キヤノン株式会社 Image processing device, image processing method, and program
JP2020058647A (en) * 2018-10-11 2020-04-16 株式会社ニコン Image processing method, image processing device and image processing program


Also Published As

Publication number Publication date
JPWO2022157838A1 (en) 2022-07-28

Similar Documents

Publication Publication Date Title
JP7229881B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, TRAINED MODEL, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
US20210104313A1 (en) Medical image processing apparatus, medical image processing method and computer-readable medium
JP2021154159A (en) Machine learning guided imaging system
Abràmoff et al. Retinal imaging and image analysis
US20210390696A1 (en) Medical image processing apparatus, medical image processing method and computer-readable storage medium
Niemeijer et al. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis
JP7341874B2 (en) Image processing device, image processing method, and program
JP7269413B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING SYSTEM, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
JP6996682B2 (en) Detection of lesions in eye images
US11284791B2 (en) Image processing method, program, and image processing device
US11922601B2 (en) Medical image processing apparatus, medical image processing method and computer-readable medium
CN112822972A (en) Image processing apparatus, image processing method, and program
US11941788B2 (en) Image processing method, program, opthalmic device, and choroidal blood vessel image generation method
JP7258354B2 (en) Method and system for detecting anomalies in living tissue
JP7270686B2 (en) Image processing system and image processing method
WO2020202680A1 (en) Information processing device and information processing method
WO2017020045A1 (en) System and methods for malarial retinopathy screening
JP2020166813A (en) Medical image processing device, medical image processing method, and program
JP2007097634A (en) Image analysis system and image analysis program
US20230320584A1 (en) Image processing method, image processing program, image processing device, image display device, and image display method
Majumdar et al. An automated graphical user interface based system for the extraction of retinal blood vessels using Kirsch's template
WO2022157838A1 (en) Image processing method, program, image processing device and ophthalmic system
WO2021075026A1 (en) Image processing method, image processing device, and image processing program
CN111954485A (en) Image processing method, program, image processing apparatus, and ophthalmologic system
Raga A smartphone based application for early detection of diabetic retinopathy using normal eye extraction

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21920952

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022576259

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 21920952

Country of ref document: EP

Kind code of ref document: A1