WO2023282339A1 - Image processing method, image processing program, image processing device, and ophthalmic device - Google Patents
- Publication number: WO2023282339A1 (PCT/JP2022/027008)
- Authority: WIPO (PCT)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
Description
- The technology of the present disclosure relates to an image processing method, a program, an image processing apparatus, and an ophthalmic apparatus.
- US Patent Application Publication No. 2019/0059723 discloses a tracking method for moving an optical system in accordance with the movement of an eye to be examined. There has long been a demand for capturing fundus images without blur.
- An image processing method is image processing performed by a processor, comprising: a step of acquiring a first fundus image of an eye to be examined; a step of acquiring a position, set using the first fundus image, at which a tomographic image of the fundus of the eye is to be acquired; a step of acquiring a second fundus image of the eye; a step of determining whether or not the acquisition position is included in a predetermined range of the first fundus image; a step of, when the acquisition position is included in the predetermined range, calculating a first movement amount of the eye using a first registration process that aligns the first fundus image and the second fundus image; and a step of, when the acquisition position is outside the predetermined range, calculating a second movement amount of the eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image.
- An image processing device is an image processing device comprising a processor, wherein the processor: acquires a first fundus image of an eye to be examined; acquires a position, set using the first fundus image, at which a tomographic image of the fundus of the eye is to be acquired; acquires a second fundus image of the eye; determines whether or not the acquisition position is included in a predetermined range of the first fundus image; calculates, when the acquisition position is included in the predetermined range, a first movement amount of the eye using a first registration process that aligns the first fundus image and the second fundus image; and calculates, when the acquisition position is outside the predetermined range, a second movement amount of the eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image.
- An image processing program causes a computer to execute: a step of acquiring a first fundus image of an eye to be examined; a step of acquiring a position, set using the first fundus image, at which a tomographic image of the fundus of the eye is to be acquired; a step of acquiring a second fundus image of the eye; a step of determining whether or not the acquisition position is included in a predetermined range of the first fundus image; a step of, when the acquisition position is included in the predetermined range, calculating a first movement amount of the eye using a first registration process that aligns the first fundus image and the second fundus image; and a step of, when the acquisition position is outside the predetermined range, calculating a second movement amount of the eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image.
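The disclosure above does not pin the two registration processes down to specific algorithms. As a hedged illustration only, a translation-only alignment between the first and second fundus images can be computed by FFT phase correlation; the function name and the framing of phase correlation as one plausible registration process are assumptions, not taken from the disclosure.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) translation that, applied to
    `moving`, best aligns it with `ref` (FFT phase correlation).
    Illustrative only: the disclosure does not name its algorithms."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moving)
    cross = f_ref * np.conj(f_mov)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                        # wrap to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Simulated eye movement: the second image is a shifted copy of the first.
rng = np.random.default_rng(0)
first = rng.random((64, 64))
second = np.roll(first, (5, -3), axis=(0, 1))
```

For the circular shift simulated above, `phase_correlation_shift(first, second)` returns `(-5, 3)`, the shift that maps the second image back onto the first; for a purely translational model this shift can serve directly as the eye movement amount.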
- FIG. 1 is a block diagram of an ophthalmic system 100;
- FIG. 1 is a schematic configuration diagram showing the overall configuration of an ophthalmologic apparatus 110;
- FIG. 3 is a functional block diagram of the CPU 16A of the control device 16 of the ophthalmologic apparatus 110.
- FIG. 4 is a flow chart showing a program executed by a CPU 16A of the ophthalmologic apparatus 110;
- FIG. 5 is a flowchart of a subroutine of eye tracking processing in step 306 of FIG. 4;
- FIG. 2 is a diagram showing a fundus central region and a fundus peripheral region in an eyeball;
- FIG. 4 shows a UWF-SLO fundus image 400G.
- FIG. 4 shows a UWF-SLO fundus image 400G superimposed with a position 402 for acquiring OCT data.
- FIG. 4 is a diagram showing a UWF-SLO fundus image 400G on which a position 402 for acquiring OCT data and a rectangular region 400 for acquiring an SLO fundus image are superimposed.
- FIG. 5 shows a screen 500 of the display of the viewer 150.
- An ophthalmic system 100 includes an ophthalmic device 110, an eye axial length measuring device 120, a management server device (hereinafter referred to as the "server") 140, and an image display device (hereinafter referred to as the "viewer") 150.
- the ophthalmologic device 110 acquires a fundus image.
- The axial length measuring device 120 measures the axial length of the subject's eye.
- the server 140 stores the fundus image obtained by photographing the fundus of the subject with the ophthalmologic apparatus 110 in association with the ID of the subject.
- the viewer 150 displays medical information such as fundus images acquired from the server 140 .
- the ophthalmologic apparatus 110 is an example of the "image processing apparatus" of the technology of the present disclosure.
- Network 130 is any network such as a LAN (local area network), a WAN (wide area network), the Internet, or a wide-area Ethernet network.
- Other ophthalmic equipment (examination equipment for visual field measurement, intraocular pressure measurement, and the like) and a diagnosis support device that performs image analysis using artificial intelligence may be connected via the network 130 to the ophthalmic device 110, the eye axial length measuring device 120, the server 140, and the viewer 150.
- SLO: scanning laser ophthalmoscope
- OCT: optical coherence tomography
- In the following description, the horizontal direction is the "X direction", and the direction perpendicular to the horizontal plane is the "Y direction".
- the ophthalmologic device 110 includes an imaging device 14 and a control device 16 .
- the imaging device 14 includes an SLO unit 18, an OCT unit 20, and an imaging optical system 19, and acquires a fundus image of the eye 12 to be examined.
- the two-dimensional fundus image acquired by the SLO unit 18 is hereinafter referred to as an SLO fundus image.
- a tomographic image of the retina, a front image (en-face image), and the like created based on the OCT data acquired by the OCT unit 20 are referred to as OCT images.
- The control device 16 comprises a computer having a CPU (Central Processing Unit) 16A, a RAM (Random Access Memory) 16B, a ROM (Read-Only Memory) 16C, and an input/output (I/O) port 16D.
- the ROM 16C stores an image processing program, which will be described later.
- the control device 16 may further include an external storage device and store the image processing program in the external storage device.
- the image processing program is an example of the "program” of the technology of the present disclosure.
- the ROM 16C (or external storage device) is an example of the “memory” and “computer-readable storage medium” of the technology of the present disclosure.
- the CPU 16A is an example of the “processor” of the technology of the present disclosure.
- the control device 16 is an example of the "computer program product” of the technology of the present disclosure.
- the control device 16 has an input/display device 16E connected to the CPU 16A via an I/O port 16D.
- The input/display device 16E has a graphical user interface that displays an image of the subject's eye 12 and receives various instructions from the user. The graphical user interface includes, for example, a touch panel display.
- the control device 16 has a communication interface (I/F) 16F connected to the I/O port 16D.
- the ophthalmologic apparatus 110 is connected to an axial length measuring instrument 120 , a server 140 and a viewer 150 via a communication interface (I/F) 16F and a network 130 .
- the control device 16 of the ophthalmic device 110 includes the input/display device 16E, but the technology of the present disclosure is not limited to this.
- the controller 16 of the ophthalmic device 110 may not have the input/display device 16E, but may have a separate input/display device physically separate from the ophthalmic device 110.
- the display device comprises an image processor unit operating under the control of the CPU 16A of the control device 16.
- the image processor unit may display an SLO fundus image, an OCT image, etc., based on the image signal output by the CPU 16A.
- the imaging device 14 operates under the control of the CPU 16A of the control device 16.
- the imaging device 14 includes an SLO unit 18 , an imaging optical system 19 and an OCT unit 20 .
- the imaging optical system 19 includes a first optical scanner 22 , a second optical scanner 24 and a wide-angle optical system 30 .
- the first optical scanner 22 two-dimensionally scans the light emitted from the SLO unit 18 in the X direction and the Y direction.
- the second optical scanner 24 two-dimensionally scans the light emitted from the OCT unit 20 in the X direction and the Y direction.
- The first optical scanner 22 and the second optical scanner 24 may be any optical elements capable of deflecting light beams, such as polygon mirrors or galvanometer mirrors, or a combination thereof.
- the wide-angle optical system 30 includes an objective optical system (not shown in FIG. 2) having a common optical system 28, and a synthesizing section 26 that synthesizes the light from the SLO unit 18 and the light from the OCT unit 20.
- The objective optical system of the common optical system 28 may be a reflective optical system using a concave mirror such as an elliptical mirror, a refractive optical system using a wide-angle lens, or a catadioptric system combining concave mirrors and lenses.
- By using a wide-angle optical system with an elliptical mirror or a wide-angle lens, it is possible to image not only the central part of the fundus, where the optic disc and macula are located, but also the equatorial part of the eyeball and the peripheral part of the fundus, where the vortex veins are located.
- the wide-angle optical system 30 realizes observation in a wide field of view (FOV: Field of View) 12A at the fundus.
- The FOV 12A indicates the range that can be imaged by the imaging device 14. The FOV 12A can be expressed as a viewing angle.
- In this embodiment, the viewing angle can be defined by an internal illumination angle and an external illumination angle.
- The external illumination angle is the illumination angle of the light beam directed from the ophthalmic apparatus 110 to the subject's eye 12, defined with the pupil 27 as the reference.
- the internal illumination angle is an illumination angle defined with the center O of the eyeball as a reference for the illumination angle of the luminous flux that illuminates the fundus.
- The external illumination angle and the internal illumination angle correspond to each other. For example, an external illumination angle of 120 degrees corresponds to an internal illumination angle of approximately 160 degrees. In this embodiment, the internal illumination angle is 200 degrees.
- An SLO fundus image obtained by photographing at an internal illumination angle of 160 degrees or more is referred to as a UWF-SLO fundus image.
- UWF is an abbreviation for UltraWide Field.
- The SLO system is implemented by the control device 16, the SLO unit 18, and the imaging optical system 19 shown in FIG. 2. Since the SLO system includes the wide-angle optical system 30, it enables fundus imaging with a wide FOV 12A.
- The SLO unit 18 includes a plurality of light sources, for example a B light (blue light) source 40, a G light (green light) source 42, an R light (red light) source 44, and an IR light (infrared, e.g. near-infrared) source 46, and optical systems 48, 50, 52, 54, and 56 that reflect or transmit the light from the light sources 40, 42, 44, and 46 to guide it into a single optical path.
- Optical systems 48, 50, 56 are mirrors and optical systems 52, 54 are beam splitters.
- The B light is reflected by the optical system 48, transmitted through the optical system 50, and reflected by the optical system 54; the G light is reflected by the optical systems 50 and 54; the R light is transmitted through the optical systems 52 and 54; and the IR light is reflected by the optical systems 56 and 52. Each is thus guided into a single optical path.
- The SLO unit 18 is configured to switch among light sources, or combinations of light sources, that emit laser light of different wavelengths, for example between a mode that emits G light, R light, and B light and a mode that emits only infrared light.
- Although the example in FIG. 2 includes four light sources (the B light source 40, the G light source 42, the R light source 44, and the IR light source 46), the SLO unit 18 may further include a white light source and emit light in various modes, such as a mode that emits only white light.
- the light incident on the imaging optical system 19 from the SLO unit 18 is scanned in the X direction and the Y direction by the first optical scanner 22 .
- the scanning light passes through the wide-angle optical system 30 and the pupil 27 and irradiates the posterior segment of the eye 12 to be examined.
- Reflected light reflected by the fundus enters the SLO unit 18 via the wide-angle optical system 30 and the first optical scanner 22 .
- The SLO unit 18 includes a beam splitter 64 that reflects B light and transmits light other than B light from the posterior segment (for example, the fundus) of the eye 12, and a beam splitter 58 that, of the light transmitted through the beam splitter 64, reflects G light and transmits light other than G light.
- the SLO unit 18 has a beam splitter 60 that reflects the R light and transmits other than the R light out of the light transmitted through the beam splitter 58 .
- the SLO unit 18 has a beam splitter 62 that reflects IR light out of the light transmitted through the beam splitter 60 .
- the SLO unit 18 has a plurality of photodetection elements corresponding to a plurality of light sources.
- the SLO unit 18 includes a B light detection element 70 that detects B light reflected by the beam splitter 64 and a G light detection element 72 that detects G light reflected by the beam splitter 58 .
- the SLO unit 18 includes an R photodetector element 74 that detects R light reflected by the beam splitter 60 and an IR photodetector element 76 that detects IR light reflected by the beam splitter 62 .
- Light reflected by the fundus is, in the case of B light, reflected by the beam splitter 64 and detected by the B light detection element 70.
- G light is transmitted through the beam splitter 64 , reflected by the beam splitter 58 , and received by the G light detection element 72 .
- In the case of R light, the incident light passes through the beam splitters 64 and 58, is reflected by the beam splitter 60, and is received by the R light detection element 74.
- In the case of IR light, the incident light passes through the beam splitters 64, 58, and 60, is reflected by the beam splitter 62, and is received by the IR light detection element 76.
- the CPU 16A uses the signals detected by the B photodetector 70, G photodetector 72, R photodetector 74, and IR photodetector 76 to generate a UWF-SLO fundus image.
- A UWF-SLO fundus image (also referred to as a UWF fundus image or, as described later, an original fundus image) includes a UWF-SLO fundus image obtained by photographing the fundus with G light (G-color fundus image) and a UWF-SLO fundus image obtained by photographing the fundus with R light (R-color fundus image).
- The UWF-SLO fundus image further includes a UWF-SLO fundus image obtained by photographing the fundus with B light (B-color fundus image) and a UWF-SLO fundus image obtained by photographing the fundus with IR light (IR fundus image).
- The control device 16 controls the light sources 40, 42, and 44 to emit light simultaneously, so that a G-color fundus image, an R-color fundus image, and a B-color fundus image whose positions correspond to one another are obtained.
- An RGB color fundus image is obtained from the G color fundus image, the R color fundus image, and the B color fundus image.
- Similarly, the control device 16 controls the light sources 42 and 44 to emit light simultaneously, and the fundus of the subject's eye 12 is photographed with the G light and the R light at the same time, so that a G-color fundus image and an R-color fundus image whose positions correspond to each other are obtained.
- An RG color fundus image is obtained from the G color fundus image and the R color fundus image.
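The composition of the color fundus images from the position-matched channel images can be pictured as simple channel stacking; treating the unused blue channel of the RG image as zeros is an assumption made for illustration only.

```python
import numpy as np

def compose_rgb(r_img, g_img, b_img):
    """Stack position-matched single-channel fundus images into one
    RGB color fundus image (channel-last layout)."""
    return np.stack([r_img, g_img, b_img], axis=-1)

def compose_rg(r_img, g_img):
    """RG color fundus image; filling the unused blue channel with
    zeros is an illustrative assumption, not from the disclosure."""
    return np.stack([r_img, g_img, np.zeros_like(r_img)], axis=-1)

h, w = 128, 160                      # arbitrary illustrative image size
r = np.zeros((h, w))
g = np.zeros((h, w))
b = np.zeros((h, w))
rgb = compose_rgb(r, g, b)           # shape (128, 160, 3)
rg = compose_rg(r, g)                # shape (128, 160, 3), blue empty
```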
- UWF-SLO fundus images include B-color fundus images, G-color fundus images, R-color fundus images, IR fundus images, RGB color fundus images, and RG color fundus images.
- Each image data of the UWF-SLO fundus image is transmitted from the ophthalmologic apparatus 110 to the server 140 via the communication interface (I/F) 16F together with the subject information input via the input/display device 16E.
- Each image data of the UWF-SLO fundus image and the information of the subject are stored in the storage device 254 in association with each other.
- the information of the subject includes, for example, subject ID, name, age, visual acuity, distinction between right eye/left eye, and the like.
- the subject information is entered by the operator through the input/display device 16E.
- The OCT system is implemented by the control device 16, the OCT unit 20, and the imaging optical system 19 shown in FIG. 2. Since the OCT system includes the wide-angle optical system 30, it enables fundus imaging with a wide FOV 12A, as in the SLO fundus imaging described above.
- the OCT unit 20 includes a light source 20A, a sensor (detection element) 20B, a first optical coupler 20C, a reference optical system 20D, a collimating lens 20E, and a second optical coupler 20F.
- the light emitted from the light source 20A is split by the first optical coupler 20C.
- One of the split beams is collimated by the collimating lens 20E and then enters the imaging optical system 19 as measurement light.
- the measurement light is scanned in the X and Y directions by the second optical scanner 24 .
- the scanning light passes through the wide-angle optical system 30 and the pupil 27 and illuminates the fundus.
- The measurement light reflected by the fundus enters the OCT unit 20 via the wide-angle optical system 30 and the second optical scanner 24, passes through the collimating lens 20E and the first optical coupler 20C, and is incident on the second optical coupler 20F.
- The other beam emitted from the light source 20A and split by the first optical coupler 20C enters the reference optical system 20D as reference light, passes through the reference optical system 20D, and is incident on the second optical coupler 20F. These beams interfere at the second optical coupler 20F, and the interference light is detected by the sensor 20B.
- the CPU 16A performs signal processing such as Fourier transform on the detection signal detected by the sensor 20B to generate OCT data.
- the CPU 16A generates OCT images such as tomographic images and en-face images based on the OCT data.
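For a swept-source system like the one described, each A-scan is recovered by Fourier-transforming the spectral interference signal detected by the sensor: a reflector at a given depth produces a fringe frequency, which the transform turns into a peak at the corresponding depth. The synthetic interferogram below is a sketch under simplifying assumptions (single reflector, linear-in-wavenumber sampling, no background subtraction or dispersion compensation).

```python
import numpy as np

n_samples = 1024                 # spectral samples per sweep (arbitrary)
depth_bin = 100                  # simulated reflector depth, in FFT bins

# A single reflector yields a cosine fringe across the swept wavenumber
# samples; the fringe frequency encodes the reflector depth.
k = np.arange(n_samples)
interferogram = np.cos(2 * np.pi * depth_bin * k / n_samples)

# The Fourier transform converts the fringe into a depth profile
# (one A-scan); only the first half carries unique depth information.
a_scan = np.abs(np.fft.fft(interferogram))[: n_samples // 2]
peak = int(np.argmax(a_scan[1:]) + 1)    # skip the DC bin
```

Here `peak` recovers the simulated depth bin (100); repeating this for each scan position along a line yields the B-scan tomographic image.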
- the OCT system can acquire OCT data of the imaging region realized by the wide-angle optical system 30.
- the OCT data, tomographic images, and en-face images generated by the CPU 16A are transmitted from the ophthalmologic apparatus 110 to the server 140 via the communication interface (I/F) 16F together with information on the subject.
- Various OCT images such as OCT data, tomographic images and en-face images are associated with subject information and stored in the storage device 254 .
- The light source 20A is exemplified as a wavelength-swept SS-OCT (Swept-Source OCT) light source, but a light source for SD-OCT (Spectral-Domain OCT), TD-OCT (Time-Domain OCT), or the like may also be used.
- The axial length measuring device 120 measures the axial length of the subject's eye 12, i.e. its length in the eye-axis direction.
- the axial length measuring device 120 transmits the measured axial length to the server 140 .
- the server 140 stores the subject's eye axial length in association with the subject ID.
- the control program for the ophthalmic equipment has an imaging control function, a display control function, an image processing function, and a processing function.
- When the CPU 16A executes the control program for the ophthalmic equipment having these functions, the CPU 16A functions as an imaging control unit 202, a display control unit 204, an image processing unit 206, and a processing unit 208, as shown in FIG. 3.
- the CPU 16A of the control device 16 of the ophthalmologic apparatus 110 executes the image processing program of the ophthalmologic apparatus, thereby realizing the control of the ophthalmologic apparatus shown in the flowchart of FIG.
- the processing shown in the flowchart of FIG. 4 is an example of the "image processing method" of the technology of the present disclosure.
- the operator of the ophthalmologic apparatus 110 has the subject place his or her chin on a support portion (not shown) of the ophthalmologic apparatus 110 and adjusts the position of the subject's eye 12 .
- the display control unit 204 of the ophthalmologic apparatus 110 displays a menu screen for inputting subject information and mode selection on the screen of the input/display device 16E.
- Modes include an SLO mode for acquiring an SLO fundus image and an OCT mode for acquiring an OCT fundus image.
- The imaging control unit 202 controls the SLO unit 18 and the imaging optical system 19 to acquire a first fundus image of the fundus of the subject's eye 12. Specifically, it causes the B light source 40, the G light source 42, and the R light source 44 to emit light, acquiring UWF-SLO fundus images at three different wavelengths.
- the UWF-SLO fundus image includes a G-color fundus image, an R-color fundus image, a B-color fundus image, and an RGB color fundus image.
- a UWF-SLO fundus image is an example of the "first fundus image" of the technology of the present disclosure.
- the display control unit 204 displays the UWF-SLO fundus image 400G on the display of the input/display device 16E.
- FIG. 7A shows a UWF-SLO fundus image 400G displayed on the display.
- The UWF-SLO fundus image 400G corresponds to an image of the area that can be scanned by the SLO unit 18 and, as shown in FIG. 7A, contains 400gg.
- The user uses the displayed UWF-SLO fundus image 400G to set a region for acquiring OCT data (a position for acquiring a tomographic image) using a touch panel or an input device (not shown).
- FIG. 7B shows a case where a position 402 for acquiring a tomographic image is set by a straight line in the X direction using the UWF-SLO fundus image 400G.
- the position 402 for acquiring a tomographic image is represented by an arrow.
- The position 402 for acquiring a tomographic image is not limited to a straight line in the X direction as shown in FIG. 7B; a curve or the like connecting two points, or a surface such as a circular area or a rectangular area, may be used.
- Scanning one point of the fundus in the depth (optical-axis) direction is referred to as an "A scan", and the OCT data it yields is referred to as "A-scan data".
- When the position for acquiring the tomographic image is a line, the A scan is performed a plurality of times while moving along the line (this is referred to as a "B scan"), and the resulting OCT data is referred to as "B-scan data".
- When the position for acquiring the tomographic image is a plane, the B scan is repeated while moving along the plane (this is referred to as a "C scan"), and the resulting OCT data is referred to as "C-scan data".
- Three-dimensional OCT data is generated from the C-scan data, and a two-dimensional en-face image or the like is generated based on the three-dimensional OCT data.
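The relationship between A-scan, B-scan, and C-scan data can be pictured as arrays of increasing dimensionality; the sizes below are arbitrary illustrative values, not parameters of the apparatus.

```python
import numpy as np

depth, n_ascans, n_bscans = 512, 256, 64        # illustrative sizes only

a_scan = np.zeros(depth)                        # one point: a depth profile
b_scan = np.zeros((n_ascans, depth))            # A scans along a line
c_scan = np.zeros((n_bscans, n_ascans, depth))  # B scans across a plane

# An en-face image is a two-dimensional view derived from the 3-D OCT
# data, e.g. by projecting (here: averaging) over the depth axis.
en_face = c_scan.mean(axis=-1)                  # shape (n_bscans, n_ascans)
```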
- the processing unit 208 acquires position data (coordinate data, etc.) of a position 402 for acquiring a tomographic image, which is set using the UWF-SLO fundus image 400G.
- the data indicating the position where the tomographic image is acquired is not limited to coordinate data, and may be a number or the like that roughly indicates the position of the fundus image.
- In step 306, eye tracking processing is performed. The eye tracking processing will be described with reference to FIG. 5.
- eye tracking processing is executed immediately after the position data of the position 402 for acquiring the tomographic image, which is set using the UWF-SLO fundus image 400G in step 304, is acquired.
- The operator confirms that the subject's eye 12 and the ophthalmic device 110 are properly aligned. After confirming that the alignment is appropriate, the operator instructs the start of OCT imaging by operating a button or the like on the display of the input/display device 16E. When the start of OCT imaging is instructed in this way, the eye tracking processing is executed.
- The image processing unit 206 uses the R-color fundus image and the G-color fundus image of the UWF-SLO fundus image to extract feature points of the retinal blood vessels and the choroidal blood vessels. Specifically, the image processing unit 206 first extracts the retinal blood vessels and the choroidal blood vessels, and then extracts feature points of each.
- the structure of the eye is such that the vitreous body is covered by multiple layers with different structures.
- the multiple layers include, from innermost to outermost on the vitreous side, the retina, choroid, and sclera.
- the R light passes through the retina and reaches the choroid. Therefore, the R-color fundus image includes information on blood vessels existing in the retina (retinal vessels) and information on blood vessels existing in the choroid (choroidal vessels).
- the G light reaches only the retina. Therefore, the G-color fundus image contains only information on blood vessels existing in the retina (retinal blood vessels).
- The image processing unit 206 extracts retinal blood vessels from the G-color fundus image by applying image processing such as black hat filtering to the G-color fundus image. As a result, a retinal blood vessel image in which only the retinal blood vessel pixels are extracted from the G-color fundus image is obtained.
- Black hat filtering refers to the process of taking the difference between the image data of the original G-color fundus image and the image data obtained by applying closing processing to the original image, i.e. N dilation operations followed by N erosion operations (N being an integer of 1 or more).
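The black hat filtering described above (the difference between the closed image and the original) can be sketched with plain 3x3 grayscale morphology in numpy; on a bright fundus background, thin dark vessels come out bright in the result. The helper names are illustrative.

```python
import numpy as np

def _neighborhoods(img):
    """Stack of the 9 shifted copies forming each pixel's 3x3 window."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)])

def dilate(img):
    return _neighborhoods(img).max(axis=0)

def erode(img):
    return _neighborhoods(img).min(axis=0)

def black_hat(img, n=1):
    """closing(img) - img, with closing = n dilations then n erosions."""
    closed = img.astype(float)
    for _ in range(n):
        closed = dilate(closed)
    for _ in range(n):
        closed = erode(closed)
    return closed - img

# A one-pixel-wide dark "vessel" on a bright background:
fundus = np.ones((9, 9))
fundus[4, :] = 0.0
vessels = black_hat(fundus)       # bright exactly where the vessel was
```

Closing fills the thin dark line from its bright surroundings, so the difference image is nonzero only on the vessel pixels.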
- The image processing unit 206 removes the retinal blood vessels from the R-color fundus image by inpainting processing or the like, using the retinal blood vessel image extracted from the G-color fundus image.
- The inpainting process sets the pixel value at a given position so that its difference from the average value of the surrounding pixels falls within a specific range (for example, 0). That is, the position information of the retinal blood vessels extracted from the G-color fundus image is used to fill the pixels corresponding to the retinal blood vessels in the R-color fundus image with values matching the surrounding pixels. As a result, a choroidal blood vessel image, in which the retinal blood vessels have been removed from the R-color fundus image, is obtained.
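The inpainting described above (setting vessel pixels so they differ little from the surrounding average) can be sketched as an iterative neighbour-mean fill; this simple diffusion-style fill is an illustrative stand-in, not the disclosure's exact procedure.

```python
import numpy as np

def inpaint_mean(img, mask, n_iter=50):
    """Fill pixels where mask is True (retinal-vessel positions) with
    the mean of their 3x3 neighbourhood, iterating so that values from
    the surrounding tissue diffuse into the masked region."""
    out = img.astype(float).copy()
    h, w = img.shape
    for _ in range(n_iter):
        p = np.pad(out, 1, mode="edge")
        neigh = np.stack([p[i:i + h, j:j + w]
                          for i in range(3) for j in range(3)])
        out[mask] = neigh.mean(axis=0)[mask]
    return out

# R-channel image with a constant "choroid" level and a dark vessel patch:
r_img = np.full((8, 8), 5.0)
r_img[3:5, 3:5] = 0.0
vessel_mask = r_img == 0.0
choroid_img = inpaint_mean(r_img, vessel_mask)
```

After the iterations, the masked patch converges to the surrounding value, so the difference from the neighbourhood average approaches 0 as described above.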
- The image processing unit 206 may perform CLAHE (Contrast Limited Adaptive Histogram Equalization) processing on the R-color fundus image from which the retinal blood vessels have been removed, as enhancement processing for emphasizing the choroidal blood vessels.
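The contrast-limiting idea behind CLAHE: clip the histogram at a limit, redistribute the excess, and equalize with the clipped cumulative distribution. For brevity the sketch below applies one such mapping globally, whereas real CLAHE computes it per tile and interpolates between tiles; the parameter values are illustrative.

```python
import numpy as np

def clipped_equalize(img, clip_frac=0.01, n_bins=256):
    """Histogram equalization with a clip limit (single global tile)."""
    flat = img.ravel()
    hist, _ = np.histogram(flat, bins=n_bins, range=(0.0, 255.0))
    limit = max(1, int(clip_frac * flat.size))
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess // n_bins  # redistribute excess
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-12) * 255.0
    idx = np.clip((flat / 255.0 * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return cdf[idx].reshape(img.shape)

# A low-contrast image whose values span only the 100-140 range:
low_contrast = np.tile(np.linspace(100.0, 140.0, 64), (64, 1))
enhanced = clipped_equalize(low_contrast)
```

The clipped mapping widens the narrow input range, while the clip limit caps how steep the mapping can become and so limits noise amplification.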
- The image processing unit 206 extracts branch points or confluence points of the retinal blood vessels from the retinal blood vessel image as first feature points of the retinal blood vessels. Similarly, it extracts branch points or confluence points of the choroidal blood vessels from the choroidal blood vessel image as first feature points of the choroidal blood vessels.
- the processing unit 208 stores the first feature points of the retinal vessels and the first feature points of the choroidal vessels in the RAM 16B.
- the imaging control unit 202 controls the SLO unit 18 and the imaging optical system 19 to obtain a rectangular SLO fundus image of the eye 12 to be examined.
- the rectangular SLO fundus image of the fundus of the subject's eye 12 is an example of the "second fundus image" of the technology of the present disclosure.
- FIG. 7C shows a UWF-SLO fundus image 400G on which a region 400 for acquiring a rectangular SLO fundus image and a position 402 for acquiring a tomographic image are superimposed.
- the area 400 for acquiring the rectangular SLO fundus image includes the entire area of the position 402 for acquiring the tomographic image.
- the technology of the present disclosure is not limited to the case where the area 400 for acquiring the rectangular SLO fundus image includes the entire area of the position 402 for acquiring the tomographic image.
- the area for acquiring the rectangular SLO fundus image may include only part of the position 402 for acquiring the tomographic image. In this way, the area for acquiring the rectangular SLO fundus image may include at least part of the position 402 for acquiring the tomographic image.
- the area for acquiring the rectangular SLO fundus image does not have to include the position 402 for acquiring the tomographic image. Therefore, the area for acquiring the rectangular SLO fundus image may be set independently of the position 402 for acquiring the tomographic image.
- When the region 400 for acquiring the rectangular SLO fundus image includes the entire region of the position 402 for acquiring the tomographic image, the probability that the search range used for the calculation is narrow is higher than when the region 400 does not include the position 402 at all or includes only a part of the position 402. Thereby, the eye tracking processing can be performed smoothly, that is, the processing time can be shortened.
- the area for acquiring the rectangular SLO fundus image includes at least part of the fundus area 400gg of the subject's eye.
- the size of the area for acquiring the rectangular SLO fundus image is smaller than the size of the UWF-SLO fundus image 400G. That is, the angle of view of the rectangular SLO fundus image is smaller than the angle of view of the UWF-SLO fundus image, and is set to, for example, 10 degrees to 50 degrees.
- the imaging control unit 202 acquires the position of the area 400 for acquiring the rectangular SLO fundus image set using the UWF-SLO fundus image 400G.
- the processing unit 208 stores and holds the position of the region for acquiring the rectangular SLO fundus image as the first position in the RAM 16B.
- the imaging control unit 202 controls the SLO unit 18 and the imaging optical system 19 based on the acquired position of the region 400 to acquire a rectangular SLO fundus image of the fundus of the subject's eye 12.
- the imaging control unit 202 causes the IR light source 46 to emit light so as to irradiate the fundus with IR light, thereby capturing an image of the fundus. This is because IR light is not perceived by the photoreceptor cells of the retina of the subject's eye, so the subject does not feel glare.
- In step 354, the image processing unit 206 extracts the feature points of the retinal blood vessels and of the choroidal blood vessels in the rectangular SLO fundus image, which is the second fundus image.
- the image processing unit 206 performs black hat processing on the rectangular SLO fundus image to specify the pixel positions of the retinal blood vessels. This yields the second feature points of the retinal vessels.
- the image processing unit 206 removes the retinal blood vessels from the rectangular SLO fundus image by performing inpainting processing on the pixel positions of the retinal blood vessels, and extracts the second feature points of the choroidal blood vessels from the resulting image.
- the processing unit 208 stores the second feature point of the choroidal blood vessel in the RAM 16B.
- In step 356, the image processing unit 206 determines whether the position 402 for acquiring the tomographic image is within a predetermined range, outside the predetermined range, or only partly included in the predetermined range.
- the predetermined range is, for example, the central region of the fundus.
- Specifically, the image processing unit 206 determines, based on the data indicating the position 402 for acquiring the tomographic image, whether the position 402 is included in the central region of the fundus, is in a peripheral region that does not include the central region, or straddles the central region and the peripheral region.
- the central region 900 of the fundus is a circular region with a predetermined radius centered at the point where the optical axis of the ophthalmologic apparatus 110 passes through the center of the eyeball and intersects the fundus.
- the region of the fundus image outside the central region 900 of the fundus is the peripheral region 902 of the fundus.
- the predetermined range is not limited to the central region 900 of the fundus.
- it may be a circular area with a predetermined radius centered on the macula, or a predetermined range where the blood vessel density of the retinal blood vessels is equal to or greater than a predetermined value.
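The inside/outside/straddling determination with respect to a circular central region can be sketched as follows. This is an illustrative sketch (the function name and the use of sample points on the scan line are assumptions); a real implementation would sample many points along the scan position rather than only its endpoints.

```python
def classify_scan_position(scan_points, center, radius):
    # Classify a tomographic scan position relative to a circular central
    # fundus region: entirely inside, entirely outside, or straddling.
    # scan_points are sample points (x, y) along the scan line.
    inside = [
        (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2
        for (x, y) in scan_points
    ]
    if all(inside):
        return "inside"
    if not any(inside):
        return "outside"
    return "straddling"
```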
- When the position 402 for acquiring a tomographic image is included in the central fundus region 900, and when the position 402 straddles the central fundus region 900 and the peripheral fundus region 902, the image processing proceeds to step 358. When the position 402 for acquiring a tomographic image is located in the peripheral fundus region 902, the image processing (specifically, the eye tracking processing) proceeds to step 360.
- In step 358, the image processing unit 206 executes the first registration process, whose processing content is optimized for the central part of the fundus so that the processing time is relatively short and image matching is facilitated.
- In step 360, the image processing unit 206 executes the second registration process, whose processing content is optimized for the peripheral region so that image matching is facilitated even though the processing time is relatively long.
- the registration process is a process of aligning a rectangular SLO fundus image and a UWF-SLO fundus image (in particular, an RGB color fundus image). Specifically, it is a process of specifying the position of the rectangular SLO fundus image on the RGB color fundus image by image matching.
- the image processing unit 206 extracts, as feature points, three second feature points of the retinal blood vessels extracted from the rectangular SLO fundus image.
- the image processing unit 206 searches for positions where the three second feature points match the first feature points of the retinal blood vessels extracted from the RGB color fundus image.
- the processing unit 208 stores and holds the position matched in the first registration process in the RAM 16B as the second position. Note that in the first registration process of step 358, the RGB color fundus image is not denoised.
- the image processing unit 206 first performs denoising processing on the RGB color fundus image. Then, the image processing unit 206 extracts three feature points from each of the second feature points of the retinal blood vessels and the second feature points of the choroidal blood vessels extracted from the rectangular SLO fundus image, for a total of six feature points. The image processing unit 206 searches for positions where the six feature points match the first feature points of the retinal blood vessels or the first feature points of the choroidal blood vessels extracted from the RGB color fundus image. The processing unit 208 stores and holds the position matched in the second registration process in the RAM 16B as the second position.
- In the first registration process, the number of feature points used for matching is smaller than the number used in the second registration process. This is because the density of blood vessels in the central fundus region 900 is higher than that in the peripheral fundus region 902, so an accurate image matching result can be obtained even with a small number of feature points, and using fewer feature points shortens the processing time.
- In the second registration process, the feature points of the choroidal blood vessels need to be used in order to obtain an accurate image matching result, and both the retinal blood vessels and the choroidal blood vessels are used in order to obtain an even more accurate image matching result.
- For example, the first registration process uses feature points of the retinal blood vessels only, whereas the second registration process uses feature points of both the retinal blood vessels and the choroidal blood vessels, for example (2, 4) or (2, 2) feature points (retinal, choroidal).
- the number of retinal blood vessel feature points and the number of choroidal blood vessel feature points in the second registration process are not limited to the same number. Either one may be more numerous than the other. For example, the number of feature points for choroidal vessels is greater than the number of feature points for retinal vessels.
- Further, the number of feature points of only the choroidal blood vessels may be made larger than the number of retinal blood vessel feature points used when performing the first registration process.
- the image processing unit 206 calculates the eye movement amount from the first position and the second position. Specifically, the image processing unit 206 calculates the magnitude and direction of the deviation between the first position and the second position. Then, the image processing unit 206 calculates the amount of movement of the subject's eye 12 from the calculated magnitude and direction of the displacement.
- the amount of motion is an amount having a magnitude and a direction of motion, that is, a vector amount. Time elapses between when the fundus of the subject's eye 12 is photographed in step 302 to obtain the UWF-SLO fundus image and when it is photographed in step 352 to obtain the rectangular SLO fundus image, so the same location of the subject's eye is not necessarily photographed.
- the image processing unit 206 calculates in what direction and by how much the subject's eye 12 is displaced at the start of OCT imaging or during OCT imaging, that is, calculates the amount of movement (a vector amount) of the subject's eye 12.
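The movement amount as a vector (magnitude and direction) derived from the first and second positions can be sketched as below; this is a minimal sketch assuming matched feature-point pairs, and `eye_movement` is an illustrative name, not the patent's implementation.

```python
def eye_movement(first_pts, second_pts):
    # Movement amount of the eye as a vector: the mean displacement between
    # matched feature points found at the first position (UWF-SLO image)
    # and at the second position (rectangular SLO image).
    n = len(first_pts)
    dy = sum(b[0] - a[0] for a, b in zip(first_pts, second_pts)) / n
    dx = sum(b[1] - a[1] for a, b in zip(first_pts, second_pts)) / n
    magnitude = (dy ** 2 + dx ** 2) ** 0.5
    return (dy, dx), magnitude
```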
- the deviation amount and deviation direction calculated in step 362 after step 358 is executed are an example of the "first deviation amount and first deviation direction".
- the deviation amount and deviation direction calculated in step 362 after step 360 is executed are an example of the "second deviation amount and second deviation direction".
- Since the subject's eye 12 is displaced in this way, if the fundus is scanned at the position for acquiring the tomographic image set based on the UWF-SLO fundus image, the scan deviates from the position originally intended to be acquired by the movement amount (the displacement amount and the displacement direction).
- the imaging control unit 202 controls the second optical scanner 24 so that a tomographic image of the subject's eye 12 after the movement can be acquired at the intended position. Specifically, the scanning range of the second optical scanner 24 is adjusted.
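The scanning-range adjustment amounts to shifting the scan coordinates by the calculated movement vector; a minimal sketch (the function name and coordinate representation are assumptions) is:

```python
def adjust_scan_range(scan_points, movement):
    # Shift every scan coordinate by the calculated eye movement vector so
    # that the originally set fundus location is scanned after the eye moves.
    dy, dx = movement
    return [(y + dy, x + dx) for (y, x) in scan_points]
```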
- the second optical scanner 24 is an example of the "scanning device" of the technology of the present disclosure.
- In step 308, the imaging control unit 202 controls the OCT unit 20 and the second optical scanner 24 whose scanning range has been adjusted, and scans the position of the subject's eye 12 for obtaining a tomographic image, thereby acquiring the tomographic image.
- the processing unit 208 outputs the acquired tomographic image, the RGB color fundus image, and the data of the position 402 where the tomographic image is acquired to the server 140 in correspondence with the subject's information.
- the image processing in FIG. 4 ends.
- the server 140 associates the tomographic image, the RGB color fundus image, and the data of the position 402 where the tomographic image is acquired with the subject ID and stores them in a storage device (not shown).
- the viewer 150 requests the server 140 to transmit data such as a tomographic image corresponding to the subject ID according to an instruction from a user such as an ophthalmologist.
- the server 140 transmits the tomographic image, the UWF-SLO fundus image, and the data of the position 402 for acquiring the tomographic image stored in association with the subject ID, together with the corresponding subject information and the like, to the viewer 150. The viewer 150 thereby displays each item of received data on a display (not shown).
- a screen 500 displayed on the display of the viewer 150 is shown in FIG.
- the screen 500 has a subject information display area 502 and an image display area 504 .
- the subject information display area 502 includes a subject ID display field 512, a subject name display field 514, an age display field 516, a visual acuity display field 518, a right/left eye display field 520, and an axial length display field 522.
- the display control unit 204 receives from the server 140 the subject ID, the subject's name, the subject's age, the visual acuity, the right/left eye information, and the subject's axial length data, and displays them in the corresponding display fields 512 to 522.
- the image display area 504 includes an RGB color fundus image display field 508 , a tomographic image display field 506 and a text data display field 510 .
- a UWF-SLO fundus image 400G superimposed with a position 402 for obtaining a tomographic image is displayed.
- a tomographic image is displayed in the tomographic image display field 506 .
- the text data display field 510 displays comments and the like at the time of medical examination.
- the first registration process is performed. Specifically, the deviation amount and deviation direction of the subject's eye 12 are calculated using a relatively small number of feature points of the retinal blood vessels in each of the UWF-SLO fundus image and the rectangular SLO fundus image. The deviation amount and deviation direction, and hence the movement amount, of the subject's eye 12 can therefore be calculated in a relatively short time.
- the scanning range of the second optical scanner 24 is adjusted to acquire a tomographic image. Therefore, in the above embodiment, a tomographic image can be acquired while following the movement of the eye.
- the second registration process is performed. Specifically, the deviation amount and deviation direction of the subject's eye 12 are calculated using a relatively large number of feature points of at least the choroidal blood vessels in each of the UWF-SLO fundus image and the rectangular SLO fundus image. The deviation amount and deviation direction, and hence the movement amount, of the subject's eye 12 can therefore be calculated accurately. The scanning range of the second optical scanner 24 is then adjusted based on the accurately calculated movement amount of the subject's eye 12 to acquire a tomographic image, so that in the above embodiment a tomographic image of the set region can be acquired accurately.
- the tomographic image is acquired after executing the eye tracking process.
- eye tracking processing may be performed while acquiring a tomographic image.
- steps 352 to 362 in FIG. 5 may be repeatedly executed while the tomographic image is being acquired.
- Each time steps 352 to 362 have been repeated a predetermined number of times, the average value of the movement amounts of the subject's eye 12 may be obtained, and the scanning range of the second optical scanner 24 may be adjusted while the tomographic image is being acquired, so as to obtain the tomographic image.
- a plurality of tomographic images of the same position of the subject's eye 12 are obtained.
- a noise-reduced tomographic image may be acquired by averaging these multiple tomographic images.
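The noise reduction by averaging repeated tomograms of the same position can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def average_tomograms(tomograms):
    # Pixel-wise arithmetic mean of repeated tomographic images of the same
    # position; uncorrelated noise is reduced while the structure remains.
    return np.mean(np.stack(tomograms, axis=0), axis=0)
```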
- the tomographic image is acquired after executing the eye tracking process.
- the technology of the present disclosure is not limited to this.
- a plurality of tomographic images are obtained without tracking the optical scanner.
- each component may exist either singly or two or more.
- image processing may be performed only by a hardware configuration such as FPGA (Field-Programmable Gate Array) or ASIC (Application Specific Integrated Circuit).
- the technology of the present disclosure includes the following technology, as it includes both cases in which image processing is realized by software configuration using a computer and cases in which it is not.
- an image processing device including: a first acquisition unit that acquires a first fundus image of a subject's eye; a second acquisition unit that acquires a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye; a third acquisition unit that acquires a second fundus image of the subject's eye; a determination unit that determines whether the acquired position is included in a predetermined range of the first fundus image; and a calculation unit that calculates a first displacement amount and displacement direction of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculates a second displacement amount and displacement direction of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range.
- an image processing method including: a step in which a first acquisition unit acquires a first fundus image of a subject's eye; a step in which a second acquisition unit acquires a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye; a step in which a third acquisition unit acquires a second fundus image of the subject's eye; a step in which a determination unit determines whether the acquired position is included in a predetermined range of the first fundus image; and a step in which a calculation unit calculates a first displacement amount and displacement direction of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculates a second displacement amount and displacement direction of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range.
- the imaging control unit 202 is an example of the "first acquisition unit” and the “third acquisition unit” of the technology of the present disclosure.
- the processing unit 208 is an example of the “second acquisition unit” of the technology of the present disclosure.
- the image processing unit 206 is an example of the “determination unit” and the “calculation unit” of the technology of the present disclosure.
- a computer program product for image processing, the computer program product comprising a computer-readable storage medium that is not itself a transitory signal, the computer-readable storage medium storing a program, the program causing a computer to execute image processing performed by a processor, the image processing including: acquiring a first fundus image of a subject's eye; acquiring a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye; acquiring a second fundus image of the subject's eye; determining whether the acquired position is included in a predetermined range of the first fundus image; and calculating a first displacement amount and displacement direction of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculating a second displacement amount and displacement direction of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range.
Abstract
Description
A step of acquiring a second fundus image of the subject's eye; a step of determining whether the acquired position is included in a predetermined range of the first fundus image; and a step of, when the acquired position is included in the predetermined range, calculating a first movement amount of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image, and, when the acquired position is outside the predetermined range, calculating a second movement amount of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image; image processing including these steps is executed.
The UWF-SLO fundus image is an example of the "first fundus image" of the technology of the present disclosure.
When the position for acquiring the tomographic image is a line, OCT data (referred to as "B-scan data") is obtained by performing an A-scan multiple times while moving the A-scan along the line (referred to as a "B-scan").
When the position for acquiring the tomographic image is a plane, OCT data (referred to as "C-scan data") is obtained by repeating the B-scan while moving it across the plane (referred to as a "C-scan"). Three-dimensional OCT data is generated from the C-scan data, and a two-dimensional en-face image or the like is generated based on the three-dimensional OCT data.
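The A-scan, B-scan, C-scan hierarchy described above can be sketched as follows, using simulated data; all names and dimensions here are illustrative assumptions, not part of the disclosed apparatus.

```python
import numpy as np

rng = np.random.default_rng(0)

def a_scan(depth=8):
    # One A-scan: a reflectance profile in the depth direction at a single
    # point on the fundus (simulated here with random values).
    return rng.random(depth)

def b_scan(n_a=16, depth=8):
    # B-scan: A-scans repeated while moving along a line -> 2-D tomogram.
    return np.stack([a_scan(depth) for _ in range(n_a)])

def c_scan(n_b=4, n_a=16, depth=8):
    # C-scan: B-scans repeated while moving across a plane -> 3-D OCT data.
    return np.stack([b_scan(n_a, depth) for _ in range(n_b)])

volume = c_scan()
en_face = volume.mean(axis=2)  # a simple en-face projection along depth
```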
Note that the rectangular SLO fundus image of the fundus of the subject's eye 12 is an example of the "second fundus image" of the technology of the present disclosure.
In a rectangular SLO fundus image captured with IR light, the retinal blood vessels appear as thin black vessels, and the choroidal blood vessels appear as thick white vessels. The image processing unit 206 therefore applies black hat processing to the rectangular SLO fundus image to specify the pixel positions of the retinal blood vessels, thereby obtaining the second feature points of the retinal blood vessels.
Note that in the first registration process of step 358, denoising processing is not performed on the RGB color fundus image.
Note that the deviation amount and deviation direction calculated in step 362 after step 358 has been executed are an example of the "first deviation amount and first deviation direction". The deviation amount and deviation direction calculated in step 362 after step 360 has been executed are an example of the "second deviation amount and second deviation direction".
The second optical scanner 24 is an example of the "scanning device" of the technology of the present disclosure.
In the above embodiment, the tomographic image is acquired after the eye tracking process is executed. The technology of the present disclosure is not limited to this. For example, a plurality of tomographic images may be acquired without making the optical scanner follow the eye. While the plurality of tomographic images are being acquired, steps 352 to 362 in FIG. 5 are executed repeatedly; tomographic images captured when the eye movement is equal to or greater than a predetermined value are then deleted, and averaging may be performed using the remaining tomographic images.
A first acquisition unit that acquires a first fundus image of a subject's eye;
a second acquisition unit that acquires a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye;
a third acquisition unit that acquires a second fundus image of the subject's eye;
a determination unit that determines whether the acquired position is included in a predetermined range of the first fundus image; and
a calculation unit that calculates a first deviation amount and deviation direction of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculates a second deviation amount and deviation direction of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range;
an image processing device including the above.
A step in which a first acquisition unit acquires a first fundus image of a subject's eye;
a step in which a second acquisition unit acquires a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye;
a step in which a third acquisition unit acquires a second fundus image of the subject's eye;
a step in which a determination unit determines whether the acquired position is included in a predetermined range of the first fundus image; and
a step in which a calculation unit calculates a first deviation amount and deviation direction of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculates a second deviation amount and deviation direction of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range;
an image processing method including the above steps.
(Third technology)
A computer program product for image processing, the computer program product comprising a computer-readable storage medium that is not itself a transitory signal, the computer-readable storage medium storing a program, the program causing a computer to execute image processing performed by a processor, the image processing including: a step of acquiring a first fundus image of a subject's eye; a step of acquiring a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye; a step of acquiring a second fundus image of the subject's eye; a step of determining whether the acquired position is included in a predetermined range of the first fundus image; and a step of calculating a first deviation amount and deviation direction of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculating a second deviation amount and deviation direction of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range.
Claims (13)
- Image processing performed by a processor, the image processing comprising:
a step of acquiring a first fundus image of a subject's eye;
a step of acquiring a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye;
a step of acquiring a second fundus image of the subject's eye;
a step of determining whether the acquired position is included in a predetermined range of the first fundus image; and
a step of calculating a first movement amount of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculating a second movement amount of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range;
an image processing method including the above steps.
- The image processing method according to claim 1, further comprising a step of controlling a scanning device for acquiring the tomographic image based on the first movement amount or the second movement amount.
- The image processing method according to claim 1 or claim 2, wherein the predetermined range is a central fundus region.
- The image processing method according to any one of claims 1 to 3, further comprising a step of extracting feature points of retinal blood vessels in each of the first fundus image and the second fundus image, wherein the first registration process aligns the first fundus image and the second fundus image using the feature points of the retinal blood vessels.
- The image processing method according to any one of claims 1 to 4, further comprising a step of extracting feature points of choroidal blood vessels in each of the first fundus image and the second fundus image, wherein the second registration process aligns the first fundus image and the second fundus image using the feature points of the choroidal blood vessels.
- The image processing method according to any one of claims 1 to 5, wherein the first movement amount consists of components of a first deviation amount and a first deviation direction, and the second movement amount consists of components of a second deviation amount and a second deviation direction.
- The image processing method according to any one of claims 1 to 6, wherein the first fundus image includes an image of the fundus itself of the subject's eye formed by guiding reflected light from the fundus itself, and in the step of acquiring the second fundus image, a position of a region set using the first fundus image so as to include at least part of the image of the fundus itself is acquired, and the second fundus image is acquired by acquiring an image of the fundus of the subject's eye based on the acquired position of the region.
- The image processing method according to claim 7, wherein a size of the region is smaller than a size of the first fundus image.
- The image processing method according to claim 7 or claim 8, wherein the region includes at least part of the position for acquiring the tomographic image.
- The image processing method according to claim 2, further comprising a step of acquiring a tomographic image of the fundus of the subject's eye using the controlled scanning device.
- The image processing method according to claim 2, further comprising a step of acquiring a tomographic image of the fundus of the subject's eye while repeatedly executing the step of acquiring the second fundus image, the determining step, the calculating step, and the controlling step.
- An image processing device comprising a processor, wherein the processor executes image processing including: a step of acquiring a first fundus image of a subject's eye; a step of acquiring a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye; a step of acquiring a second fundus image of the subject's eye; a step of determining whether the acquired position is included in a predetermined range of the first fundus image; and a step of calculating a first movement amount of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculating a second movement amount of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range.
- A program causing a computer to execute image processing including: a step of acquiring a first fundus image of a subject's eye; a step of acquiring a position, set using the first fundus image, for acquiring a tomographic image of the fundus of the subject's eye; a step of acquiring a second fundus image of the subject's eye; a step of determining whether the acquired position is included in a predetermined range of the first fundus image; and a step of calculating a first movement amount of the subject's eye using a first registration process that aligns the first fundus image and the second fundus image when the acquired position is included in the predetermined range, and calculating a second movement amount of the subject's eye using a second registration process, different from the first registration process, that aligns the first fundus image and the second fundus image when the acquired position is outside the predetermined range.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023533196A JPWO2023282339A1 (ja) | 2021-07-07 | 2022-07-07 | |
CN202280047704.7A CN117597061A (zh) | 2021-07-07 | 2022-07-07 | 图像处理方法、图像处理程序、图像处理装置及眼科装置 |
EP22837753.7A EP4360535A1 (en) | 2021-07-07 | 2022-07-07 | Image processing method, image processing program, image processing device, and ophthalmic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021112899 | 2021-07-07 | ||
JP2021-112899 | 2021-07-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023282339A1 true WO2023282339A1 (ja) | 2023-01-12 |
Family
ID=84800621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/027008 WO2023282339A1 (ja) | 2021-07-07 | 2022-07-07 | 画像処理方法、画像処理プログラム、画像処理装置及び眼科装置 |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4360535A1 (ja) |
JP (1) | JPWO2023282339A1 (ja) |
CN (1) | CN117597061A (ja) |
WO (1) | WO2023282339A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013208415A (ja) * | 2012-02-28 | 2013-10-10 | Topcon Corp | 眼底観察装置 |
JP2018064662A (ja) * | 2016-10-17 | 2018-04-26 | キヤノン株式会社 | 眼科撮影装置およびその制御方法 |
US20190059723A1 (en) | 2017-08-30 | 2019-02-28 | Topcon Corporation | Ophthalmologic apparatus and method of controlling the same |
JP2020530368A (ja) * | 2017-08-14 | 2020-10-22 | オプトス ピーエルシー | 眼科装置 |
- 2022-07-07 CN CN202280047704.7A patent/CN117597061A/zh active Pending
- 2022-07-07 EP EP22837753.7A patent/EP4360535A1/en active Pending
- 2022-07-07 JP JP2023533196A patent/JPWO2023282339A1/ja active Pending
- 2022-07-07 WO PCT/JP2022/027008 patent/WO2023282339A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
EP4360535A1 (en) | 2024-05-01 |
JPWO2023282339A1 (ja) | 2023-01-12 |
CN117597061A (zh) | 2024-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2023009530A (ja) | 画像処理方法、画像処理装置、及びプログラム | |
US10561311B2 (en) | Ophthalmic imaging apparatus and ophthalmic information processing apparatus | |
US10786153B2 (en) | Ophthalmologic imaging apparatus | |
JP7186587B2 (ja) | 眼科装置 | |
JP2022040372A (ja) | 眼科装置 | |
US10321819B2 (en) | Ophthalmic imaging apparatus | |
JP7306467B2 (ja) | 画像処理方法、画像処理装置、及びプログラム | |
JP2019177032A (ja) | 眼科画像処理装置、および眼科画像処理プログラム | |
WO2021074960A1 (ja) | 画像処理方法、画像処理装置、及び画像処理プログラム | |
JP2019171221A (ja) | 眼科撮影装置及び眼科情報処理装置 | |
JP2022060588A (ja) | 眼科装置、及び眼科装置の制御方法 | |
JP7419946B2 (ja) | 画像処理方法、画像処理装置、及び画像処理プログラム | |
WO2023282339A1 (ja) | 画像処理方法、画像処理プログラム、画像処理装置及び眼科装置 | |
JP2022089086A (ja) | 画像処理方法、画像処理装置、及び画像処理プログラム | |
WO2021210281A1 (ja) | 画像処理方法、画像処理装置、及び画像処理プログラム | |
JP7306482B2 (ja) | 画像処理方法、画像処理装置、及びプログラム | |
WO2022177028A1 (ja) | 画像処理方法、画像処理装置、及びプログラム | |
WO2023199847A1 (ja) | 画像処理方法、画像処理装置、及びプログラム | |
WO2023182011A1 (ja) | 画像処理方法、画像処理装置、眼科装置、及びプログラム | |
JP7272453B2 (ja) | 画像処理方法、画像処理装置、およびプログラム | |
WO2022250048A1 (ja) | 画像処理方法、画像処理装置、及びプログラム | |
WO2022113409A1 (ja) | 画像処理方法、画像処理装置、及びプログラム | |
US20230380680A1 (en) | Ophthalmic apparatus and method of controlling the same | |
US11954872B2 (en) | Image processing method, program, and image processing device | |
JP7416083B2 (ja) | 画像処理方法、画像処理装置、およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22837753 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023533196 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022837753 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2022837753 Country of ref document: EP Effective date: 20240122 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |