WO2022113409A1 - Image processing method, image processing device, and program - Google Patents

Image processing method, image processing device, and program

Info

Publication number
WO2022113409A1
WO2022113409A1 (PCT/JP2021/023441)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
light
volume data
oct
Prior art date
Application number
PCT/JP2021/023441
Other languages
English (en)
Japanese (ja)
Inventor
真梨子 向井
洋志 葛西
Original Assignee
株式会社ニコン
Priority date
Filing date
Publication date
Application filed by 株式会社ニコン (Nikon Corporation)
Publication of WO2022113409A1


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 — Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 — Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • the techniques of the present disclosure relate to image processing methods, image processing devices, and programs.
  • U.S. Patent No. 10,238,281 relates to a technique for generating volume data of an eye to be inspected using optical coherence tomography. Conventionally, it has been desired to analyze the volume data of the eye to be inspected.
  • a first aspect of the technique of the present disclosure is an image processing method performed by a processor, which includes a step of acquiring OCT volume data of a region including a vortex vein, and a step of generating, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
  • the image processing apparatus of the second aspect of the technique of the present disclosure includes a memory and a processor connected to the memory, wherein the processor executes a step of acquiring OCT volume data of a region including a vortex vein, and a step of generating, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
  • the program of the third aspect of the technique of the present disclosure causes a computer to execute a step of acquiring OCT volume data of a region including a vortex vein, and a step of generating, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
  • FIG. 1 shows a schematic configuration of an ophthalmic system 100.
  • the ophthalmology system 100 includes an ophthalmology device 110, a server device (hereinafter referred to as “server”) 140, and a display device (hereinafter referred to as “viewer”) 150.
  • the ophthalmic apparatus 110 acquires a fundus image.
  • the server 140 stores, in association with each patient ID, a plurality of fundus images obtained by photographing the fundi of a plurality of patients with the ophthalmologic apparatus 110, together with the axial length measured by an axial length measuring apparatus (not shown).
  • the viewer 150 displays the fundus image and the analysis result acquired by the server 140.
  • the server 140 is an example of the "image processing device" of the technique of the present disclosure.
  • the ophthalmic apparatus 110, the server 140, and the viewer 150 are connected to each other via the network 130.
  • the network 130 is an arbitrary network such as LAN, WAN, the Internet, and a wide area Ethernet network.
  • a LAN can be adopted for the network 130.
  • the viewer 150 is a client in a client-server system, and a plurality of viewers are connected via a network. Further, a plurality of servers 140 may be connected via a network in order to ensure system redundancy.
  • if the ophthalmology device 110 has an image processing function and the image viewing function of the viewer 150, the ophthalmology device 110 can acquire fundus images, process images, and view images in a stand-alone state. In this case, the viewer 150 may be omitted.
  • if the server 140 has the image viewing function of the viewer 150, the configuration of the ophthalmic apparatus 110 and the server 140 enables acquisition, image processing, and viewing of fundus images. In this case, the viewer 150 may be omitted.
  • a diagnostic support device that performs image analysis using other ophthalmic devices (inspection devices for visual field measurement, intraocular pressure measurement, and the like) and AI (artificial intelligence) may be connected to the ophthalmic device 110, the server 140, and the viewer 150 via the network 130.
  • the scanning laser ophthalmoscope is referred to as "SLO”.
  • the optical coherence tomography is referred to as "OCT”.
  • the horizontal direction is the "X direction", the direction perpendicular to the horizontal plane is the "Y direction", and the direction connecting the center of the pupil of the anterior segment of the eye 12 to the center of the eyeball is the "Z direction". The X, Y, and Z directions are therefore mutually perpendicular.
  • the ophthalmology device 110 includes a photographing device 14 and a control device 16.
  • the imaging device 14 includes an SLO unit 18 and an OCT unit 20 to acquire a fundus image of the fundus of the eye to be inspected 12.
  • the two-dimensional fundus image acquired by the SLO unit 18 is referred to as an SLO image.
  • a tomographic image of the retina or a frontal image (en-face image) created based on the OCT data acquired by the OCT unit 20 is referred to as an OCT image.
  • the control device 16 includes a computer having a CPU (Central Processing Unit) 16A, a RAM (Random Access Memory) 16B, a ROM (Read-Only Memory) 16C, and an input/output (I/O) port 16D.
  • the control device 16 includes an input / display device 16E connected to the CPU 16A via the I / O port 16D.
  • the input / display device 16E has a graphical user interface for displaying an image of the eye to be inspected 12 and receiving various instructions from the user.
  • the graphical user interface includes a touch panel display.
  • control device 16 includes an image processor 17 connected to the I / O port 16D.
  • the image processor 17 generates an image of the eye 12 to be inspected based on the data obtained by the photographing apparatus 14.
  • the image processor 17 may be omitted, and the CPU 16A may generate an image of the eye 12 to be inspected based on the data obtained by the photographing apparatus 14.
  • the control device 16 is connected to the network 130 via the communication interface 16F.
  • the control device 16 of the ophthalmic device 110 includes the input / display device 16E, but the technique of the present disclosure is not limited to this.
  • the control device 16 of the ophthalmic apparatus 110 may not include the input / display device 16E, but may include an input / display device that is physically independent of the ophthalmic apparatus 110.
  • the display device includes an image processor unit that operates under the control of the display control unit 204 of the CPU 16A of the control device 16.
  • the image processor unit may display an SLO image or the like based on the image signal whose output is instructed by the display control unit 204.
  • the photographing device 14 operates under the control of the CPU 16A of the control device 16.
  • the imaging device 14 includes an SLO unit 18, an imaging optical system 19, and an OCT unit 20.
  • the photographing optical system 19 includes an optical scanner 22 and a wide-angle optical system 30.
  • the optical scanner 22 two-dimensionally scans the light emitted from the SLO unit 18 in the X direction and the Y direction.
  • the optical scanner 22 may be any optical element capable of deflecting a luminous flux; for example, a polygon mirror, a galvanometer mirror, or a combination thereof can be used.
  • the wide-angle optical system 30 synthesizes the light from the SLO unit 18 and the light from the OCT unit 20.
  • the wide-angle optical system 30 may be a reflective system using a concave mirror such as an elliptical mirror, a refractive system using a wide-angle lens or the like, or a catadioptric system combining a concave mirror and lenses.
  • by using a wide-angle optical system with an elliptical mirror, a wide-angle lens, or the like, it is possible to photograph the retina not only at the central part of the fundus but also at the peripheral part of the fundus.
  • the wide-angle optical system 30 realizes observation in a wide field of view (FOV: Field of View) 12A at the fundus.
  • the FOV 12A indicates a range that can be photographed by the photographing apparatus 14.
  • the FOV12A can be expressed as a viewing angle.
  • the viewing angle may be defined by an internal irradiation angle and an external irradiation angle in the present embodiment.
  • the external irradiation angle is an irradiation angle in which the irradiation angle of the luminous flux emitted from the ophthalmic apparatus 110 to the eye 12 to be inspected is defined with reference to the pupil 27.
  • the internal irradiation angle is an irradiation angle in which the irradiation angle of the luminous flux irradiated to the fundus F is defined with reference to the center O of the eyeball.
  • the external irradiation angle and the internal irradiation angle have a corresponding relationship. For example, when the external irradiation angle is 120 degrees, the internal irradiation angle corresponds to about 160 degrees. In this embodiment, the internal irradiation angle is set to 200 degrees.
  • an SLO fundus image obtained by photographing with an internal irradiation angle of 160 degrees or more is referred to as a UWF-SLO fundus image.
  • UWF is an abbreviation for UltraWide Field (ultra-wide-angle).
  • the ophthalmic apparatus 110 can image a region 12A having an internal irradiation angle of 200 ° with the eyeball center O of the eye 12 to be inspected as a reference position.
  • an internal irradiation angle of 200° corresponds to an external irradiation angle of 110° with the pupil of the eyeball of the eye 12 to be inspected as the reference. That is, the wide-angle optical system 30 irradiates the laser beam from the pupil with an angle of view of 110° as the external irradiation angle, and photographs a fundus region of 200° as the internal irradiation angle.
  • the SLO system is realized by the control device 16, the SLO unit 18, and the photographing optical system 19 shown in FIG. Since the SLO system includes a wide-angle optical system 30, it enables fundus photography with a wide FOV12A.
  • the SLO unit 18 includes a B light (blue light) light source 40, a G light (green light) light source 42, an R light (red light) light source 44, an IR light (infrared light, for example near-infrared light) light source 46, and optical systems 48, 50, 52, 54, and 56 that reflect or transmit the light from the light sources 40, 42, 44, and 46 and guide it into a single optical path.
  • the optical systems 48 and 56 are mirrors, and the optical systems 50, 52 and 54 are beam splitters.
  • B light is reflected by the optical system 48, transmitted through the optical system 50, and reflected by the optical system 54; G light is reflected by the optical systems 50 and 54; R light is transmitted through the optical systems 52 and 54; and IR light is reflected by the optical systems 52 and 56. Each is thereby guided into the single optical path.
  • the SLO unit 18 is configured to be able to switch among combinations of light sources that emit laser light of different wavelengths, for example a mode that emits R light and G light and a mode that emits only infrared light.
  • although four light sources are provided, namely the light source 40 for B light, the light source 42 for G light, the light source 44 for R light, and the light source 46 for IR light, the technique of the present disclosure is not limited thereto.
  • the SLO unit 18 may further include a light source of white light and may emit light in various modes, such as a mode that emits G light, R light, and B light, or a mode that emits only white light.
  • the light incident on the photographing optical system 19 from the SLO unit 18 is scanned in the X direction and the Y direction by the optical scanner 22.
  • the scanning light is applied to the fundus through the wide-angle optical system 30 and the pupil 27.
  • the reflected light reflected by the fundus is incident on the SLO unit 18 via the wide-angle optical system 30 and the optical scanner 22.
  • the SLO unit 18 includes a beam splitter 64 that, of the light from the posterior segment (fundus) of the eye 12 to be examined, reflects B light and transmits light other than B light, and a beam splitter 58 that, of the light transmitted through the beam splitter 64, reflects G light and transmits light other than G light.
  • the SLO unit 18 includes a beam splitter 60 that reflects R light and transmits other than R light among the light transmitted through the beam splitter 58.
  • the SLO unit 18 includes a beam splitter 62 that reflects IR light among the light transmitted through the beam splitter 60.
  • the SLO unit 18 includes a B light detection element 70 that detects the B light reflected by the beam splitter 64, a G light detection element 72 that detects the G light reflected by the beam splitter 58, an R light detection element 74 that detects the R light reflected by the beam splitter 60, and an IR light detection element 76 that detects the IR light reflected by the beam splitter 62.
  • when the light incident on the SLO unit 18 via the wide-angle optical system 30 and the optical scanner 22 is B light, it is reflected by the beam splitter 64 and received by the B light detection element 70.
  • in the case of G light, the incident light is reflected by the beam splitter 58 and received by the G light detection element 72.
  • in the case of R light, the incident light passes through the beam splitter 58, is reflected by the beam splitter 60, and is received by the R light detection element 74.
  • in the case of IR light, the incident light passes through the beam splitters 58 and 60, is reflected by the beam splitter 62, and is received by the IR light detection element 76.
  • the image processor 17, operating under the control of the CPU 16A, generates a UWF-SLO image using the signals detected by the B light detection element 70, the G light detection element 72, the R light detection element 74, and the IR light detection element 76.
  • the control device 16 controls the light sources 40, 42, and 44 so as to emit light at the same time, so that a G color fundus image, an R color fundus image, and a B color fundus image whose positions correspond to one another are obtained.
  • an RGB color fundus image can be obtained from the G color fundus image, the R color fundus image, and the B color fundus image.
  • the control device 16 controls the light sources 42 and 44 so as to emit light at the same time; the fundus of the eye to be inspected 12 is photographed simultaneously with the G light and the R light, so that a G color fundus image and an R color fundus image whose positions correspond to each other are obtained.
  • an RG color fundus image can be obtained from the G color fundus image and the R color fundus image.
  • the viewing angle (FOV: Field of View) of the fundus is set to an ultra-wide angle, and the region beyond the equator from the posterior pole of the fundus of the eye 12 to be inspected can be photographed.
  • the OCT system is realized by the control device 16, the OCT unit 20, and the photographing optical system 19 shown in FIG. Since the OCT system includes a wide-angle optical system 30, it enables OCT imaging of the peripheral portion of the fundus in the same manner as the acquisition of the SLO fundus image described above. That is, the wide-angle optical system 30 in which the viewing angle (FOV) of the fundus is an ultra-wide angle enables OCT imaging of a region beyond the equator portion 178 from the posterior pole portion of the fundus of the eye 12 to be inspected. OCT data of structures existing around the fundus such as vortex veins can be acquired, and tomographic images of vortex veins and 3D structures of vortex veins can be obtained by image processing of OCT data.
  • the OCT unit 20 includes a light source 20A, a sensor (detection element) 20B, a first optical coupler 20C, a reference optical system 20D, a collimating lens 20E, and a second optical coupler 20F.
  • the light emitted from the light source 20A is branched by the first optical coupler 20C.
  • one of the branched lights is converted into parallel light by the collimating lens 20E as measurement light, and is then incident on the photographing optical system 19.
  • the measurement light is applied to the fundus through the wide-angle optical system 30 and the pupil 27.
  • the measurement light reflected by the fundus is incident on the OCT unit 20 via the wide-angle optical system 30 and incident on the second optical coupler 20F via the collimating lens 20E and the first optical coupler 20C.
  • the other light emitted from the light source 20A and branched by the first optical coupler 20C is incident on the reference optical system 20D as reference light, and enters the second optical coupler 20F via the reference optical system 20D.
  • the image processor 17, operating under the control of the image processing unit 206, performs signal processing such as a Fourier transform on the detection signal detected by the sensor 20B and generates OCT data. The image processor 17 can also generate an OCT image such as a tomographic image or an en-face image based on the OCT data.
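  • as an illustrative sketch only (not the patent's implementation), the Fourier-transform step for one A-scan might look as follows in Python with numpy; the function name, the windowing, and the assumption of a background-subtracted interferogram sampled linearly in wavenumber are illustrative:

```python
import numpy as np

def spectrum_to_ascan(interferogram: np.ndarray) -> np.ndarray:
    """Convert one background-subtracted spectral interferogram into an
    A-scan depth profile (magnitude of the Fourier transform)."""
    windowed = interferogram * np.hanning(interferogram.size)  # suppress sidelobes
    depth_profile = np.fft.fft(windowed)
    positive = depth_profile[: interferogram.size // 2]        # keep positive depths
    return np.abs(positive)

# Repeating this for every scan position in the X and Y directions yields
# the OCT volume data referred to in the text.
```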
  • the OCT unit 20 can scan a predetermined range (for example, a rectangular range of 6 mm ⁇ 6 mm) in one OCT imaging.
  • the predetermined range is not limited to 6 mm × 6 mm; it may be a square range of 12 mm × 12 mm or 23 mm × 23 mm, a rectangular range such as 14 mm × 9 mm or 6 mm × 3.5 mm, or an arbitrary rectangular range.
  • it may also be a circular range with a diameter of, for example, 6 mm, 12 mm, or 23 mm.
  • the ophthalmologic apparatus 110 can scan a region 12A having an internal irradiation angle of 200 °.
  • the ophthalmic apparatus 110 can generate OCT data by OCT imaging. Therefore, as OCT images, the ophthalmic apparatus 110 can generate a tomographic image (B-scan image) of the fundus including a vortex vein, OCT volume data including the vortex vein, and an en-face image (OCT front image) generated from the OCT volume data as a cross section of the volume. Needless to say, the OCT images also include OCT images of the central part of the fundus (the posterior pole of the eyeball, where the macula and the optic disc are present).
  • the OCT data (or the image data of the OCT image) is sent from the ophthalmologic device 110 to the server 140 via the communication interface 16F, and is stored in the storage device 254.
  • the light source 20A is exemplified here as a wavelength-sweeping SS-OCT (Swept-Source OCT) source, but the OCT system may be of any of various types, such as SD-OCT (Spectral-Domain OCT) or TD-OCT (Time-Domain OCT).
  • the server 140 includes a computer main body 252.
  • the computer body 252 has a CPU 262, a RAM 266, a ROM 264, and an input / output (I / O) port 268.
  • a storage device 254, a display 256, a mouse 255M, a keyboard 255K, and a communication interface (I / F) 258 are connected to the input / output (I / O) port 268.
  • the storage device 254 is composed of, for example, a non-volatile memory.
  • the input / output (I / O) port 268 is connected to the network 130 via the communication interface (I / F) 258.
  • the server 140 can communicate with the ophthalmic apparatus 110 and the viewer 150.
  • the image processing program shown in FIG. 6 is stored in the ROM 264 or the storage device 254.
  • the ROM 264 or the storage device 254 is an example of a "memory" of the technique of the present disclosure.
  • the CPU 262 is an example of a "processor” of the technique of the present disclosure.
  • the image processing program is an example of a "program" of the technique of the present disclosure.
  • the server 140 stores each data received from the ophthalmic device 110 in the storage device 254.
  • the image processing program includes a display control function, an image processing function, and a processing function.
  • the CPU 262 executes an image processing program having each of these functions, the CPU 262 functions as a display control unit 204, an image processing unit 206, and a processing unit 208.
  • the image processing unit 206 acquires the OCT volume data 400, which is the OCT data, from the storage device 254.
  • the OCT volume data 400 is OCT volume data having a predetermined area including the vortex vein VV.
  • the OCT volume data 400 is obtained by OCT imaging one of a plurality of vortex vein VVs with an ophthalmologic apparatus 110.
  • the predetermined area is, for example, a rectangular area of 6 mm ⁇ 6 mm.
  • N surfaces having different depths, from the first surface f401 to the Nth surface f40N, are set for the OCT volume data 400.
  • the OCT volume data 400 may be obtained by OCT imaging of each of the plurality of vortex vein VVs existing in the eye to be inspected by the ophthalmologic apparatus 110.
  • the position of each of the plurality of vortex veins is estimated, for example, by extracting the choroidal blood vessels from a choroidal blood vessel image, estimating the running direction of each choroidal blood vessel, and finding the positions where the choroidal blood vessels converge.
  • the choroidal blood vessel image is generated by image-processing the image data of an R-UWF-SLO image (R color fundus image) taken with red light (laser light with a wavelength of 630 to 660 nm) and a G-UWF-SLO image (G color fundus image) taken with green light (laser light with a wavelength of 500 to 550 nm).
  • specifically, a choroidal blood vessel image is generated by extracting the retinal blood vessels from the G color fundus image, removing those retinal blood vessels from the R color fundus image, and performing image processing that enhances the choroidal blood vessels, as sketched below.
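  • a minimal sketch of such a pipeline, assuming 8-bit grayscale R and G fundus images and using standard OpenCV operations (the kernel size, the Otsu thresholding, the inpainting, and the CLAHE enhancement are illustrative choices, not the patent's):

```python
import cv2
import numpy as np

def choroidal_vessel_image(r_fundus: np.ndarray, g_fundus: np.ndarray) -> np.ndarray:
    """r_fundus, g_fundus: 8-bit grayscale R and G color fundus images."""
    # Retinal vessels appear as thin dark structures in the green channel;
    # a black-hat transform emphasizes them.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(g_fundus, cv2.MORPH_BLACKHAT, kernel)
    _, vessel_mask = cv2.threshold(blackhat, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Remove the retinal vessels from the red channel, where the deeper
    # choroidal vessels dominate, by inpainting over the mask.
    inpainted = cv2.inpaint(r_fundus, vessel_mask, 3, cv2.INPAINT_TELEA)
    # Enhance the remaining choroidal vasculature with local contrast equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(inpainted)
```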
  • in step 602, the image processing unit 206 sets the parameter n to 1.
  • the parameter n indicates the ordinal number of the en-face image being generated.
  • the image processing unit 206 analyzes the OCT volume data 400 and sets, as the first surface, a surface a predetermined number of pixels (for example, 10 pixels) below the retinal pigment epithelium layer (hereinafter, RPE layer) in the OCT volume data 400.
  • the image processing unit 206 first specifies the RPE layer as a reference plane.
  • the RPE layer is specified by performing a predetermined segmentation process on the OCT volume data 400. Alternatively, the image processing unit 206 may specify the brightest layer in the OCT volume data 400 as the RPE layer.
  • the image processing unit 206 sets the surface 10 pixels below the RPE layer as the first surface. This is because the region deeper than the RPE layer (the region farther than the RPE layer when viewed from the center of the eyeball) is the choroidal region, and an en-face image of the region where the choroidal blood vessels are present is generated.
  • the first surface is not limited to the surface 10 pixels below the RPE layer; for example, a surface 10 pixels below Bruch's membrane, which lies immediately below the RPE layer, may be set as the first surface.
  • Bruch's membrane is likewise identified by subjecting the OCT volume data 400 to another predetermined segmentation process, different from that used for the RPE layer.
  • "below" here may mean 10 pixels along the direction of the A-scan used when the OCT volume data was generated.
  • the first surface is also not limited to 10 pixels below the RPE layer or Bruch's membrane; an arbitrary number of pixels may be set. The offset may also be defined by a length, such as in millimeters or nanometers, instead of by a number of pixels. Further, a spherical surface kept at a certain distance from the pupil or the center of the eyeball may be defined as the reference plane.
  • the first aspect is an example of "other aspects" of the technique of the present disclosure.
  • the RPE layer or Bruch's membrane is an example of the "reference plane" of the technique of the present disclosure.
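  • a minimal sketch of setting the first surface, assuming the volume is a numpy array shaped (X, Y, Z) with Z along the A-scan, taking the brightest layer per A-scan as the RPE and offsetting a fixed number of pixels below it (the names and the millimeter conversion are illustrative):

```python
import numpy as np

def first_surface_depths(volume: np.ndarray, offset_px: int = 10) -> np.ndarray:
    """Return, for each A-scan, the depth index of the first surface,
    a fixed number of pixels below the brightest layer (assumed RPE)."""
    rpe_depth = np.argmax(volume, axis=2)      # brightest pixel per A-scan
    surface = rpe_depth + offset_px            # "below" = deeper along the A-scan
    return np.clip(surface, 0, volume.shape[2] - 1)

# An offset given as a length could be converted to pixels with the axial
# sampling pitch, e.g. offset_px = round(offset_mm / pitch_mm_per_px).
```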
  • the image processing unit 206 generates a first en-face image 7a (see FIG. 7) corresponding to the set first surface.
  • the en-face image 7a may be generated from the pixel values of the pixels lying on the first surface; alternatively, a pixel group extending in the shallow direction and the deep direction around the first surface may be extracted from the OCT volume data 400, and each pixel value may be obtained as the average value or the median value of the brightness values of that pixel group. Image processing such as noise removal may also be applied when obtaining the pixel values.
  • the first en-face image corresponding to the generated first surface is stored in the RAM 266 by the processing unit 208. An en-face image corresponding to the reference plane itself, the RPE layer or Bruch's membrane, may also be generated.
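  • one way to realize this slab reduction, sketched here under the same (X, Y, Z) array assumption as above, with the surface given as a per-A-scan depth map:

```python
import numpy as np

def enface_from_surface(volume: np.ndarray, surface_depths: np.ndarray,
                        half_thickness: int = 2, use_median: bool = False) -> np.ndarray:
    """Reduce a thin slab around a surface to one en-face pixel per A-scan.
    volume: (X, Y, Z); surface_depths: (X, Y) integer depth indices."""
    Z = volume.shape[2]
    offsets = np.arange(-half_thickness, half_thickness + 1)
    idx = np.clip(surface_depths[..., None] + offsets, 0, Z - 1)  # (X, Y, 2h+1)
    slab = np.take_along_axis(volume, idx, axis=2)
    return np.median(slab, axis=2) if use_median else slab.mean(axis=2)
```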
  • the image processing unit 206 determines whether or not the parameter n is the preset maximum number N.
  • in step 606, the image processing unit 206 generates an en-face image corresponding to the surface set for the current value of the parameter n.
  • the maximum number N can be arbitrarily set by the user depending on the thickness of the retina near the vortex vein and the length of the interval between the en-face images.
  • if it is determined that the parameter n is the maximum number N (step 608: YES), the image processing proceeds to step 610.
  • in step 610, the processing unit 208 stores the generated N en-face images in the RAM 266 or the storage device 254, and the processing ends.
  • if it is determined that the parameter n is not the maximum number N (step 608: NO), the image processing proceeds to step 612.
  • in step 612, the image processing unit 206 sets the parameter n to n + 1.
  • from step 614, the image processing returns to step 606.
  • the image processing unit 206 generates the nth (for example, second) en-face image 7b (see FIG. 7) corresponding to the set nth (for example, second) surface.
  • the generated second en-face image 7b is stored in the RAM 266.
  • the image processing unit 206 repeats the loop of step 606, step 608, step 612, and step 614 until the parameter n reaches the maximum number N.
  • as a result, the first en-face image 7a through the Nth en-face image 7d shown in FIG. 7 are generated.
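  • the loop of steps 602 to 614 can be sketched as follows, reusing the enface_from_surface() helper from the earlier sketch; the assumption that step 614 sets the nth surface a fixed number of pixels deeper than the previous one is illustrative, as the text does not fully specify that step:

```python
import numpy as np

def generate_enface_stack(volume: np.ndarray, first_surface: np.ndarray,
                          N: int = 8, spacing_px: int = 10) -> list:
    """Generate N en-face images at successively deeper surfaces."""
    Z = volume.shape[2]
    images = []
    n = 1                                                     # step 602
    surface = first_surface.astype(int)                       # step 604: first surface
    while True:
        images.append(enface_from_surface(volume, surface))   # step 606
        if n == N:                                            # step 608: YES
            return images                                     # step 610: store and end
        n += 1                                                # step 612
        surface = np.clip(surface + spacing_px, 0, Z - 1)     # step 614 (assumed)
```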
  • these plurality of en-face images form a group of en-face images generated at intervals of a predetermined number of pixels in the direction of increasing choroidal depth.
  • for example, by comparing the first en-face image 7a, the second en-face image 7b, ..., the (N-1)th en-face image 7c, and the Nth en-face image 7d in order, it is possible to identify the running direction of the vortex vein, in particular its running direction in the depth direction. In the example shown in FIG. 7, it can be seen that the vortex vein runs toward the upper left of the page.
  • the blood vessel running of the vortex vein may also be identified by image-processing the en-face image group as shown in FIG. 7, or by artificial intelligence using a trained model in which the relationship between en-face image groups and the running direction of the vortex vein has been machine-learned.
  • the blood vessel running of the vortex vein may also be identified by artificial intelligence using the OCT volume data and a trained model in which the running direction of the vortex vein has been machine-learned. Based on the identified blood vessel running, information indicating the direction of the running may be generated, and that information may be superimposed and displayed on the first to Nth en-face images. It may further be superimposed and displayed on the OCT volume data 400.
  • the information indicating the direction of blood vessel running includes vector information, mark information such as an arrow indicating the direction, and information that blinks so as to flow in the running direction. Further, the region of vessel pixels in each of the first to Nth en-face images may be obtained and the position of the center of gravity of the vessel calculated for each image. A vector connecting the centers of gravity calculated from the first en-face image to the Nth en-face image (from the upper en-face image to the lower en-face image) may then be displayed in all of the first to Nth en-face images. The position of the center of gravity in each en-face image may also be displayed in the OCT volume data.
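  • a sketch of the center-of-gravity computation, assuming the vessel lumen appears dark in each en-face image (the normalization and the inverted Otsu thresholding are illustrative choices):

```python
import cv2
import numpy as np

def vessel_centroids(enface_images: list) -> list:
    """Return one (x, y) vessel center of gravity per en-face image;
    connecting successive centroids indicates the vein's running direction."""
    centroids = []
    for img in enface_images:
        img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Inverted Otsu threshold: dark vessel pixels become foreground.
        _, mask = cv2.threshold(img8, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

# The vectors between successive centroids can be drawn as arrow overlays
# on the en-face images or in the volume rendering.
```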
  • in the example described above, the first surface is set a predetermined number of pixels (10 pixels in the above example) below the RPE layer, and surfaces from the second surface to the eighth surface are then set in order; an en-face image corresponding to each surface is generated, for a total of eight images.
  • alternatively, a surface 40 pixels below the RPE layer may be set as the reference surface, and other surfaces may be set above and below it at intervals of 10 pixels, generating an en-face image for each surface.
  • the display screen is generated by the display control unit 204 of the server 140 based on the user's instruction, and is output as an image signal to the viewer 150 by the processing unit 208.
  • the viewer 150 displays the display screen on a monitor or the like based on the image signal.
  • FIG. 8 shows the first display screen 500A. As shown in FIG. 8, the first display screen 500A has an information area 502 and an image display area 504A.
  • the information area 502 has a patient ID display field 512, a patient name display field 514, an age display field 516, a visual acuity display field 515, a right eye / left eye display field 520, and an axial length display field 522.
  • the viewer 150 displays each information based on the information received from the server 140.
  • the image display area 504A is an area for displaying an image to be inspected or the like.
  • the image display area 504A is provided with the following display fields: a UWF fundus image display field 542, a choroidal blood vessel image display field 544, a B-scan image display field 546, and an en-face image display field 548.
  • the comment field 530 is a remarks column in which the ophthalmologist, as the user, can freely enter observed findings or diagnosis results.
  • the UWF-SLO fundus image SL of the fundus of the eye to be inspected, taken by SLO with the ophthalmologic apparatus 110, is displayed in the UWF fundus image display field 542.
  • a line SLV showing the vertical watershed and a line SLH showing the horizontal watershed are superimposed on the UWF-SLO fundus image SL.
  • a rectangular region OCV1 indicating the acquisition position of the OCT volume data 400 around the vortex vein is also superimposed and displayed.
  • a line OCB1 indicating the cross-sectional position of the B-scan image OCB displayed in the B-scan image display field 546 is superimposed and displayed.
  • the choroidal blood vessel image CV obtained by image-processing the UWF-SLO fundus image is displayed in the choroidal blood vessel image display field 544.
  • a line CVV showing the vertical watershed and a line CVH showing the horizontal watershed are superimposed on the choroidal blood vessel image CV.
  • a rectangular region OCV2 indicating the acquisition position of the OCT volume data 400 around the vortex vein is also superimposed and displayed.
  • a line OCB2 indicating the cross-sectional position of the B-scan image OCB displayed in the B-scan image display field 546 is superimposed and displayed.
  • the vortex vein whose images are displayed in the B-scan image display field 546 and the en-face image display field 548 may be made selectable.
  • the rectangular region OCV2 is superimposed and displayed in the range of the OCT volume data of the selected vortex vein.
  • the B-scan image OCB generated from the OCT volume data 400 is displayed in the B-scan image display field 546.
  • the curve OCL superimposed on the B-scan image OCB is a curve indicating the position of the acquisition surface of the en-face image in the depth direction.
  • when the en-face image displayed in the en-face image display field 548 is switched, the superimposed position of the curve OCL also changes.
  • when a moving image generated from the en-face image group is displayed in the en-face image display field 548, the superimposed position of the curve OCL changes as the moving image plays.
  • in the en-face image display field 548, the plurality of en-face images from the first en-face image 7a to the Nth en-face image 7d can be displayed in order like a slide show, displayed as a moving image, or a specific en-face image can be displayed as a still image.
  • the superimposed position of the curve OCL on the B-scan image OCB displayed in the B-scan image display field 546 changes according to the en-face image displayed in the en-face image display field 548.
  • a line OCB3 indicating the cross-sectional position of the B-scan image OCB may be superimposed and displayed on the en-face image displayed in the en-face image display field 548.
  • whether the line OCB3 is superimposed or not may be selected according to the user's instruction. By moving this line OCB3 horizontally, the B-scan image OCB displayed in the B-scan image display field 546 can be changed.
  • the line OCB3 can be set not only in the horizontal direction but in any direction, such as the vertical direction. Further, the information indicating the direction of blood vessel running described above may be superimposed and displayed on the first to Nth en-face images.
  • FIG. 9 shows a second display screen 500B. Since the second display screen 500B has the same fields as the first display screen 500A, the same fields are given the same reference numerals and their description is omitted; only the differing parts are described.
  • the display screen 500B has an en-face image display field 560 that displays the N en-face images side by side.
  • the rectangular frame 560f shown by the dotted line is superimposed and displayed on the second en-face image 7b.
  • the position of the surface of the selected en-face image is displayed as a curve OCL in the B-scan image OCB of the B-scan image display field 546.
  • a three-dimensional image obtained by image-processing the OCT volume data 400 may be displayed on the first display screen 500A or the second display screen 500B.
  • the OCT volume data 400 may be binarized so that the dark blood vessel region is separated from the space (interstitium) other than the blood vessels; the blood vessel region may then be extracted and displayed by executing a 3D volume rendering process.
  • it may also be displayed after conversion into a file format having a polygon mesh structure, such as the STL (Standard Triangulated Language) file format.
  • alternatively, each of the plurality of en-face images generated from the OCT volume data 400 may be binarized to generate binarized en-face images, and binarized OCT volume data 400 may be obtained by recombining those binarized en-face images.
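  • a sketch of this binarize-and-recombine step, again assuming dark vessel lumens and illustrative Otsu thresholding:

```python
import cv2
import numpy as np

def binarized_volume(enface_images: list) -> np.ndarray:
    """Binarize each en-face image and stack the results back into a
    binary volume suitable for 3D volume rendering."""
    binary_slices = []
    for img in enface_images:
        img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, b = cv2.threshold(img8, 0, 255,
                             cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        binary_slices.append(b)
    return np.stack(binary_slices, axis=0)   # (depth, Y, X) binary volume
```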
  • image processing for visualizing the vortex vein and the choroidal blood vessels connected to the vortex vein may be performed on the OCT volume data 400 to display a three-dimensional image emphasizing the blood vessels around the vortex vein.
  • since the retina and choroid in the fundus periphery may be thinner than in the central part, it is difficult to observe the three-dimensional structure of the vortex vein when displayed at actual size. Therefore, the OCT volume data 400 may be displayed as a three-dimensional image after image processing that stretches it only in the depth direction (changing the display scale only in the depth direction).
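  • such depth-only stretching can be sketched with an anisotropic zoom; the scale factor and the axis order are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def stretch_depth(volume: np.ndarray, factor: float = 3.0) -> np.ndarray:
    """Magnify the volume along the depth axis only (axis 0 here), leaving
    the lateral axes at actual scale, to ease inspection of the thin
    peripheral choroid and the vortex vein's 3D course."""
    return zoom(volume, (factor, 1.0, 1.0), order=1)   # linear interpolation
```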
  • the state of the vortex vein traveling from the choroid to the sclera is visualized by generating an en-face image from the OCT volume data in which the region including the vortex vein is photographed.
  • the position of the vortex vein changes significantly in the depth direction compared with its lateral change, which was difficult to grasp merely by visualizing the choroidal vascular structure in the OCT volume data; using the en-face images generated from the OCT volume data makes this visible.
  • thereby, the morphological change of the vortex vein in the depth direction can be visualized and quantified with high resolution.
  • further, the vortex vein is visualized without using the angiography processing of an OCT angiography algorithm.
  • in the above, the image processing (FIG. 5) is executed by the server 140, but the technique of the present disclosure is not limited to this; the processing may be executed by the ophthalmic apparatus 110, the viewer 150, or an additional image processing device provided on the network 130.
  • each component may be present alone or in combination of two or more as long as there is no contradiction.
  • in the above, an example is described in which the image processing is realized by a software configuration using a computer, but the technique of the present disclosure is not limited to this.
  • the image processing may be executed solely by a hardware configuration such as an FPGA (Field-Programmable Gate Array) or an ASIC (Application-Specific Integrated Circuit), or part of the image processing may be performed by the software configuration and the rest by the hardware configuration.
  • since the technique of the present disclosure includes cases where the image processing is realized by a software configuration using a computer and cases where it is not, it includes the following techniques.
  • (first technique) an image processing device comprising: an acquisition unit that acquires OCT volume data of a region including a vortex vein; and a generation unit that generates, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
  • (second technique) an image processing method comprising: acquiring OCT volume data of a region including a vortex vein; and generating, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
  • the image processing unit 206 is an example of the "acquisition unit” and the “generation unit” of the technique of the present disclosure.
  • a computer program product for image processing comprises a computer-readable storage medium that is not itself a transitory signal.
  • a program is stored in the computer-readable storage medium.
  • the program causes a computer to execute: a step of acquiring OCT volume data of a region including a vortex vein; and a step of generating, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
  • the server 140 is an example of the "computer program product" of the technique of the present disclosure.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention analyzes the volume data of an eye under examination. An image processing method, performed by a processor, comprises: a step of acquiring OCT volume data of a region including a vortex vein; and a step of generating, based on the OCT volume data, a plurality of en-face images corresponding to a plurality of surfaces having different depths.
PCT/JP2021/023441 2020-11-26 2021-06-21 Image processing method, image processing device, and program WO2022113409A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020196330 2020-11-26
JP2020-196330 2020-11-26

Publications (1)

Publication Number Publication Date
WO2022113409A1 (fr) 2022-06-02

Family

ID=81754436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/023441 WO2022113409A1 (fr) 2020-11-26 2021-06-21 Image processing method, image processing device, and program

Country Status (1)

Country Link
WO (1) WO2022113409A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014200093A1 (fr) * 2013-06-14 2014-12-18 国立大学法人名古屋大学 Optical tomography device
JP2016026521A (ja) * 2014-06-30 2016-02-18 株式会社ニデック Optical coherence tomography device and data processing program
JP2017047111A (ja) * 2015-09-04 2017-03-09 キヤノン株式会社 Ophthalmic device, display control method, and program
WO2019203311A1 (fr) * 2018-04-18 2019-10-24 株式会社ニコン Image processing method, program, and image processing device
JP2020513983A (ja) * 2017-03-23 2020-05-21 ドヘニー アイ インスティテュート Multiple en-face angiography averaging systems, methods, and apparatus for optical coherence tomography
JP2020166814A (ja) * 2019-03-11 2020-10-08 キヤノン株式会社 Medical image processing device, medical image processing method, and program
US20200394789A1 * 2019-06-12 2020-12-17 Carl Zeiss Meditec Inc Oct-based retinal artery/vein classification

Similar Documents

Publication Publication Date Title
JP2023009530A (ja) Image processing method, image processing device, and program
JP7441783B2 (ja) Image processing method, program, ophthalmic device, and choroidal blood vessel image generation method
JP7279712B2 (ja) Image processing method, program, and image processing device
JP2018019771A (ja) Optical coherence tomography device and optical coherence tomography control program
US20220230371A1 (en) Image processing method, image processing device and program memory medium
JP2019177032A (ja) Ophthalmic image processing device and ophthalmic image processing program
JP2023158161A (ja) Image processing method, program, image processing device, and ophthalmic system
WO2021074960A1 (fr) Image processing method, image processing device, and image processing program
WO2022113409A1 (fr) Image processing method, image processing device, and program
JP7419946B2 (ja) Image processing method, image processing device, and image processing program
JP7494855B2 (ja) Image processing method, image processing device, and image processing program
JP2022089086A (ja) Image processing method, image processing device, and image processing program
JP7204345B2 (ja) Image processing device, image processing method, and program
WO2023199847A1 (fr) Image processing method, image processing device, and program
WO2023199848A1 (fr) Image processing method, image processing device, and program
WO2021210295A1 (fr) Image processing method, image processing device, and program
WO2023282339A1 (fr) Image processing method, image processing program, image processing device, and ophthalmic device
WO2022177028A1 (fr) Image processing method, image processing device, and program
WO2021210281A1 (fr) Image processing method, image processing device, and image processing program
WO2022181729A1 (fr) Image processing method, image processing device, and image processing program
JP7272453B2 (ja) Image processing method, image processing device, and program
WO2021111840A1 (fr) Image processing method, image processing device, and program
WO2022250048A1 (fr) Image processing method, image processing device, and program
JP7416083B2 (ja) Image processing method, image processing device, and program
JP7327954B2 (ja) Image processing device and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21897389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21897389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP