WO2016031151A1 - Ophthalmic apparatus and control method therefor - Google Patents

Ophthalmic apparatus and control method therefor

Info

Publication number
WO2016031151A1
WO2016031151A1 (PCT/JP2015/003971; JP2015003971W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
layer
planar
information
photoreceptor cells
Prior art date
Application number
PCT/JP2015/003971
Other languages
French (fr)
Other versions
WO2016031151A4 (en)
Inventor
Keiko Yonezawa
Original Assignee
Canon Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Kabushiki Kaisha filed Critical Canon Kabushiki Kaisha
Publication of WO2016031151A1 publication Critical patent/WO2016031151A1/en
Publication of WO2016031151A4 publication Critical patent/WO2016031151A4/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/1025 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for confocal scanning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/102 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]

Definitions

  • The present invention relates to an ophthalmic apparatus used in particular for ophthalmic diagnosis and treatment, and a control method for the ophthalmic apparatus.
  • As an ophthalmic apparatus using the principle of a confocal laser microscope, a scanning laser ophthalmoscope (hereinafter referred to as "SLO") is known. The SLO is an ophthalmic apparatus configured to perform raster scanning on a fundus with laser light, which is measuring light, and to acquire a planar image from the intensity of the return light at high resolution and high speed.
  • In recent years, an adaptive optics SLO (hereinafter referred to as "AO-SLO") has been developed, which includes an adaptive optics system in which an aberration of the eye to be inspected is measured in real time by a wavefront sensor, and the aberrations of the measuring light and its return light that occur at the eye to be inspected are corrected by a wavefront correction device.
  • the use of the AO-SLO enables a planar image with a high transverse resolution to be acquired.
  • it is further attempted to diagnose a disease and evaluate a drug response by extracting a photoreceptor cell from an acquired planar image of the retina, and analyzing a density and distribution of the photoreceptor cell.
  • In a case where a change in the photoreceptor cells due to the progress of a disease or a drug response is evaluated, it is required to detect the photoreceptor cells from an AO-SLO image acquired with a high resolution.
  • In order to capture the photoreceptor cells, the focus is required to be set on the layer in which the photoreceptor cells exist to acquire an image, but the focus position cannot be correctly set in some cases if the photoreceptor cell layer is damaged.
  • In the method disclosed in Patent Literature 1, a determination process relating to a normal/abnormal region in visual performance is performed based on a tomographic image captured by optical coherence tomography (hereinafter referred to as "OCT").
  • A process of acquiring a laser irradiation position for a laser treatment is performed by displaying the result of the determination process on a front image of a fundus in a superimposed manner.
  • When a photoreceptor cell is detected by an AO-SLO, it is required to discriminate a case where the photoreceptor cell is not rendered because the photoreceptor cell has a defect from a case where the photoreceptor cell is not rendered because of a capturing reason such as defocusing. If the photoreceptor cell is affected by a disease, capturing by the AO-SLO is generally difficult in many cases. For example, in a case where a structure such as photoreceptor cells is not captured, it is difficult to determine whether the capturing has failed or whether a state in which the photoreceptor cells have a defect has been correctly captured.
  • In the method disclosed in Patent Literature 1, the normal/abnormal region in visual performance is determined by OCT, but the case where image taking is performed by the AO-SLO is not taken into consideration.
  • In Patent Literature 2, a control method for an aberration detection optical system of the AO-SLO in a case where the image taking position is changed is described.
  • However, Patent Literature 2 merely discloses control based on luminance values of an acquired Hartmann image, and makes no reference to the use of knowledge obtained from another modality.
  • the present invention has an object to provide an ophthalmic apparatus including an AO-SLO, which is capable of more accurately rendering a state of a photoreceptor cell layer, and a control method for the ophthalmic apparatus.
  • an ophthalmic apparatus including: a tomographic image acquiring unit configured to acquire tomographic images of a fundus of an eye to be inspected; a layer analyzing unit configured to extract information on a state of photoreceptor cells from the tomographic images; a position alignment unit configured to associate the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and a planar image acquiring unit configured to acquire planar images at the image taking positions on the fundus of the eye to be inspected while changing an aberration correction manner at the image taking positions based on the information on the state of the photoreceptor cells associated by the position alignment unit.
  • According to one embodiment of the present invention, in a case where a photoreceptor cell layer exists, the photoreceptor cell layer may be rendered with high image quality, and in a case where the photoreceptor cell layer is damaged, the damaged state may be correctly rendered.
  • FIG. 1 is a block diagram for illustrating a functional configuration of an ophthalmic apparatus 20 according to Embodiment 1 of the present invention.
  • FIG. 2 is a flow chart for illustrating a processing procedure performed in the processing device 10 according to Embodiment 1 of the present invention.
  • FIG. 3 is a schematic view for illustrating an example in which AO-SLO images are displayed on a WF-SLO image.
  • FIG. 4 is a schematic view for illustrating an example of three-dimensional tomographic images (3D tomographic images) obtained by OCT.
  • FIG. 5 is a flow chart for illustrating details of a step of tomographic image analysis in the flow chart illustrated in FIG. 2.
  • FIG. 6A is a view for illustrating a method of extracting a region in which an IS/OS exists from the OCT tomographic images, and is a schematic view for illustrating an example in which the region in which the IS/OS exists has been identified.
  • FIG. 6B is a schematic view for illustrating an example in which the region in which the IS/OS exists has been extracted with the use of the region identified in the OCT tomographic images.
  • FIG. 7 is a block diagram for illustrating a functional configuration of an ophthalmic apparatus 20 according to Embodiment 2 of the present invention.
  • FIG. 8 is a flow chart for illustrating a processing procedure performed in the processing device 10 according to Embodiment 2 of the present invention.
  • FIG. 9 is a schematic view for illustrating an example in which image taking positions are set for whole regions in which photoreceptor cells exist.
  • FIG. 10 is a schematic view for illustrating an example in which image taking positions are set for a boundary of the regions in which the photoreceptor cells exist.
  • In Embodiment 1 of the present invention, in acquiring a planar image of the retina by an AO-SLO, the control method for the AO-SLO is changed based on the result of performing a layer analysis on OCT tomographic images. In this manner, the retina may be satisfactorily captured by the AO-SLO under a condition in which the photoreceptor cells are easily rendered.
  • a photoreceptor cell layer, an RPE layer, and the like are extracted from the OCT tomographic images, and a region in which the photoreceptor cell layer normally exists is identified.
  • a state of the photoreceptor cells may be estimated to be a state in which the photoreceptor cells may be satisfactorily captured as the planar image.
  • whether or not the photoreceptor cell layer can be extracted, whether or not the photoreceptor cells can be extracted from actual tomographic images, or the like may be grasped as information on the state of the photoreceptor cells, and the information is associated with image taking positions of subregions, which are AO-SLO images in a planar image of a fundus, by a method to be described below. More specifically, the region that has been identified as the region in which the photoreceptor cells normally exist is presented to a user as being displayed on an SLO image (wide-field SLO; hereinafter referred to as "WF-SLO image”) on which aberration correction is not performed.
  • the user selects an image taking position based on the WF-SLO image on which the region in which the photoreceptor cell layer exists is presented.
  • When the user specifies the image taking position, and when the photoreceptor cell layer is normal based on the state of the photoreceptor cell layer at the specified position, normal aberration correction is performed and an AO-SLO image is acquired at the position.
  • In a case where the photoreceptor cell layer at the specified image taking position has a defect or damage, the aberration correction is performed while fixing the focus position during the aberration correction to an estimated position of the photoreceptor cell layer, and the AO-SLO image is acquired.
  • a plurality of AO-SLO images having different focus positions in a depth direction may be acquired to select an image having a preferred resolution, or further, associated information may be obtained while changing the image taking position, and then interpolation or the like may be performed based on those pieces of information.
  • The "normal aberration correction" as used herein is processing in which the wavefront aberration measured by a Hartmann-Shack sensor is used to determine Zernike coefficients up to the sixth order by least square approximation. It should be noted, however, that the order of the Zernike coefficients changes depending on the intended accuracy of approximation and processing time, and the correction is not limited to the method described above.
  • In FIG. 3, an image obtained by superimposing the AO-SLO images on the WF-SLO image is schematically illustrated.
  • the AO-SLO images have a high resolution but are small in capturing field angle due to the aberration correction, and in contrast, the WF-SLO image does not have a high resolution because of being the SLO image without the aberration correction.
  • the WF-SLO image may capture a wide range of the fundus, and an entire image of the retina may be obtained.
  • the AO-SLO images and the WF-SLO image are referred to as "planar images", and the AO-SLO images are distinguished from the WF-SLO image for convenience as planar images of the subregions.
  • <OCT tomographic images> The OCT, in which optical interference is utilized to capture tomographic images of the fundus, allows the state of the internal structure of the retina to be observed three-dimensionally, and hence is widely used in ophthalmic diagnosis and treatment.
  • a tomographic image acquiring unit is configured to acquire the tomographic images of the fundus of an eye to be inspected by the OCT.
  • In FIG. 4, the tomographic images around a macula, which are acquired by the OCT, are schematically illustrated.
  • The tomographic images (B-scan images) are denoted by T1 to Tn, respectively, and information on the retina is represented three-dimensionally by the tomographic image group obtained by collecting the plurality of tomographic images.
  • boundaries of layer structures of the retina are represented by L1 to L6, respectively.
  • the boundary L1 indicates a surface of an inner limiting membrane (hereinafter referred to as "ILM”)
  • the boundary L2 indicates a boundary (hereinafter referred to as “nerve fiber layer boundary”) between a nerve fiber layer (hereinafter referred to as “NFL”) and a layer thereunder, and the nerve fiber layer is illustrated as a region L2’.
  • the boundary L3 indicates a boundary (hereinafter referred to as “inner plexiform layer boundary”) between an inner plexiform layer and a layer thereunder
  • the boundary L4 indicates a boundary (hereinafter referred to as “outer plexiform layer boundary”) between an outer plexiform layer and a layer thereunder
  • the boundary L5 indicates a boundary of an interface between inner and outer segments of the photoreceptors (hereinafter referred to as "IS/OS”)
  • the boundary L6 indicates a boundary of retinal pigment epithelium (hereinafter referred to as "RPE").
  • the boundary between an IS/OS layer and the RPE layer may be indistinguishable, but the accuracy does not constitute a problem in the present invention.
  • the ILM and the IS/OS may be seen as layers, but are regarded as the boundaries because of being very thin.
  • FIG. 1 is a block diagram for illustrating a functional configuration of an ophthalmic apparatus 20 according to Embodiment 1.
  • the ophthalmic apparatus 20 includes a processing device 10, a planar image taking device 3, and a display part 4.
  • the processing device 10 performs an image process and computes an aberration correction coefficient.
  • the planar image taking device 3 captures the planar images of the AO-SLO image, the WF-SLO image, and the like.
  • the display part 4 displays the computation result of the processing device 10 and the like to a user.
  • the ophthalmic apparatus 20 is also connected to an external database (hereinafter referred to as "DB") 1. This allows the processing device 10 to acquire images and patient data acquired by other modalities, past images and data acquired by another planar image taking device, and the tomographic images obtained by an OCT 2, which are stored in the DB 1.
  • the processing device 10 includes an image acquiring part 100, an information acquiring part 110, a control part 120, a memory part 130, an image processing part 140, an output part 150, and an AO control part 160.
  • the image acquiring part 100 acquires the tomographic images acquired by the OCT 2 via the DB 1.
  • the acquired OCT tomographic images are stored in the memory part 130 via the control part 120.
  • the information acquiring part 110 acquires an input by the user and measurement data during the aberration correction.
  • the image processing part 140 includes a layer analyzing part 141, a position alignment part 142, and a determining part 143.
  • the layer analyzing part 141 extracts, as the information on the state of the photoreceptor cells, information on whether or not the IS/OS layer can be extracted from the tomographic images, exists in the tomographic images, or the like, as described later.
  • the position alignment part 142 associates the information on the state of the photoreceptor cells with the information on the image taking positions of the subregions on the planar image of the fundus acquired by the WF-SLO.
  • The image processing part 140 extracts the photoreceptor cell layer and the RPE layer from the OCT tomographic images as described above to identify whether or not the photoreceptor cell layer is in a normal state, and in the case where the photoreceptor cells have a defect or other such damage, acquires the estimated position of the photoreceptor cell layer. Further, the image processing part 140 determines the positions on the WF-SLO image to which the acquired OCT tomographic images correspond, and associates the region in which the photoreceptor cells are normal with a position on the WF-SLO image. Then, based on the above-mentioned result of the layer analysis, the control method for the AO-SLO image taking is determined based on the state of the photoreceptor cell layer at the image taking position specified by the user.
  • the output part 150 outputs, to a monitor or the like, the region in which the normal photoreceptor cell layer exists, which is associated with the WF-SLO image.
  • the output part 150 specifies, to various modalities such as a configuration for performing the aberration correction (not shown), a control method for capturing an AO-SLO image of the capturing position specified by the user.
  • the AO control part 160 performs calculation of the Zernike coefficients associated with the aberration correction.
  • the AO control part 160 is included to form a planar image acquiring unit configured to acquire the planar images of the subregions of the fundus of the eye to be inspected. In the planar image acquiring unit, based on the information on the state of the photoreceptor cells associated by the position alignment part 142, the AO-SLO images are acquired while changing an aberration correction method at the image taking position on the WF-SLO image.
  • In Step S210, the image acquiring part 100 acquires the OCT tomographic images of the retina of the eye to be inspected, which are taken by the OCT 2 and stored in the external DB 1. Then, the acquired OCT tomographic images are stored in the memory part 130 via the control part 120.
  • In Step S220, the image processing part 140 performs the layer analysis on the OCT tomographic images stored in the memory part 130.
  • the retina is formed of a plurality of layer structures as illustrated in FIG. 4, and the IS/OS, which indicates the state of the photoreceptor cells, and the RPE layer under the IS/OS are extracted here. Then, the analysis is performed on those layers.
  • a procedure of tomographic image analysis in Step S220 is described referring to a flow chart of FIG. 5.
  • In Step S510, the layer analyzing part 141 detects the IS/OS (boundary L5) and the RPE (boundary L6) from each of the tomographic images stored in the memory part 130.
  • As a segmentation method for the layers, various methods are known. In this embodiment, a case is described in which an edge enhancement filter is used to extract edges as layer boundaries, and in which the edges detected using medical knowledge on the IS/OS are associated with the layer boundaries. Note that the detection of the IS/OS and the RPE is described here, but the other layer boundaries may be detected by a similar method.
  • the layer analyzing part 141 performs smoothing filter processing on the tomographic image to remove noise components. Then, edge detection filter processing is performed to detect edge components from the tomographic image and hence extract edges corresponding to the boundaries of the layers. Further, a background region is identified from the tomographic image on which the edge detection has been performed to extract luminance values of the background region from the tomographic image. Next, peak values of luminances determined to be the edge components and luminance values between peaks are used to determine the boundaries of the layers.
  • the layer analyzing part 141 detects the edges in a depth direction of the fundus from a hyaloid body side to determine the boundary L1 (ILM) between the hyaloid body and a retina layer based on the peak values of the luminances determined to be the edge components, luminance values above and below the peak values, and the luminance values of the background. Further, the layer analyzing part 141 detects the edges in the depth direction of the fundus to determine the RPE (boundary L6) with reference to the peak values of the luminances of the edge components, the luminance values between the peaks, and the luminance values of the background.
  • the layer analyzing part 141 detects the edges in a shallow direction of the fundus from a scleral side to determine the IS/OS (boundary L5) based on the peaks of the luminances determined to be the edge components, the luminance values above and below the peaks, and the luminance values of the background.
  • In this manner, the boundaries of the layers may be detected.
  • Control points, namely the positions in the depth direction of the retina of the RPE (boundary L6) and the IS/OS (boundary L5) detected as described above, are transmitted to the control part 120 and stored in the memory part 130.
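  • The following is a minimal sketch of this kind of edge-based boundary detection, assuming each B-scan is a 2D NumPy array (depth x lateral) and using simple per-A-scan peak picking; the function name, thresholds, and peak-selection rules are illustrative assumptions and not the patent's implementation (which also uses background luminance values).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def detect_layer_boundaries(bscan, noise_sigma=2.0, k=3.0):
    """Rough per-A-scan detection of the ILM, IS/OS and RPE in one B-scan.

    bscan : 2D array (depth x lateral); brighter pixels are more reflective.
    Returns three 1D arrays of depth indices (np.nan where nothing was found).
    """
    smoothed = gaussian_filter(np.asarray(bscan, dtype=float), sigma=noise_sigma)
    edges = sobel(smoothed, axis=0)          # edge component along each A-scan (depth)

    n_depth, n_ascan = smoothed.shape
    ilm = np.full(n_ascan, np.nan)
    isos = np.full(n_ascan, np.nan)
    rpe = np.full(n_ascan, np.nan)

    for x in range(n_ascan):
        profile = edges[:, x]
        thresh = profile.mean() + k * profile.std()
        peaks = np.where(profile > thresh)[0]
        if peaks.size == 0:
            continue
        ilm[x] = peaks[0]                    # first strong edge from the vitreous side
        rpe[x] = peaks[-1]                   # deepest strong edge, taken as the RPE
        # IS/OS: strongest remaining edge between the ILM and the RPE.
        inner = peaks[(peaks > ilm[x] + 5) & (peaks < rpe[x] - 2)]
        if inner.size:
            isos[x] = inner[np.argmax(profile[inner])]
    return ilm, isos, rpe
```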
  • In Step S520, the layer analyzing part 141 identifies the regions in which the IS/OS exists based on the result of the extraction in Step S510, so as to discriminate between the regions in which the IS/OS exists and the regions in which the IS/OS does not exist.
  • More specifically, the ranges in which the IS/OS has been identified in Step S510 are determined, and these ranges are connected across the plurality of tomographic images to form the regions in which the IS/OS exists.
  • In FIG. 6A, the range in which the IS/OS has been identified in the tomographic image Tn is illustrated as a region Rn.
  • In FIG. 6B, an example is illustrated in which the regions in which the IS/OS exists have been identified as regions R1 to Rn determined from the plurality of tomographic images.
  • the regions thus identified as the regions in which the IS/OS exists are transmitted to the control part 120 and stored in the memory part 130.
  • In Step S530, the layer analyzing part 141 determines, based on the regions R1 to Rn that have been identified as the regions in which the IS/OS exists in Step S520, whether or not the IS/OS exists in the whole regions of which the AO-SLO images are to be acquired. In a case where it is determined that the IS/OS exists in the whole regions, the processing is directly ended. In a case where there is a region in which the IS/OS does not exist, the flow proceeds to Step S540.
  • In Step S540, the layer analyzing part 141 estimates, based on the regions in which the IS/OS exists, which have been determined in Step S520, the position of the IS/OS for the region in which the IS/OS does not exist. More specifically, within the region in which the IS/OS exists and in the vicinity of its boundary with the region in which the IS/OS does not exist, the distance D between the RPE layer and the IS/OS, which have been extracted in Step S510, is determined. Then, the position of the RPE is identified in the region in which the IS/OS does not exist, and the position obtained by moving upward from the RPE by D is set as the estimated position at which the IS/OS exists.
  • the layer analyzing part 141 extracts the RPE layer, and based on the distance D in the vicinity of the boundary between the region in which the IS/OS layer exists and the region in which the IS/OS layer does not exist, estimates the position at which the IS/OS layer exists in the regions in which the IS/OS layer does not exist.
  • the thus estimated control point of the IS/OS is transmitted to the control part 120 and stored in the memory part 130. After the storing, the flow proceeds to Step S230.
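  • A minimal sketch of this estimation, assuming the IS/OS and RPE depths are given per A-scan as 1D arrays with NaN where the IS/OS was not detected; the function name and the use of a median for D are illustrative assumptions.

```python
import numpy as np

def estimate_missing_isos(isos_depth, rpe_depth):
    """Step S540 idea: where the IS/OS was not detected, estimate its depth by
    shifting the RPE upward by the IS/OS-to-RPE distance D measured just inside
    the boundary of the detected region.

    isos_depth, rpe_depth : 1D arrays of per-A-scan depth indices
                            (larger index = deeper); NaN marks a missing IS/OS.
    """
    isos = np.asarray(isos_depth, dtype=float).copy()
    rpe = np.asarray(rpe_depth, dtype=float)
    missing = np.isnan(isos)
    if not missing.any() or missing.all():
        return isos

    detected = ~missing
    # A-scans inside the detected region that touch the missing region
    # (np.roll wraps at the array ends; acceptable for a sketch).
    near_boundary = detected & (np.roll(missing, 1) | np.roll(missing, -1))
    if not near_boundary.any():
        near_boundary = detected
    d = np.nanmedian(rpe[near_boundary] - isos[near_boundary])   # distance D

    isos[missing] = rpe[missing] - d         # RPE moved upward (shallower) by D
    return isos
```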
  • the OCT tomographic images as a target are not limited to the 3D tomographic images.
  • the OCT tomographic images may be tomographic images acquired by radial scanning in which a plurality of images are captured radially around the macula, or may be one tomographic image.
  • In this case, the ranges in which the IS/OS exists, which are determined respectively from the tomographic images, may be extrapolated in an angular direction to identify the regions in which the IS/OS exists.
  • Alternatively, approximation may be performed with a circle around the macula to acquire the region of the normal photoreceptor cell layer.
  • In Step S230, the image acquiring part 100 acquires a planar image of the retina of the eye to be inspected. Then, the acquired planar image is stored in the memory part 130 via the control part 120.
  • A case where the planar image is the WF-SLO image is described here, but the planar image is not limited to the WF-SLO image and may be an AO-SLO image obtained by capturing a wide field angle at a reduced resolution.
  • In Step S240, the position alignment part 142 performs position alignment between the WF-SLO image stored in the memory part 130 and the OCT tomographic images. Further, the regions in which the IS/OS exists, which have been identified in the OCT tomographic images and acquired in Step S520, are also associated with positions on the WF-SLO image.
  • the above-mentioned position alignment includes, in the case where the OCT tomographic images are the 3D tomographic images, a method in which a projection image is generated by adding the 3D tomographic images in the depth direction of the retina, and in which position alignment between the projection image and the WF-SLO image is performed based on feature amounts such as vascular structure and the like.
  • Alternatively, an SLO image (planar image) captured simultaneously with the OCT tomographic images may be acquired together with them in Step S210, and position alignment between that SLO image and the WF-SLO image may be performed in advance, based on which the position alignment between the WF-SLO image and the OCT tomographic images may be performed.
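  • As a minimal sketch of the projection-based alignment, the snippet below sums a 3D OCT volume along the depth axis to form an en-face projection and estimates a translation-only shift to the WF-SLO image by phase correlation; it assumes both images share the same pixel scale, and the function names are illustrative. A practical implementation may instead match vascular features and also recover rotation and scaling.

```python
import numpy as np

def project_volume(oct_volume):
    """Depth projection of a 3D OCT volume (depth x height x width) by summing
    along the depth axis, giving an en-face image comparable to the WF-SLO."""
    return np.asarray(oct_volume, dtype=float).sum(axis=0)

def align_by_phase_correlation(wf_slo, projection):
    """Estimate the (row, col) translation mapping the OCT projection onto the
    WF-SLO image; translation-only, same pixel scale assumed."""
    a = (wf_slo - wf_slo.mean()) / (wf_slo.std() + 1e-9)
    b = (projection - projection.mean()) / (projection.std() + 1e-9)
    # Normalized cross-power spectrum -> sharp correlation peak at the shift.
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b, s=a.shape))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak location to a signed shift.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, a.shape)]
    return tuple(shift)
```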
  • the regions in which the IS/OS exists, which have been associated on the WF-SLO image as described above, are stored in the memory part 130 via the control part 120. Further, the regions are displayed on the display part 4 such as a monitor via the output part 150 while being superimposed on the WF-SLO image. A mode and the like of the display are selected by a module region functioning as a display control unit in the control part 120, and the display part 4 is instructed on the display. Moreover, display by the display part 4 of the AO-SLO images acquired by the planar image taking device 3 is also executed by the display control unit.
  • In Step S250, the information acquiring part 110 acquires information on the image taking position of the AO-SLO specified by the user. Then, the acquired image taking position is stored in the memory part 130 via the control part 120.
  • In Step S260, the determining part 143 determines whether or not the IS/OS exists at the image taking position of the AO-SLO acquired in Step S250. In a case where the IS/OS exists at the image taking position, the processing proceeds to Step S270. In a case where the IS/OS does not exist, the processing proceeds to Step S280. Note that, in this embodiment, in the case of an image taking position at which the region in which the IS/OS exists and the region in which the IS/OS does not exist are mixed, the image taking position is processed as the case where the IS/OS does not exist.
  • In Step S270, the AO control part 160 calculates the Zernike coefficients in order to perform the aberration correction at the image taking position of the AO-SLO acquired in Step S250.
  • Here, the calculation is performed on the precondition that the IS/OS exists.
  • The information acquiring part 110 acquires the information obtained by a Hartmann-Shack wavefront sensor (not shown) from the reflected light of the measuring light that irradiates the image taking position on the fundus of the eye to be inspected. More specifically, the focal position of each microlens of a microlens array (not shown), the shift amounts Δx and Δy from the corresponding reference point position (the focal position in the case of no aberration), and the focal length f of the microlens are acquired. Then, the acquired values of the shift amounts and the focal length are stored in the memory part 130 via the control part 120.
  • When the wavefront of the reflected light from the fundus of the eye to be inspected is represented by W(X, Y), W may be polynomially approximated by Zernike polynomials as in equation (1) below. Further, the measured shift amounts and the wavefront are related by the partial differential equations (2) and (3). The acquired shift amounts and focal length f are used, together with least square approximation in which the squares of the errors of the approximations obtained by substituting equation (1) into equations (2) and (3) are minimized, to calculate the Zernike coefficients C_n^l.
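  • Equations (1) to (3) are not reproduced in this text; as an assumption about their standard form, the usual Zernike and Hartmann-Shack relations that this passage describes are written below, where Z_n^l are the Zernike polynomials, C_n^l the coefficients, Δx and Δy the spot shifts, and f the microlens focal length (the exact normalization used in the patent may differ).

```latex
\begin{align}
W(X, Y) &= \sum_{n}\sum_{l} C_n^{l}\, Z_n^{l}(X, Y) \tag{1}\\
\frac{\partial W(X, Y)}{\partial X} &= \frac{\Delta x}{f} \tag{2}\\
\frac{\partial W(X, Y)}{\partial Y} &= \frac{\Delta y}{f} \tag{3}
\end{align}
```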
  • The Zernike coefficients C_n^l thus acquired are stored in the memory part 130 via the control part 120.
  • In Step S280, the AO control part 160 calculates the Zernike coefficients in order to perform the aberration correction at the image taking position of the AO-SLO acquired in Step S250.
  • Here, the calculation is performed on the precondition that the IS/OS does not exist at the image taking position.
  • The information acquiring part 110 acquires the information obtained by the above-mentioned Hartmann-Shack wavefront sensor from the reflected light of the measuring light that irradiates the image taking position on the fundus of the eye to be inspected. Then, the acquired values of the shift amounts and the focal length are stored in the memory part 130 via the control part 120.
  • As in Step S270, the acquired values of the shift amounts and the focal length are used to calculate the Zernike coefficients, but in this calculation a fixed value is used for the Zernike coefficient C_2^0 corresponding to the focus power. More specifically, in calculating the Zernike coefficients by the least square method, C_2^0 is not treated as a variable.
  • In this case, the reflected light from the fundus of the eye to be inspected may be regarded as reflected light mainly from the RPE.
  • Alternatively, as in Step S270, all Zernike coefficients may be calculated, and the determined C_2^0 may then be increased by a value corresponding to a shift of the focus position to the estimated IS/OS position.
  • In the above description, the Zernike coefficients are acquired while using a part of the aberration correction parameters as preferred correction parameters, but correction parameters other than the coefficients may be used.
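  • A minimal sketch of the least-squares fit used in Steps S270 and S280, assuming a precomputed matrix of analytic Zernike derivatives at the lenslet positions (a hypothetical helper, not defined in the patent); passing a fixed defocus value reproduces the Step S280 behavior of keeping C_2^0 out of the fit.

```python
import numpy as np

def fit_zernike_coeffs(dx, dy, f, grad_x, grad_y, fixed=None):
    """Least-squares Zernike fit from Hartmann-Shack spot shifts.

    dx, dy   : spot shifts per lenslet, shape (K,)
    f        : microlens focal length
    grad_x/y : analytic derivatives dZ_j/dX, dZ_j/dY of each Zernike mode at the
               lenslet centers, shape (K, J); a hypothetical precomputed basis.
    fixed    : optional {mode_index: value}, e.g. {defocus_index: c20_fixed},
               to hold the focus-power term C_2^0 out of the fit (Step S280).
    """
    slopes = np.concatenate([dx, dy]) / f          # measured wavefront slopes
    design = np.vstack([grad_x, grad_y])           # (2K, J) design matrix

    fixed = fixed or {}
    free = [j for j in range(design.shape[1]) if j not in fixed]

    # Move the contribution of the fixed modes to the right-hand side.
    rhs = slopes.astype(float).copy()
    for j, value in fixed.items():
        rhs -= design[:, j] * value

    coeffs = np.zeros(design.shape[1])
    for j, value in fixed.items():
        coeffs[j] = value
    coeffs[free], *_ = np.linalg.lstsq(design[:, free], rhs, rcond=None)
    return coeffs
```

  • For the alternative described in the last bullet above, all coefficients would be fitted with fixed=None and an offset then added to the defocus entry of the result.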
  • In Step S290, the control part 120 transmits the calculated Zernike coefficients, which are stored in the memory part 130, to the planar image taking device 3 via the output part 150.
  • the planar image taking device 3 performs the aberration correction based on the acquired Zernike coefficients to acquire AO-SLO images.
  • the acquired AO-SLO images are stored in the memory part 130 via the image acquiring part 100 and the control part 120. Further, the acquired images and accompanying information such as the image taking positions are stored in the external database 1 via the output part 150.
  • As described above, the control method for the AO-SLO may be changed based on the layer analysis of the OCT tomographic images to capture the retina under a condition in which the photoreceptor cells are easily rendered.
  • In Embodiment 1, the example has been described in which the state of the IS/OS is discriminated based on the OCT tomographic images, and the aberration correction manner in acquiring the AO-SLO images is changed depending on the state of the IS/OS.
  • In Embodiment 2 of the present invention, a case is described in which the AO-SLO images of the region in which the photoreceptor cells exist are acquired by the method described in Embodiment 1, and the AO-SLO images are taken at a plurality of points so that, in particular, the region in which the photoreceptor cells exist and the region in which the photoreceptor cells do not exist are clarified.
  • In a hereditary disease associated with the photoreceptor cells, a specific ring-like structure is known to be observed with the progress of the disease.
  • Both a case where the photoreceptor cells are normal inside the ring-like structure but a photoreceptor cell defect occurs outside the ring-like structure, and a case where a photoreceptor cell defect occurs inside the ring-like structure but the photoreceptor cells are normal outside the ring-like structure, are known.
  • Retinitis pigmentosa is known as an example of the former, and Stargardt disease is known as an example of the latter.
  • In FIG. 7, a functional configuration of an ophthalmic apparatus 20 according to Embodiment 2 is illustrated.
  • In FIG. 7, the components denoted by reference numerals 1 to 4, 100 to 130, and 150 to 160 are the same as the components illustrated in FIG. 1, and hence a description thereof is omitted here. Moreover, a description of the layer analyzing part 141, the position alignment part 142, and the determining part 143, which are included in the image processing part 140 and perform processing similar to that in FIG. 1, is also omitted here.
  • This embodiment is different from Embodiment 1 in that the image processing part 140 includes an image taking position setting part 744.
  • the image taking position setting part 744 calculates, based on the result of associating the IS/OS obtained from the OCT tomographic images with the WF-SLO image, image taking positions on the WF-SLO image, which are required to acquire the AO-SLO images of the region in which the IS/OS exists and a region including the boundary thereof.
  • Step S210 to Step S290 are not different from the processing procedure described in Embodiment 1, and hence a description thereof is omitted here.
  • This embodiment is different in Step S850, which is performed after Step S240 in which the position alignment is performed, and processing performed in Step S850 is described in detail below.
  • In Step S850, the image taking position setting part 744 sets the image taking positions based on the image taking pattern information selected by the user via the information acquiring part 110 and on the region in which the IS/OS exists, which has been associated with the WF-SLO image in Step S240.
  • the "image taking pattern selected by the user” as used herein is selected depending on a defect state of the photoreceptor cells, which is different for each disease, and includes a type that covers the entire region in which the IS/OS exists and a type that covers only the boundary.
  • In FIG. 9, an example is illustrated in which, with respect to an eye with retinitis pigmentosa, image taking positions that cover the entire region in which the IS/OS exists are presented.
  • The region indicated by the gray circle near the fovea in FIG. 9 indicates the region in which the IS/OS exists, which has been acquired by the OCT. Note that, in this embodiment, in consideration of the possibility that the image taking positions are shifted due to flicks of the eye or the like, and for the purpose of securing an overlapping region for the position alignment, the image taking positions of the individual AO-SLO images are determined so as to overlap each other by about 15% of the capturing field angle.
  • For example, the image taking positions of the respective adjacent AO-SLO images are determined with an overlap corresponding to 60 pixels. Further, the image taking positions on the outermost portion are set to be at, or to include, the outside of the region in which the IS/OS exists, which has been associated with the WF-SLO image in Step S240. Including the range from the inside to the outside of the region in which the IS/OS exists in the image taking positions in this manner allows the AO-SLO images of the region in which the photoreceptor cells exist and of its boundary to be acquired, as sketched below.
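  • A minimal sketch of this grid placement, assuming the IS/OS region is available as a boolean mask on the WF-SLO image and the AO-SLO field size is expressed in WF-SLO pixels; the field size of 400 pixels is only an illustrative value consistent with a 60-pixel overlap being about 15%, and the function name is hypothetical.

```python
import numpy as np

def tile_positions(isos_mask, fov=400, overlap_ratio=0.15):
    """Place AO-SLO tile centers (in WF-SLO pixels) so that adjacent tiles
    overlap by about 15% of the field and the outermost tiles extend past the
    region in which the IS/OS exists (the FIG. 9 pattern).

    isos_mask : 2D boolean mask of the IS/OS region on the WF-SLO image.
    fov       : AO-SLO field size in WF-SLO pixels (assumed square).
    """
    step = int(round(fov * (1.0 - overlap_ratio)))   # e.g. 400 px field, 60 px overlap
    half = fov // 2
    ys, xs = np.where(isos_mask)
    # One tile margin beyond the bounding box so the boundary itself is covered.
    y0, y1 = ys.min() - step, ys.max() + step
    x0, x1 = xs.min() - step, xs.max() + step

    centers = []
    for cy in range(y0, y1 + 1, step):
        for cx in range(x0, x1 + 1, step):
            tile = isos_mask[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
            if tile.size and tile.any():             # keep tiles touching the region
                centers.append((cy, cx))
    return centers
```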
  • In FIG. 10, an example is illustrated in which, with respect to the eye with retinitis pigmentosa, image taking positions that cover the boundary between the region in which the IS/OS exists and the region in which the IS/OS does not exist are presented.
  • the reference position is moved from the fovea to a nose side direction to search for an image taking position serving as the boundary of the IS/OS, and the position is set as a first image taking position.
  • A method is contemplated in which the boundary of the IS/OS is followed from the first image taking position, and the next image taking position is set such that its image end lies 60 pixels inside the end of the preceding AO-SLO image along the boundary of the IS/OS.
  • However, the existence of the IS/OS is uncertain at points on the boundary of the IS/OS, and there is a fear that the aberration correction may become unstable there. Therefore, a boundary obtained by reducing the boundary of the IS/OS to 90% as a whole is determined, and the image taking positions are set thereon.
  • In addition, a boundary obtained by enlarging the boundary of the IS/OS to 110% as a whole is determined, and image taking positions are also set, so as to be paired with the positions on the reduced boundary, at the intersections between the 110% boundary line and the lines extending radially from the fovea through the image taking positions set on the reduced boundary, as sketched below.
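  • A minimal sketch of this boundary-following placement, assuming the IS/OS boundary is given as an ordered contour of (y, x) points and the fovea position is known on the WF-SLO image; the 400-pixel field is again only an illustrative value, and the function name is hypothetical.

```python
import numpy as np

def boundary_positions(contour, fovea, fov=400, overlap_px=60):
    """Image taking positions along the IS/OS boundary (the FIG. 10 pattern):
    the contour is shrunk radially about the fovea to 90%, positions are sampled
    along it with ~overlap_px of overlap between adjacent fields, and paired
    positions are placed on the 110% boundary along the same radial lines.

    contour : (N, 2) array of (y, x) boundary points ordered along the boundary.
    fovea   : (y, x) coordinates of the fovea on the WF-SLO image.
    """
    fovea = np.asarray(fovea, dtype=float)
    contour = np.asarray(contour, dtype=float)
    step = fov - overlap_px                       # spacing between field centers

    inner = fovea + 0.9 * (contour - fovea)       # 90% (reduced) boundary
    seg = np.linalg.norm(np.diff(inner, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)]) # arc length along the boundary
    idx = np.searchsorted(arc, np.arange(0.0, arc[-1], step))
    inner_pos = inner[np.clip(idx, 0, len(inner) - 1)]

    # Paired positions: extend each inner position radially from the fovea out
    # to the 110% boundary (scale 1.1/0.9 relative to the 90% point on the same ray).
    outer_pos = fovea + (1.1 / 0.9) * (inner_pos - fovea)
    return inner_pos, outer_pos
```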
  • the image taking positions set as described above are stored in the memory part 130 via the control part 120. Further, the image taking positions are displayed on the display part 4 such as the monitor via the output part 150 while being superimposed on the WF-SLO image.
  • That is, the display part 4 functions as a presentation unit configured to present to the user, when a lesioned part of the eye to be inspected extracted from an OCT image is to be acquired as a planar image, the lesioned part as a region in which the aberration correction may not work. In this case, the user may capture the set image taking positions directly in order, or may select a new image taking position.
  • determining the image taking positions based on the IS/OS information acquired from the OCT tomographic images allows the photoreceptor cell image to be acquired effectively over the entire range in which the photoreceptor cells exist, or allows the boundary of the range in which the photoreceptor cells exist to be acquired correctly.
  • Near the boundary of the region in which the IS/OS exists, the focus position may become unstable. Therefore, the image taking is performed at the focus position acquired in a nearby region in which the IS/OS exists, and at a plurality of focus positions above and below it in the depth direction of the OCT image or the planar image. Comparing the planar images at all the focus positions obtained in this manner in terms of the state of the photoreceptor cells allows a more correct boundary to be identified.
  • In the embodiments described above, the state of the IS/OS is discriminated based on the OCT tomographic images, and in the case where the IS/OS exists, the Zernike coefficients that most closely approximate the wavefront of the reflected light from the fundus of the eye to be inspected are acquired in Step S270 to perform the aberration correction.
  • This method brings the photoreceptor cell layer into focus, and the possibility that a good image of the photoreceptor cell layer may be acquired becomes high.
  • However, more accurate focusing may be realized by selecting, from a plurality of AO-SLO images captured while changing the focus, the focus position at which a structure characteristic of the photoreceptor cells appears most strongly.
  • For example, a method is contemplated in which frequency-converted images of the AO-SLO images captured while changing the focus are acquired, and the focus position at which the signal strength corresponding to the spatial frequency of the photoreceptor cells at the image taking position is the highest is selected.
  • With the control described above, the possibility that the photoreceptor cells are correctly rendered is high, but there is also a possibility that the images are blurred and the image quality is lowered due to the effect of defocusing. Therefore, acquiring images at a plurality of focus positions in advance and selecting an image based on the result of a frequency analysis of the acquired AO-SLO images allows a photoreceptor cell image having higher image quality to be acquired.
  • the planar image taking device 3 acquires the plurality of AO-SLO images having different focus positions, and determines the boundary between the region in which the IS/OS layer exists and the region in which the IS/OS layer does not exist from the plurality of AO-SLO images based on the frequency analysis. Moreover, the display control unit selects an image to be displayed based on the frequency analysis and causes the display part 4 to display the image.
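  • A minimal sketch of such frequency-based focus selection, assuming a stack of AO-SLO images taken at different foci and an expected cone-mosaic spatial frequency (in cycles per pixel) for the image taking position; the function name, band width, and scoring are illustrative assumptions.

```python
import numpy as np

def best_focus_by_cone_frequency(image_stack, cone_freq, band=0.2):
    """Pick the focus index whose image has the strongest spectral energy in a
    band around the expected photoreceptor (cone-mosaic) spatial frequency.

    image_stack : sequence of 2D AO-SLO images taken at different foci.
    cone_freq   : expected cone-mosaic frequency in cycles/pixel at this position.
    band        : relative half-width of the frequency band to integrate.
    """
    scores = []
    for img in image_stack:
        img = np.asarray(img, dtype=float)
        img = img - img.mean()
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)
        ring = (radius > cone_freq * (1 - band)) & (radius < cone_freq * (1 + band))
        # Relative energy in the cone-frequency ring; higher = sharper mosaic.
        scores.append(power[ring].mean() / (power.mean() + 1e-12))
    return int(np.argmax(scores)), scores
```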
  • In Embodiment 1, in analyzing the OCT tomographic images in Step S220, extraction of the IS/OS layer and the RPE layer is performed.
  • the OCT tomographic images also contain information on a lesion such as vitiligo, blood vessels, and the like, and hence those pieces of information may be utilized to acquire the AO-SLO images having higher image quality.
  • the reflected light from the lesion enters the Hartmann-Shack wavefront sensor.
  • the lesion such as the vitiligo is extracted and stored.
  • the information on the lesion is also associated with the WF-SLO image.
  • the information is stored in the memory part 130, and at the same time, presented to the user as reference information in selecting the image taking position.
  • In Step S250, a method is contemplated in which the possibility that the capturing may fail due to the effect of the lesion is presented to the user, and a position that is near the image taking position but outside the lesion is presented to the user as an alternative candidate.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD) TM ), a flash memory device, a memory card, and the like.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Provided is an ophthalmic apparatus capable of acquiring preferred AO-SLO images irrespective of whether or not a photoreceptor cell defect occurs. The ophthalmic apparatus includes: a tomographic image acquiring unit configured to acquire tomographic images of a fundus of an eye to be inspected; a layer analyzing unit configured to extract information on a state of photoreceptor cells from the tomographic images; a position alignment unit configured to associate the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and a planar image acquiring unit configured to acquire planar images at the image taking positions on the fundus of the eye to be inspected while changing an aberration correction manner at the image taking positions based on the information on the state of the photoreceptor cells associated by the position alignment unit.

Description

OPHTHALMIC APPARATUS AND CONTROL METHOD THEREFOR
The present invention relates to an ophthalmic apparatus used in particular for ophthalmic diagnosis and treatment, and a control method for the ophthalmic apparatus.
An examination of the fundus portion is widely performed for the purpose of early diagnosis of life-style related diseases or of various diseases ranking high among the causes of blindness. As an ophthalmic apparatus using the principle of a confocal laser microscope, a scanning laser ophthalmoscope (hereinafter referred to as "SLO") is known. The SLO is an ophthalmic apparatus configured to perform raster scanning on a fundus with laser light, which is measuring light, and to acquire a planar image from the intensity of the return light at high resolution and high speed.
In recent years, there has been developed an adaptive optics SLO (hereinafter referred to as "AO-SLO") including an adaptive optics system in which an aberration of the eye to be inspected is measured in real time by a wavefront sensor, and the aberrations of the measuring light and its return light that occur at the eye to be inspected are corrected by a wavefront correction device. The use of the AO-SLO enables a planar image with a high transverse resolution to be acquired. With such an AO-SLO, it is further attempted to diagnose a disease and evaluate a drug response by extracting photoreceptor cells from an acquired planar image of the retina, and analyzing the density and distribution of the photoreceptor cells.
In a case where a change in photoreceptor cell due to the progress of a disease or a drug response is evaluated, it is required to detect the photoreceptor cell from an AO-SLO image acquired with a high resolution. In order to capture the photoreceptor cell, the focus is required to be set on a layer in which the photoreceptor cell exists to acquire an image, but a focus position cannot be correctly set in some cases if the photoreceptor cell layer is damaged. In a method disclosed in Patent Literature 1, a determination process relating to a normal/abnormal region in visual performance is performed based on a tomographic image captured by optical coherence tomography (hereinafter referred to as "OCT"). A process of acquiring a laser irradiation position for a laser treatment is performed by displaying the result of the determination process on a front image of a fundus in a superimposed manner.
[PTL 1] Japanese Patent Application Laid-Open No. 2012-135550
[PTL 2] Japanese Patent Application Laid-Open No. 2012-010790
When a photoreceptor cell is detected by an AO-SLO, it is required to discriminate a case where the photoreceptor cell is not rendered because the photoreceptor cell has a defect from a case where the photoreceptor cell is not rendered because of a capturing reason such as defocusing. If the photoreceptor cell is affected by a disease, capturing by the AO-SLO is generally difficult in many cases. For example, in a case where a structure such as photoreceptor cells is not captured, it is difficult to determine whether the capturing has failed or whether a state in which the photoreceptor cells have a defect has been correctly captured. In the method disclosed in Patent Literature 1, the normal/abnormal region in visual performance is determined by OCT, but the case where image taking is performed by the AO-SLO is not taken into consideration. In Patent Literature 2, a control method for an aberration detection optical system of the AO-SLO in a case where the image taking position is changed is described. However, Patent Literature 2 merely discloses control based on luminance values of an acquired Hartmann image, and makes no reference to the use of knowledge obtained from another modality.
In view of the above-mentioned problems, the present invention has an object to provide an ophthalmic apparatus including an AO-SLO, which is capable of more accurately rendering a state of a photoreceptor cell layer, and a control method for the ophthalmic apparatus.
In order to solve the above-mentioned problems, according to one embodiment of the present invention, there is provided an ophthalmic apparatus, including: a tomographic image acquiring unit configured to acquire tomographic images of a fundus of an eye to be inspected; a layer analyzing unit configured to extract information on a state of photoreceptor cells from the tomographic images; a position alignment unit configured to associate the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and a planar image acquiring unit configured to acquire planar images at the image taking positions on the fundus of the eye to be inspected while changing an aberration correction manner at the image taking positions based on the information on the state of the photoreceptor cells associated by the position alignment unit.
According to the one embodiment of the present invention, in a case where a photoreceptor cell layer exists, the photoreceptor cell layer may be rendered with high image quality, and in a case where the photoreceptor cell layer is damaged, a damaged state may be correctly rendered.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
FIG. 1 is a block diagram for illustrating a functional configuration of an ophthalmic apparatus 20 according to Embodiment 1 of the present invention. FIG. 2 is a flow chart for illustrating a processing procedure performed in the processing device 10 according to Embodiment 1 of the present invention. FIG. 3 is a schematic view for illustrating an example in which AO-SLO images are displayed on a WF-SLO image. FIG. 4 is a schematic view for illustrating an example of three-dimensional tomographic images (3D tomographic images) obtained by OCT. FIG. 5 is a flow chart for illustrating details of a step of tomographic image analysis in the flow chart illustrated in FIG. 2. FIG. 6A is a view for illustrating a method of extracting a region in which an IS/OS exists from the OCT tomographic images, and is a schematic view for illustrating an example in which the region in which the IS/OS exists has been identified. FIG. 6B is a schematic view for illustrating an example in which the region in which the IS/OS exists has been extracted with the use of the region identified in the OCT tomographic images. FIG. 7 is a block diagram for illustrating a functional configuration of an ophthalmic apparatus 20 according to Embodiment 2 of the present invention. FIG. 8 is a flow chart for illustrating a processing procedure performed in the processing device 10 according to Embodiment 2 of the present invention. FIG. 9 is a schematic view for illustrating an example in which image taking positions are set for whole regions in which photoreceptor cells exist. FIG. 10 is a schematic view for illustrating an example in which image taking positions are set for a boundary of the regions in which the photoreceptor cells exist.
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
Embodiment 1
In Embodiment 1 of the present invention, in acquiring a planar image of a retina by an AO-SLO, a control method for the AO-SLO is changed based on a result of performing a layer analysis on OCT tomographic images. In this manner, the retina may be satisfactorily captured by the AO-SLO in a condition in which photoreceptor cells are easily rendered.
More specifically, a photoreceptor cell layer, an RPE layer, and the like are extracted from the OCT tomographic images, and a region in which the photoreceptor cell layer normally exists is identified. In a case where the photoreceptor cell layer is normally extracted as a layer, a state of the photoreceptor cells may be estimated to be a state in which the photoreceptor cells may be satisfactorily captured as the planar image. In this embodiment, whether or not the photoreceptor cell layer can be extracted, whether or not the photoreceptor cells can be extracted from actual tomographic images, or the like may be grasped as information on the state of the photoreceptor cells, and the information is associated with image taking positions of subregions, which are AO-SLO images in a planar image of a fundus, by a method to be described below. More specifically, the region that has been identified as the region in which the photoreceptor cells normally exist is presented to a user as being displayed on an SLO image (wide-field SLO; hereinafter referred to as "WF-SLO image") on which aberration correction is not performed.
The user selects an image taking position based on the WF-SLO image on which the region in which the photoreceptor cell layer exists is presented. When the user specifies the image taking position, and when the photoreceptor cell layer is normal based on the state of the photoreceptor cell layer for the specified position, normal aberration correction is performed and an AO-SLO image is acquired at the position. In a case where the photoreceptor cell layer at the image taking position specified by the user has a defect or a damage, the aberration correction is performed while fixing a focus position during the aberration correction to an estimated position of the photoreceptor cell layer, and the AO-SLO image is acquired. Alternatively, a plurality of AO-SLO images having different focus positions in a depth direction may be acquired to select an image having a preferred resolution, or further, associated information may be obtained while changing the image taking position, and then interpolation or the like may be performed based on those pieces of information.
The "normal aberration correction" as used herein is processing in which wavefront aberration measured by a Hartmann-Shack sensor is used to determine Zernike coefficients up to the sixth order by least square approximation. It should be noted, however, that the order of the Zernike coefficients changes depending on intended accuracy of approximation and processing time, and is not limited to the method described above.
Performing the image taking while changing the control method for the AO-SLO depending on the state of the photoreceptor cell layer as described above allows the photoreceptor cell layer in the case where the normal photoreceptor cell layer exists, and a position at which the photoreceptor cell layer is likely to exist in the case where the photoreceptor cell layer is damaged, to be captured.
Now, details of Embodiment 1 are described.
<Planar image>
In FIG. 3, an image obtained by superimposing the AO-SLO images on the WF-SLO image is schematically illustrated. As illustrated in FIG. 3, the AO-SLO images have a high resolution but are small in capturing field angle due to the aberration correction, and in contrast, the WF-SLO image does not have a high resolution because of being the SLO image without the aberration correction. However, the WF-SLO image may capture a wide range of the fundus, and an entire image of the retina may be obtained. Note that, the AO-SLO images and the WF-SLO image are referred to as "planar images", and the AO-SLO images are distinguished from the WF-SLO image for convenience as planar images of the subregions.
<OCT tomographic images>
The OCT, in which optical interference is utilized to capture tomographic images of the fundus, allows the state of the internal structure of the retina to be observed three-dimensionally, and hence is widely used in ophthalmic diagnosis and treatment. In this embodiment, a tomographic image acquiring unit is configured to acquire the tomographic images of the fundus of the eye to be inspected by the OCT. In FIG. 4, the tomographic images around a macula, which are acquired by the OCT, are schematically illustrated. As illustrated in FIG. 4, the tomographic images (B-scan images) are denoted by T1 to Tn, respectively, and information on the retina is represented three-dimensionally by a tomographic image group obtained by collecting the plurality of tomographic images.
In FIG. 4, boundaries of layer structures of the retina are represented by L1 to L6, respectively. Here, the boundary L1 indicates a surface of an inner limiting membrane (hereinafter referred to as "ILM"), the boundary L2 indicates a boundary (hereinafter referred to as "nerve fiber layer boundary") between a nerve fiber layer (hereinafter referred to as "NFL") and a layer thereunder, and the nerve fiber layer is illustrated as a region L2’. Moreover, the boundary L3 indicates a boundary (hereinafter referred to as "inner plexiform layer boundary") between an inner plexiform layer and a layer thereunder, the boundary L4 indicates a boundary (hereinafter referred to as "outer plexiform layer boundary") between an outer plexiform layer and a layer thereunder, the boundary L5 indicates a boundary of an interface between inner and outer segments of the photoreceptors (hereinafter referred to as "IS/OS"), and the boundary L6 indicates a boundary of retinal pigment epithelium (hereinafter referred to as "RPE"). Note that, depending on performance of the OCT, the boundary between an IS/OS layer and the RPE layer may be indistinguishable, but the accuracy does not constitute a problem in the present invention. Moreover, the ILM and the IS/OS may be seen as layers, but are regarded as the boundaries because of being very thin.
<Configuration of Ophthalmic Apparatus>
FIG. 1 is a block diagram for illustrating a functional configuration of an ophthalmic apparatus 20 according to Embodiment 1.
In FIG. 1, the ophthalmic apparatus 20 includes a processing device 10, a planar image taking device 3, and a display part 4. The processing device 10 performs image processing and computes aberration correction coefficients. The planar image taking device 3 captures planar images such as the AO-SLO image and the WF-SLO image. The display part 4 displays the computation results of the processing device 10 and the like to a user. The ophthalmic apparatus 20 is also connected to an external database (hereinafter referred to as "DB") 1. This allows the processing device 10 to acquire images and patient data acquired by other modalities, past images and data acquired by another planar image taking device, and the tomographic images obtained by an OCT 2, which are stored in the DB 1.
The processing device 10 includes an image acquiring part 100, an information acquiring part 110, a control part 120, a memory part 130, an image processing part 140, an output part 150, and an AO control part 160. The image acquiring part 100 acquires the tomographic images acquired by the OCT 2 via the DB 1. The acquired OCT tomographic images are stored in the memory part 130 via the control part 120. The information acquiring part 110 acquires an input by the user and measurement data during the aberration correction.
The image processing part 140 includes a layer analyzing part 141, a position alignment part 142, and a determining part 143. The layer analyzing part 141 extracts, as the information on the state of the photoreceptor cells, information on whether or not the IS/OS layer can be extracted from the tomographic images, whether or not the IS/OS layer exists in the tomographic images, or the like, as described later. The position alignment part 142 associates the information on the state of the photoreceptor cells with the information on the image taking positions of the subregions on the planar image of the fundus acquired by the WF-SLO. The image processing part 140 extracts the photoreceptor cell layer and the RPE layer from the OCT tomographic images as described above to identify whether or not the photoreceptor cell layer is in a normal state, and in the case where the photoreceptor cell layer has a defect or other such damage, acquires the estimated position of the photoreceptor cell layer. Further, the image processing part 140 determines positions on the WF-SLO image to which the acquired OCT tomographic images correspond, and associates the region in which the photoreceptor cells are normal with positions on the WF-SLO image. Then, based on the above-mentioned result of the layer analysis, the control method for the AO-SLO image taking is determined according to the state of the photoreceptor cell layer at the image taking position specified by the user.
The output part 150 outputs, to a monitor or the like, the region in which the normal photoreceptor cell layer exists, which is associated with the WF-SLO image. The output part 150 also specifies, to various modalities such as a configuration for performing the aberration correction (not shown), a control method for capturing an AO-SLO image at the image taking position specified by the user. The AO control part 160 performs the calculation of the Zernike coefficients associated with the aberration correction. In this embodiment, the AO control part 160 is included in addition to the planar image taking device 3 to form a planar image acquiring unit configured to acquire the planar images of the subregions of the fundus of the eye to be inspected. In the planar image acquiring unit, based on the information on the state of the photoreceptor cells associated by the position alignment part 142, the AO-SLO images are acquired while the aberration correction method is changed at the image taking position on the WF-SLO image.
<Processing Procedure of Ophthalmic Apparatus>
Next, a processing procedure of the ophthalmic apparatus 20 according to Embodiment 1 is described referring to a flowchart of FIG. 2.
<Step S210>
In Step S210, the image acquiring part 100 acquires the OCT tomographic images of the retina of the eye to be inspected, which is taken by the OCT 2 and stored in the external DB 1. Then, the acquired OCT tomographic images are stored in the memory part 130 via the control part 120.
<Step S220>
In Step S220, the image processing part 140 performs the layer analysis on the OCT tomographic images stored in the memory part 130. The retina is formed of a plurality of layer structures as illustrated in FIG. 4, and the IS/OS, which indicates the state of the photoreceptor cells, and the RPE layer under the IS/OS are extracted here. Then, the analysis is performed on those layers. Next, a procedure of tomographic image analysis in Step S220 is described referring to a flow chart of FIG. 5.
<Step S510>
In Step S510, the layer analyzing part 141 detects the IS/OS (boundary L5) and the RPE (boundary L6) from each of the tomographic images stored in the memory part 130. Various segmentation methods for the layers are known. In this embodiment, a case is described in which an edge enhancement filter is used to extract edges as layer-boundary candidates, and the detected edges are associated with the layer boundaries using medical knowledge on the IS/OS. Note that the detection of the IS/OS and the RPE is described here, but the other layer boundaries may be detected by a similar method.
First, the layer analyzing part 141 performs smoothing filter processing on the tomographic image to remove noise components. Then, edge detection filter processing is performed to detect edge components from the tomographic image and hence extract edges corresponding to the boundaries of the layers. Further, a background region is identified from the tomographic image on which the edge detection has been performed to extract luminance values of the background region from the tomographic image. Next, peak values of luminances determined to be the edge components and luminance values between peaks are used to determine the boundaries of the layers.
For example, the layer analyzing part 141 detects the edges in a depth direction of the fundus from a hyaloid body side to determine the boundary L1 (ILM) between the hyaloid body and a retina layer based on the peak values of the luminances determined to be the edge components, luminance values above and below the peak values, and the luminance values of the background. Further, the layer analyzing part 141 detects the edges in the depth direction of the fundus to determine the RPE (boundary L6) with reference to the peak values of the luminances of the edge components, the luminance values between the peaks, and the luminance values of the background. Subsequently, the layer analyzing part 141 detects the edges in a shallow direction of the fundus from a scleral side to determine the IS/OS (boundary L5) based on the peaks of the luminances determined to be the edge components, the luminance values above and below the peaks, and the luminance values of the background. Through the above-mentioned processing, the boundaries of the layers may be detected.
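As a non-limiting illustration of the edge-based segmentation described above, the following sketch detects the ILM, the IS/OS, and the RPE in a single B-scan by smoothing, depth-direction edge detection, and comparison against a background luminance level. The function name, filter sizes, and thresholds are assumptions made for illustration and do not represent the actual implementation of the layer analyzing part 141.

```python
# Minimal sketch of edge-based layer boundary detection for one B-scan.
# Filter sizes and thresholds are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def detect_ilm_isos_rpe(bscan, background_percentile=20):
    """bscan: 2D array (depth x lateral) of OCT intensities.
    Returns ILM, IS/OS, and RPE depth indices per A-scan (-1 where not found)."""
    smoothed = gaussian_filter(bscan.astype(float), sigma=2)      # noise removal
    edges = sobel(smoothed, axis=0)                                # depth-direction edges
    background = np.percentile(smoothed, background_percentile)   # rough background level

    n_depth, n_ascan = smoothed.shape
    ilm = np.full(n_ascan, -1)
    isos = np.full(n_ascan, -1)
    rpe = np.full(n_ascan, -1)

    for x in range(n_ascan):
        profile = smoothed[:, x]
        edge = edges[:, x]
        bright = np.where((np.abs(edge) > edge.std()) & (profile > 2 * background))[0]
        if bright.size == 0:
            continue
        ilm[x] = bright[0]                  # first strong edge from the vitreous side
        rpe[x] = bright[-1]                 # last strong edge from the scleral side
        # IS/OS: strongest remaining bright edge between ILM and RPE
        inner = bright[(bright > ilm[x]) & (bright < rpe[x])]
        if inner.size:
            isos[x] = inner[np.argmax(profile[inner])]
    return ilm, isos, rpe
```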
Control points as positions of the RPE (boundary L6) and the IS/OS (boundary L5), which have been detected as described above, in the depth direction of the retina are transmitted to the control part 120 and stored in the memory part 130.
<Step S520>
In Step S520, the layer analyzing part 141 identifies the regions in which the IS/OS exists based on a result of the extraction in Step S510 to discriminate between the regions in which the IS/OS exists and a region in which the IS/OS does not exist.
More specifically, ranges in which the IS/OS has been identified in Step S510 are determined, and the ranges are connected with a plurality of tomographic images to form the regions in which the IS/OS exists. In FIG. 6A, a range in which the IS/OS has been identified in the tomographic image Tn is illustrated as a region Rn. Further, in FIG. 6B, an example in which the regions in which the IS/OS exists have been identified as regions R1 to Rn determined from the plurality of tomographic images is illustrated.
The regions thus identified as the regions in which the IS/OS exists are transmitted to the control part 120 and stored in the memory part 130.
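The connection of the per-B-scan ranges into en-face regions may be sketched as follows, assuming the IS/OS detection result of Step S510 is available as one array of depth indices per B-scan; the morphological closing and the labelling step are illustrative choices rather than the method prescribed by the embodiment.

```python
# Sketch: build an en-face map of where the IS/OS was detected across B-scans.
import numpy as np
from scipy.ndimage import binary_closing, label

def isos_regions(isos_per_bscan):
    """isos_per_bscan: one 1D array of IS/OS depth indices (-1 = not found) per B-scan T1..Tn.
    Returns a labelled en-face map of the regions in which the IS/OS exists."""
    exists = np.stack([depths >= 0 for depths in isos_per_bscan])  # (n_bscan, n_ascan) mask
    exists = binary_closing(exists, structure=np.ones((3, 3)))     # bridge small detection gaps
    regions, n_regions = label(exists)                             # connected regions R1..Rn
    return regions, n_regions
```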
<Step S530>
In Step S530, the layer analyzing part 141 determines, based on the regions R1 to Rn that have been identified as the regions in which the IS/OS exists in Step S520, whether or not the IS/OS exists in whole regions of which the AO-SLO images are to be acquired. In a case where it is determined that the IS/OS exists in the whole regions, the processing is directly ended. In a case where there is a region in which the IS/OS does not exist, the flow proceeds to Step S540.
<Step S540>
In Step S540, the layer analyzing part 141 estimates, based on the regions in which the IS/OS exists, which have been determined in Step S520, a position of the IS/OS for the region in which the IS/OS does not exist. More specifically, within the region in which the IS/OS exists and in the vicinity of its boundary with the region in which the IS/OS does not exist, a distance D between the RPE layer and the IS/OS, which have been extracted in Step S510, is determined. Then, the position of the RPE is identified in the region in which the IS/OS does not exist, and a position shifted upward (toward the shallower side) by D from the RPE is set as the estimated position at which the IS/OS exists.
In other words, in the case where the IS/OS does not exist, the layer analyzing part 141 extracts the RPE layer, and based on the distance D in the vicinity of the boundary between the region in which the IS/OS layer exists and the region in which the IS/OS layer does not exist, estimates the position at which the IS/OS layer exists in the regions in which the IS/OS layer does not exist. The thus estimated control point of the IS/OS is transmitted to the control part 120 and stored in the memory part 130. After the storing, the flow proceeds to Step S230.
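The estimation of Step S540 may be sketched as follows for a single B-scan, where the distance D is taken as the average IS/OS-to-RPE separation over the valid A-scans nearest the defect; the neighbourhood width is an assumed parameter and does not reflect the actual implementation.

```python
# Sketch: fill in missing IS/OS positions from the RPE and the local IS/OS-to-RPE distance D.
import numpy as np

def estimate_isos(isos, rpe, neighbourhood=10):
    """isos, rpe: 1D depth indices per A-scan (-1 where the layer was not detected).
    Returns a copy of isos with missing positions filled with the estimate RPE - D."""
    estimated = isos.copy()
    missing = np.where(isos < 0)[0]
    valid = np.where((isos >= 0) & (rpe >= 0))[0]
    if missing.size == 0 or valid.size == 0:
        return estimated
    for x in missing:
        if rpe[x] < 0:
            continue                                   # no RPE detected here; leave unfilled
        # distance D averaged over the valid A-scans nearest the defect boundary
        nearest = valid[np.argsort(np.abs(valid - x))][:neighbourhood]
        d = np.mean(rpe[nearest] - isos[nearest])
        estimated[x] = int(round(rpe[x] - d))          # move up (shallower) from the RPE by D
    return estimated
```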
The case where the acquired OCT tomographic images are 3D tomographic images has been described above, but the OCT tomographic images as a target are not limited to the 3D tomographic images. For example, the OCT tomographic images may be tomographic images acquired by radial scanning in which a plurality of images are captured radially around the macula, or may be one tomographic image. In the case of the radial scanning, the ranges in which the IS/OS exists, which are determined from the respective tomographic images, may be extrapolated in the angular direction to identify the regions in which the IS/OS exists. Moreover, in the case of determination from one tomographic image, the range in which the IS/OS exists, which is identified in that tomographic image, may be approximated by a circle around the macula to acquire the region of the normal photoreceptor cell layer.
<Step S230>
In Step S230, the image acquiring part 100 acquires a planar image of the retina of the eye to be inspected. Then, the acquired planar image is stored in the memory part 130 via the control part 120. A case where the planar image is the WF-SLO image is described here, but the planar image is not limited to the WF-SLO image and may be, for example, an AO-SLO image obtained by capturing a wide field angle at a reduced resolution.
<Step S240>
In Step S240, the position alignment part 142 performs position alignment between the WF-SLO image stored in the memory part 130 and the OCT tomographic images. Further, the regions in which the IS/OS exists, which have been identified in the OCT tomographic images and acquired in Step S520, are also associated with positions on the WF-SLO image.
Note that, in the case where the OCT tomographic images are the 3D tomographic images, the above-mentioned position alignment includes a method in which a projection image is generated by adding the 3D tomographic images in the depth direction of the retina, and position alignment between the projection image and the WF-SLO image is performed based on feature amounts such as the vascular structure. In the case where the OCT tomographic images are those of the radial scanning or are one tomographic image, an SLO image (planar image) captured at the same time as the OCT tomographic images may also be acquired in Step S210, and position alignment between that SLO image and the WF-SLO image may be performed in advance, based on which the position alignment between the WF-SLO image and the OCT tomographic images may be performed.
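For the 3D case, the projection-and-registration idea may be sketched as below; phase correlation on the depth-projected volume is one possible realisation and is not asserted to be the method used by the position alignment part 142, and the rescaling step assumes that the pixel scales of the two images are known.

```python
# Sketch: register the depth-projected OCT volume to the WF-SLO image.
import numpy as np
from skimage.registration import phase_cross_correlation
from skimage.transform import resize

def align_oct_to_wfslo(oct_volume, wfslo):
    """oct_volume: (depth, y, x) OCT intensities; wfslo: 2D WF-SLO image.
    Returns the (dy, dx) translation of the OCT projection relative to the WF-SLO image."""
    projection = oct_volume.sum(axis=0)                  # add the volume in the depth direction
    projection = resize(projection, wfslo.shape)         # assumed: both images share one scale
    shift, error, _ = phase_cross_correlation(wfslo.astype(float),
                                              projection.astype(float))
    return shift
```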
The regions in which the IS/OS exists, which have been associated on the WF-SLO image as described above, are stored in the memory part 130 via the control part 120. Further, the regions are displayed on the display part 4 such as a monitor via the output part 150 while being superimposed on the WF-SLO image. A mode and the like of the display are selected by a module region functioning as a display control unit in the control part 120, and the display part 4 is instructed on the display. Moreover, display by the display part 4 of the AO-SLO images acquired by the planar image taking device 3 is also executed by the display control unit.
<Step S250>
In Step S250, the information acquiring part 110 acquires information on the image taking position of the AO-SLO specified by the user. Then the acquired image taking position is stored in the memory part 130 via the control part 120.
<Step S260>
In Step S260, the determining part 143 determines whether or not the IS/OS exists at the image taking position of the AO-SLO acquired in Step S250. In a case where the IS/OS exists at the image taking position, the processing proceeds to Step S270. In a case where the IS/OS does not exist, the processing proceeds to Step S280. Note that, in this embodiment, in a case of an image taking position at which the region in which the IS/OS exists and the region in which the IS/OS does not exist are mixed, the image taking position is processed as the case where the IS/OS does not exist.
<Step S270>
In Step S270, the AO control part 160 calculates the Zernike coefficients in order to perform the aberration correction at the image taking position of the AO-SLO acquired in Step S250. Here, the calculation is performed on the precondition that the IS/OS exists.
The information acquiring part 110 acquires information obtained by a Hartmann-Shack wavefront sensor (not shown) from the light reflected from the fundus of the eye to be inspected when the image taking position is irradiated. More specifically, a focal position of each microlens of a microlens array (not shown), shift amounts Δx and Δy from the corresponding reference point position (the focal position in a case of no aberration), and a focal length f of the microlens are acquired. Then, the acquired values of the shift amounts and the focal length are stored in the memory part 130 via the control part 120.
When a wavefront of the reflected light from the fundus of the eye to be inspected is represented by W(X, Y), W may be polynomially approximated by Zernike polynomials as in the following equation (1).
$$W(X, Y) = \sum_{n}\sum_{m} C_n^m\, Z_n^m(X, Y) \qquad (1)$$
Further, shift amounts and a wavefront that are measured are expressed by the following partial differential equations (2) and (3).
$$\Delta x = f\,\frac{\partial W(X, Y)}{\partial X} \qquad (2)$$
$$\Delta y = f\,\frac{\partial W(X, Y)}{\partial Y} \qquad (3)$$
The acquired shift amounts and focal length f are used, together with least square approximation in which the squared errors obtained by substituting equation (1) into equations (2) and (3) are minimized, to calculate the Zernike coefficients $C_n^m$.
The Zernike coefficients thus acquired are stored in the memory part 130 via the control part 120.
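A sketch of the least-squares fit implied by equations (1) to (3) is given below: the measured spot shifts divided by the focal length f give the local wavefront slopes, which are fitted to the gradients of the Zernike basis. The helper zernike_gradients is hypothetical and stands for any routine that returns the X and Y derivatives of the Zernike modes up to the chosen order.

```python
# Sketch: least-squares Zernike fit from Hartmann-Shack spot shifts.
import numpy as np

def fit_zernike(dx, dy, f, lens_xy, order=6):
    """dx, dy: measured spot shifts per microlens; f: microlens focal length;
    lens_xy: (N, 2) normalised microlens coordinates. Returns the fitted coefficients."""
    # zernike_gradients is a hypothetical helper returning dZ/dX and dZ/dY,
    # each of shape (N, n_modes), for all Zernike modes up to `order`.
    gx, gy = zernike_gradients(lens_xy, order)
    A = np.vstack([gx, gy])                      # stack the x- and y-slope equations
    b = np.concatenate([dx, dy]) / f             # measured slopes, from equations (2) and (3)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```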
<Step S280>
In Step S280, the AO control part 160 calculates the Zernike coefficient in order to perform the aberration correction at the image taking position of the AO-SLO acquired in Step S250. Here, the calculation is performed on the precondition that the IS/OS does not exist at the image taking position.
The information acquiring part 110 acquires the information acquired by the above-mentioned Hartmann-Shack wavefront sensor from the reflected light from the fundus of the eye to be inspected, which irradiates the image taking position. Then, the acquired values of the shift amounts and the focal length are stored in the memory part 130 via the control part 120.
As in Step S270, the acquired values of the shift amounts and the focal length are used to calculate the Zernike coefficients, and in the calculation, a fixed value is used for the Zernike coefficient $C_2^0$ corresponding to the focus power. More specifically, in calculating the Zernike coefficients by the least square method, $C_2^0$ is not treated as a variable.
As the fixed value, one method uses, from among the image taking positions that have already been captured and whose Zernike coefficients at the time of capturing are stored in the memory part 130 or the database 1, the value of $C_2^0$ for the position that is closest to the image taking position acquired in Step S250 and at which the IS/OS exists. Alternatively, for the image taking position acquired in Step S250, a value of $C_2^0$ corresponding to the estimated position of the IS/OS determined in Step S540 may be used. Still alternatively, the Zernike coefficient at the estimated IS/OS position may be extrapolated from the Zernike coefficients in the already-captured region in which the IS/OS exists. Still more alternatively, in a case where the RPE exists at the image taking position acquired in Step S250, the reflected light from the fundus of the eye to be inspected may be regarded as reflected light mainly from the RPE. In this case, first, as in Step S270, all Zernike coefficients are calculated, and the determined $C_2^0$ may then be increased by a value corresponding to the shift of the focus position to the estimated IS/OS position. Note that the Zernike coefficients are acquired here using a part of the aberration correction parameters as preferred correction parameters, but correction parameters other than the coefficients may be used.
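One way to hold $C_2^0$ fixed while fitting the remaining coefficients, as described above, is to move the fixed defocus contribution to the right-hand side of the least-squares system. The column index used for $C_2^0$ depends on the Zernike ordering convention and is an assumption here, as is the hypothetical zernike_gradients helper introduced above.

```python
# Sketch: least-squares Zernike fit with the defocus coefficient C_2^0 held fixed.
import numpy as np

def fit_zernike_fixed_defocus(dx, dy, f, lens_xy, c20_fixed, order=6, defocus_idx=4):
    """Least-squares fit in which the defocus coefficient C_2^0 is held at c20_fixed."""
    gx, gy = zernike_gradients(lens_xy, order)   # hypothetical helper, as above
    A = np.vstack([gx, gy])
    b = np.concatenate([dx, dy]) / f
    # move the contribution of the fixed defocus term to the right-hand side,
    # then solve only for the remaining coefficients
    b_residual = b - A[:, defocus_idx] * c20_fixed
    A_free = np.delete(A, defocus_idx, axis=1)
    coeffs_free, *_ = np.linalg.lstsq(A_free, b_residual, rcond=None)
    return np.insert(coeffs_free, defocus_idx, c20_fixed)
```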
<Step S290>
In Step S290, the control part 120 transmits the calculated Zernike coefficients, which are stored in the memory part 130, to the planar image taking device 3 via the output part 150. The planar image taking device 3 performs the aberration correction based on the acquired Zernike coefficients to acquire AO-SLO images. The acquired AO-SLO images are stored in the memory part 130 via the image acquiring part 100 and the control part 120. Further, the acquired images and accompanying information such as the image taking positions are stored in the external database 1 via the output part 150.
With the above-mentioned configuration, in acquiring images of the retina by the AO-SLO, the control method for the AO-SLO may be changed based on the layer analysis of the OCT tomographic images so that the retina is captured under a condition in which the photoreceptor cells are easily rendered.
Embodiment 2
In Embodiment 1, the example has been described in which the state of the IS/OS is discriminated based on the OCT tomographic images and the aberration correction manner in acquiring the AO-SLO images is changed depending on the state of the IS/OS. In Embodiment 2 of the present invention, a case is described where the AO-SLO images of the region in which the photoreceptor cells exist are acquired by the method described in Embodiment 1, and where the AO-SLO images are taken at a plurality of points, in particular so that the region in which the photoreceptor cells exist and the region in which the photoreceptor cells do not exist are clearly distinguished.
In a hereditary disease associated with the photoreceptor cells, a specific ring-like structure is known to be observed with the progress of the disease. Depending on the disease, there is a case where the photoreceptor cells are normal inside the ring-like structure but a photoreceptor cell defect occurs outside it, and, to the contrary, a case where a photoreceptor cell defect occurs inside the ring-like structure but the photoreceptor cells are normal outside it. Retinitis pigmentosa is known as an example of the former, and Stargardt disease as an example of the latter.
In those diseases, it is important to acquire a photoreceptor cell image of the region in which the photoreceptor cells exist, and at the same time, to clearly render the boundary between the region in which the photoreceptor cells exist and the region in which the photoreceptor cells do not exist, and to correctly measure the sizes of the region in which the photoreceptor cells exist and the region in which the photoreceptor cell defect occurs.
In FIG. 7, a functional configuration of an ophthalmic apparatus 20 according to Embodiment 2 is illustrated.
In FIG. 7, components denoted by reference numerals 1 to 4, 100 to 130, and 150 to 160 are the same as the components illustrated in FIG. 1, and hence a description thereof is omitted here. Moreover, a description of the layer analyzing part 141, the position alignment part 142, and the determining part 143 included in the image processing part 140, which perform processing similar to that in FIG. 1, is also omitted here. This embodiment is different from Embodiment 1 in that the image processing part 140 includes an image taking position setting part 744. The image taking position setting part 744 calculates, based on the result of associating the IS/OS obtained from the OCT tomographic images with the WF-SLO image, image taking positions on the WF-SLO image that are required to acquire the AO-SLO images of the region in which the IS/OS exists and of a region including the boundary thereof.
Next, referring to a flow chart of FIG. 8, a processing procedure of the processing device 10 performed in Embodiment 2 is described. Note that, Step S210 to Step S290 are not different from the processing procedure described in Embodiment 1, and hence a description thereof is omitted here. This embodiment is different in Step S850, which is performed after Step S240 in which the position alignment is performed, and processing performed in Step S850 is described in detail below.
<Step S850>
In Step S850, the image taking position setting part 744 sets the image taking positions based on image taking pattern information selected by the user via the information acquiring part 110, and on the region in which the IS/OS exists, which has been associated with the WF-SLO image in Step S240. The "image taking pattern selected by the user" as used herein is selected depending on the defect state of the photoreceptor cells, which differs from disease to disease, and includes a type that covers the entire region in which the IS/OS exists and a type that covers only the boundary.
In FIG. 9, an example is illustrated in which, for an eye with retinitis pigmentosa, image taking positions that cover the entire region in which the IS/OS exists are presented. The region indicated by the gray circle near the fovea in FIG. 9 indicates the region in which the IS/OS exists, which has been acquired by the OCT. Note that, in this embodiment, in consideration of the possibility that the image taking positions are shifted by flicks of the eye or the like, and for the purpose of securing an overlapping region for the position alignment, the image taking positions of the individual AO-SLO images are determined so as to overlap each other by about 15% of the capturing field angle. More specifically, in a case where the image size of an AO-SLO image is 400×400 pixels, the image taking positions of adjacent AO-SLO images are determined with an overlap corresponding to 60 pixels. Further, the image taking positions on the outermost portion are set to be at, or to include, the outside of the region in which the IS/OS exists, which has been associated with the WF-SLO image in Step S240. Including the inside through the outside of the region in which the IS/OS exists among the image taking positions in this way allows the AO-SLO images of the region in which the photoreceptor cells exist and of its boundary to be acquired.
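The tiling of FIG. 9 may be sketched as follows, assuming the region in which the IS/OS exists is available as a boolean mask in WF-SLO pixel coordinates; tiles are spaced by 400 − 60 = 340 pixels so that adjacent images overlap by about 15%, and a tile is kept when it touches the mask, which naturally includes positions on the rim of the region.

```python
# Sketch: lay out overlapping AO-SLO image taking positions over the IS/OS region.
import numpy as np

def tiling_positions(isos_mask, tile=400, overlap=60):
    """isos_mask: 2D boolean map, in WF-SLO pixel coordinates, of where the IS/OS exists.
    Returns a list of (y, x) tile centres covering the region, including its rim."""
    step = tile - overlap                                  # 340-pixel pitch (about 15% overlap)
    h, w = isos_mask.shape
    centres = []
    for y in range(tile // 2, h, step):
        for x in range(tile // 2, w, step):
            y0, y1 = max(0, y - tile // 2), min(h, y + tile // 2)
            x0, x1 = max(0, x - tile // 2), min(w, x + tile // 2)
            if isos_mask[y0:y1, x0:x1].any():              # keep tiles that touch the region
                centres.append((y, x))
    return centres
```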
In FIG. 10, an example is illustrated in which, for the eye with retinitis pigmentosa, image taking positions that cover the boundary between the region in which the IS/OS exists and the region in which the IS/OS does not exist are presented. Note that a plurality of methods of setting the image taking positions exist in this case. For example, a reference position is moved from the fovea toward the nose side to search for an image taking position lying on the boundary of the IS/OS, and this position is set as a first image taking position. A method is contemplated in which the boundary of the IS/OS is followed from the first image taking position, and the next image taking position is set so that its image end lies 60 pixels inside the end of the preceding AO-SLO image along the boundary of the IS/OS. In FIG. 10, however, the existence of the IS/OS is uncertain at points on the boundary itself, and there is a fear that the aberration correction may become unstable; therefore, a boundary obtained by reducing the boundary of the IS/OS to 90% as a whole is determined, and the image taking positions are set thereon. Further, a boundary obtained by enlarging the boundary of the IS/OS to 110% as a whole is determined, and image taking positions are also set on the intersections of the 110% boundary line with lines extending radially from the fovea through the image taking positions set on the reduced boundary, so as to be paired with those image taking positions. Including such pairs of positions inside and outside the boundary of the region in which the IS/OS exists as the image taking positions along the boundary allows the AO-SLO images of the boundary of the region in which the photoreceptor cells exist to be acquired.
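A sketch of the paired placement of FIG. 10 is given below, assuming the IS/OS boundary is available as a set of points around the fovea: the boundary radius in each radial direction is scaled to 90% and 110%, and one position is placed on each contour along the same direction. The number of sampled directions is an assumed parameter.

```python
# Sketch: paired image taking positions on the 90% and 110% contours of the IS/OS boundary.
import numpy as np

def paired_boundary_positions(fovea, boundary_points, n_positions=12):
    """fovea: (y, x) position of the fovea; boundary_points: (N, 2) points on the IS/OS boundary.
    Returns (inner, outer) position pairs on the 90% and 110% contours."""
    fovea = np.asarray(fovea, float)
    rel = np.asarray(boundary_points, float) - fovea
    angles = np.arctan2(rel[:, 0], rel[:, 1])            # angle of each boundary point
    radii = np.hypot(rel[:, 0], rel[:, 1])               # distance from the fovea
    pairs = []
    for theta in np.linspace(-np.pi, np.pi, n_positions, endpoint=False):
        diff = np.angle(np.exp(1j * (angles - theta)))   # wrapped angular difference
        r = radii[np.argmin(np.abs(diff))]               # boundary radius in this direction
        direction = np.array([np.sin(theta), np.cos(theta)])
        pairs.append((fovea + 0.9 * r * direction,       # position on the reduced boundary
                      fovea + 1.1 * r * direction))      # paired position on the enlarged boundary
    return pairs
```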
The image taking positions set as described above are stored in the memory part 130 via the control part 120. Further, the image taking positions are displayed on the display part 4 such as the monitor via the output part 150 while being superimposed on the WF-SLO image. In other words, the display part 4 functions as a presentation unit configured to present, to the user, when acquiring as a planar image a lesioned part of the eye to be inspected extracted from an OCT image, the lesioned part as a region on which the aberration correction may not work. In this case, the user may directly capture the set image taking positions in order, or may select a new image taking position.
Thus determining the image taking positions based on the IS/OS information acquired from the OCT tomographic images allows the photoreceptor cell image to be acquired effectively over the entire range in which the photoreceptor cells exist, or allows the boundary of the range in which the photoreceptor cells exist to be acquired correctly.
Further, in a case where the image taking positions are set on the boundary of the range in which the photoreceptor cells exist, the focus position may become unstable. Therefore, the image taking is performed at the focus position acquired in a nearby region in which the IS/OS exists, and at a plurality of focus positions above and below it in the depth direction of the OCT image or the planar image. Comparing the planar images obtained at all of those focus positions in terms of the state of the photoreceptor cells allows a more correct boundary to be identified.
Embodiment 3
In Embodiment 1, the state of the IS/OS is discriminated based on the OCT tomographic images, and in the case where the IS/OS exists, in Step S270, the Zernike coefficients that most closely approximate the wavefront of the reflected light from the fundus of the eye to be inspected are acquired to perform the aberration correction. In a case where a highly reflective IS/OS exists, this method brings the photoreceptor cell layer into focus, and the possibility that a good image of the photoreceptor cell layer may be acquired becomes high. However, more accurate focusing is realized by selecting, from a plurality of AO-SLO images captured while changing the focus, the focus position at which a structure characteristic of the photoreceptor cells appears most strongly.
More specifically, a method is contemplated in which frequency conversion images of the AO-SLO images captured while changing the focus are acquired, and a focus position at which a signal strength corresponding to a frequency of the photoreceptor cells at the image taking positions is the highest is selected.
As described above, in the case where it is identified that the IS/OS normally exists from the OCT tomographic images, the possibility that the photoreceptor cells are correctly rendered is high, but there is a possibility that the images are blurred and image quality is lowered due to the effect of defocusing. Therefore, acquiring the images at a plurality of focus positions in advance and selecting an image based on a result of a frequency analysis of the acquired AO-SLO images allow the photoreceptor cell image having higher image quality to be acquired. In other words, in the case where the IS/OS layer exists, the planar image taking device 3 acquires the plurality of AO-SLO images having different focus positions, and determines the boundary between the region in which the IS/OS layer exists and the region in which the IS/OS layer does not exist from the plurality of AO-SLO images based on the frequency analysis. Moreover, the display control unit selects an image to be displayed based on the frequency analysis and causes the display part 4 to display the image.
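The focus selection by frequency analysis may be sketched as follows: each candidate AO-SLO image is scored by the power contained in the spatial-frequency band expected for the cone mosaic at the image taking position, and the focus with the highest score is kept. The band limits are assumed values and would in practice depend on eccentricity and image sampling.

```python
# Sketch: pick the focus position whose AO-SLO image has the strongest photoreceptor-band signal.
import numpy as np

def best_focus_index(images, band=(0.15, 0.35)):
    """images: list of 2D AO-SLO images at different focus positions.
    band: normalised spatial-frequency band (cycles/pixel) assumed for the cone mosaic.
    Returns the index of the image with the strongest signal in that band."""
    scores = []
    for img in images:
        img = img - img.mean()
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        radius = np.hypot(fy, fx)
        ring = (radius >= band[0]) & (radius <= band[1])   # annulus around the mosaic frequency
        scores.append(spectrum[ring].mean())
    return int(np.argmax(scores))
```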
Embodiment 4
In Embodiment 1, in analyzing the OCT tomographic images in Step S220, extraction of the IS/OS layer and the RPE layer is performed. However, the OCT tomographic images also contain information on a lesion such as vitiligo, blood vessels, and the like, and hence those pieces of information may be utilized to acquire the AO-SLO images having higher image quality.
More specifically, in a case where a highly reflective lesion such as the vitiligo exists, when the aberration correction is performed at the position of the lesion, the reflected light from the lesion enters the Hartmann-Shack wavefront sensor and may disturb the wavefront measurement. In order to avoid such circumstances, in the layer analysis in Step S220, the lesion such as the vitiligo is extracted and stored. Then, in the position alignment between the OCT tomographic images and the WF-SLO image in Step S240, the information on the lesion is also associated with the WF-SLO image. Then, the information is stored in the memory part 130, and at the same time, presented to the user as reference information for selecting the image taking position.
Moreover, in a case where the image taking position specified in Step S250 overlaps the position of the lesion, a method is contemplated in which a possibility that the capturing may fail due to the effect of the lesion is presented to the user, and a position that is near the image taking position and is out of the lesion is presented as an alternative candidate to the user.
Other Embodiments
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2014-174978, filed August 29, 2014, which is hereby incorporated by reference herein in its entirety.
1 database
2 OCT device
3 planar image taking device
4 display part
10 processing device
20 ophthalmic apparatus
100 image acquiring part
110 information acquiring part
120 control part
130 memory part
140 image processing part
141 layer analyzing part
142 position alignment part
143 determining part
744 image taking position setting part
150 output part
160 AO control part

Claims (15)

  1. An ophthalmic apparatus, comprising:
    a tomographic image acquiring unit configured to acquire tomographic images of a fundus of an eye to be inspected;
    a layer analyzing unit configured to extract information on a state of photoreceptor cells from the tomographic images;
    a position alignment unit configured to associate the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and
    a planar image acquiring unit configured to acquire planar images at the image taking positions on the fundus of the eye to be inspected while changing an aberration correction manner at the image taking positions based on the information on the state of the photoreceptor cells associated by the position alignment unit.
  2. An ophthalmic apparatus according to claim 1,
    wherein the information on the photoreceptor cells indicates whether or not an IS/OS layer exists, and
    wherein the layer analyzing unit extracts the IS/OS layer and distinguishes between a region in which the IS/OS layer exists and a region in which the IS/OS layer does not exist.
  3. An ophthalmic apparatus according to claim 2, wherein, when the IS/OS layer does not exist, the layer analyzing unit extracts an RPE layer, and estimates, based on a distance between the IS/OS layer and the RPE layer near a boundary between the region in which the IS/OS layer exists and the region in which the IS/OS layer does not exist, a position at which the IS/OS layer exists in the region in which the IS/OS layer does not exist.
  4. An ophthalmic apparatus according to any one of claims 1 to 3, further comprising a display control unit configured to display the planar images acquired by the planar image acquiring unit on a display unit,
    wherein the information on the photoreceptor cells indicates whether or not an IS/OS layer exists,
    wherein, when the IS/OS layer exists, the planar image acquiring unit acquires the planar images at a plurality of the image taking positions having different focus positions, and
    wherein the display control unit selects one of a plurality of the acquired planar images based on a frequency analysis to be displayed on the display unit.
  5. An ophthalmic apparatus according to any one of claims 1 to 4,
    wherein the information on the photoreceptor cells indicates whether or not an IS/OS layer exists, and
    wherein the planar image acquiring unit acquires, near a boundary between a region in which the IS/OS layer exists and a region in which the IS/OS layer does not exist, a plurality of the planar images having different focus positions, and determines the boundary from a plurality of the acquired planar images based on a frequency analysis.
  6. An ophthalmic apparatus according to any one of claims 1 to 5,
    wherein the information on the photoreceptor cells indicates whether or not an IS/OS layer exists, and
    wherein, when the IS/OS layer does not exist, the planar image acquiring unit uses a part of aberration correction parameters calculated at a position at which the IS/OS exists to acquire a Zernike coefficient for use in acquiring the planar images.
  7. An ophthalmic apparatus according to any one of claims 1 to 6, further comprising a presentation unit configured to present, to a user, in acquiring as the planar images a lesioned part of the eye to be inspected, which is extracted from the tomographic images by the layer analyzing unit, the lesioned part as a region on which aberration correction may not work.
  8. An ophthalmic apparatus, comprising:
    a tomographic image acquiring unit configured to acquire tomographic images of a fundus of an eye to be inspected;
    a layer analyzing unit configured to extract information on a state of photoreceptor cells from the tomographic images;
    a position alignment unit configured to associate the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and
    a planar image acquiring unit configured to acquire planar images at the image taking positions of the fundus of the eye to be inspected, on which aberration correction has been performed, while changing the image taking positions on the planar image of the fundus based on the information on the state of the photoreceptor cells associated by the position alignment unit.
  9. An ophthalmic apparatus according to claim 8, wherein the planar image acquiring unit sets a plurality of the image taking positions so as to include whole regions on the planar image of the fundus, which are determined as regions in which the photoreceptor cells exist based on the information on the state of the photoreceptor cells.
  10. An ophthalmic apparatus according to claim 8, wherein the planar image acquiring unit sets a plurality of the image taking positions along a boundary of regions on the planar image of the fundus, which are determined as regions in which the photoreceptor cells exist based on the information on the state of the photoreceptor cells.
  11. An ophthalmic apparatus according to claim 10, wherein the planar image acquiring unit acquires, when the image taking positions are set along the boundary of the regions determined as the regions in which the photoreceptor cells exist, the planar images at the image taking positions of the fundus at a focus position in a vicinity of the image taking positions and in the region in which the photoreceptor cells exist, and at a plurality of focus positions in an up and down direction of the regions in a depth direction of the planar image.
  12. A program configured to cause a computer to execute each of processes of respective units of the ophthalmic apparatus according to any one of claims 1 to 11.
  13. A storage medium having stored thereon the program of claim 12.
  14. A method of controlling an ophthalmic apparatus, comprising:
    a tomographic image acquiring step of acquiring tomographic images of a fundus of an eye to be inspected;
    a layer analyzing step of extracting information on a state of photoreceptor cells from the tomographic images;
    a position alignment step of associating the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and
    a planar image acquiring step of acquiring planar images at the image taking positions on the fundus of the eye to be inspected while changing an aberration correction manner at the image taking positions based on the information on the state of the photoreceptor cells associated in the position alignment step.
  15. A method of controlling an ophthalmic apparatus, comprising:
    a tomographic image acquiring step of acquiring tomographic images of a fundus of an eye to be inspected;
    a layer analyzing step of extracting information on a state of photoreceptor cells from the tomographic images;
    a position alignment step of associating the information on the state of the photoreceptor cells with information on image taking positions on a planar image of the fundus; and
    a planar image acquiring step of acquiring planar images at the image taking positions of the fundus of the eye to be inspected, on which aberration correction has been performed, while changing the image taking positions on the planar image of the fundus based on the information on the state of the photoreceptor cells associated in the position alignment step.
PCT/JP2015/003971 2014-08-29 2015-08-06 Ophthalmic apparatus and control method therefor WO2016031151A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-174978 2014-08-29
JP2014174978A JP6506518B2 (en) 2014-08-29 2014-08-29 Ophthalmic apparatus and control method thereof

Publications (2)

Publication Number Publication Date
WO2016031151A1 true WO2016031151A1 (en) 2016-03-03
WO2016031151A4 WO2016031151A4 (en) 2016-06-02

Family

ID=53938383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/003971 WO2016031151A1 (en) 2014-08-29 2015-08-06 Ophthalmic apparatus and control method therefor

Country Status (2)

Country Link
JP (1) JP6506518B2 (en)
WO (1) WO2016031151A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012010790A (en) 2010-06-29 2012-01-19 Nidek Co Ltd Ophthalmologic device
JP2012135550A (en) 2010-12-27 2012-07-19 Nidek Co Ltd Ophthalmic apparatus for laser treatment
US20120218515A1 (en) * 2011-02-25 2012-08-30 Canon Kabushiki Kaisha Image processing apparatus, image processing method, image processing system, slo apparatus, and program
JP2014111226A (en) * 2014-03-27 2014-06-19 Canon Inc Imaging device and control method therefor
US20140185009A1 (en) * 2012-12-28 2014-07-03 Canon Kabushiki Kaisha Ophthalmological Apparatus, Alignment Method, and Non-Transitory Recording Medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5843542B2 (en) * 2011-09-20 2016-01-13 キヤノン株式会社 Image processing apparatus, ophthalmologic photographing apparatus, image processing method, and program
JP5979904B2 (en) * 2012-02-20 2016-08-31 キヤノン株式会社 Image processing apparatus, ophthalmic imaging system, and image processing method

Also Published As

Publication number Publication date
WO2016031151A4 (en) 2016-06-02
JP6506518B2 (en) 2019-04-24
JP2016049183A (en) 2016-04-11

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15753789; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15753789; Country of ref document: EP; Kind code of ref document: A1)