JP5192250B2 - Fundus observation device

Fundus observation device

Info

Publication number
JP5192250B2
Authority
JP
Japan
Prior art keywords
fundus
image
dimensional
plurality
means
Prior art date
Legal status
Active
Application number
JP2008023505A
Other languages
Japanese (ja)
Other versions
JP2009183332A (en)
JP2009183332A5 (en)
Inventor
篤 坂本
正紀 板谷
明彦 関根
Original Assignee
Topcon Corporation (株式会社トプコン)
Priority date
Filing date
Publication date
Application filed by Topcon Corporation
Priority to JP2008023505A
Publication of JP2009183332A
Publication of JP2009183332A5
Application granted
Publication of JP5192250B2
Application status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/102: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for optical coherence tomography [OCT]

Description

The present invention relates to a fundus observation device. The fundus oculi observation device according to the present invention is a device that forms a tomographic image or a three-dimensional image of the fundus oculi of the eye to be examined.

In recent years, optical image measurement technology, which uses a light beam from a laser light source or the like to form images representing the surface or internal form of an object to be measured, has attracted attention. Unlike X-ray CT, this optical image measurement technique is not invasive to the human body, so its application is particularly anticipated in the medical and biological fields.

Patent Document 1 discloses an apparatus to which the optical image measurement technique is applied. In this apparatus, a measurement arm scans the object with a rotary turning mirror (galvanometer mirror), a reference mirror is installed on the reference arm, and an interferometer is used at whose exit the intensity of the light appearing through interference of the light beams from the measurement arm and the reference arm is analyzed by a spectroscope; the reference arm is configured to change the phase of the reference light beam stepwise in discontinuous values.

The apparatus of Patent Document 1 uses the technique of so-called Fourier-domain OCT (Fourier-domain optical coherence tomography). That is, the object to be measured is irradiated with a beam of low-coherence light, the reflected light and the reference light are superimposed to generate interference light, the spectral intensity distribution of this interference light is acquired, and a Fourier transform of that distribution is used to image the form of the object in the depth direction (z direction).
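
The following is a minimal numerical sketch of this Fourier-domain principle, not of the patented apparatus itself; the function name, the sampling, and the DC-subtraction step are illustrative assumptions (a real instrument also resamples the spectrum uniformly in wavenumber and compensates dispersion).

    import numpy as np

    def a_scan_from_spectrum(spectral_intensity, dc_background):
        # Subtract the non-interferometric (DC) background, then inverse
        # Fourier transform the wavenumber spectrum to obtain the depth
        # profile of the measured object along z.
        fringes = spectral_intensity - dc_background
        depth_profile = np.abs(np.fft.ifft(fringes))
        return depth_profile[: fringes.size // 2]   # one-sided (positive z)

    # A single reflector yields a cosine fringe on the spectrum, which
    # transforms to a peak at the corresponding depth bin.
    k = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
    spectrum = 1.0 + 0.5 * np.cos(200.0 * k)
    print(a_scan_from_spectrum(spectrum, 1.0).argmax())   # -> 200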

Furthermore, the apparatus described in Patent Document 1 includes a galvanometer mirror that scans the light beam (signal light), and can thereby form an image of a desired measurement region of the object to be measured. Since this apparatus scans the light beam only in one direction (x direction) orthogonal to the z direction, the image it forms is a two-dimensional tomographic image in the plane spanned by the scanning direction of the light beam (x direction) and the depth direction (z direction).

Patent Document 2 discloses a technique in which a plurality of horizontal two-dimensional tomographic images are formed by scanning the signal light in the horizontal and vertical directions, and three-dimensional tomographic information of the measurement range is acquired and imaged based on this plurality of tomographic images. Conceivable examples of such three-dimensional imaging include displaying a plurality of tomographic images arranged in the vertical direction (so-called stack data) and rendering a plurality of tomographic images to form a three-dimensional image.

Patent Documents 3 and 4 disclose other types of OCT apparatus. Patent Document 3 describes an OCT apparatus that sweeps the wavelength of the light applied to the object to be measured, acquires a spectral intensity distribution based on interference light obtained by superimposing the reflected light of each wavelength on the reference light, and images the form of the object by applying a Fourier transform to that distribution. Such an OCT apparatus is called a swept-source type.

Patent Document 4 describes an OCT apparatus that irradiates the object to be measured with light having a predetermined beam diameter and analyzes the components of the interference light obtained by superimposing the reflected light and the reference light, thereby forming an image of the object in a cross section orthogonal to the traveling direction of the light. Such an OCT apparatus is called a full-field type or an en-face type.

  Patent Document 5 discloses a configuration in which the OCT technique is applied to the ophthalmic field.

The fundus camera is a fundus observation device that was in use before OCT apparatus were applied to the ophthalmic field. For example, the fundus camera described in Patent Document 6 has a configuration for panoramic photography of the fundus. Panoramic photography is a technique for forming an image of a wide range of the fundus (that is, a range exceeding the maximum photographing angle of view) by capturing a plurality of images with different photographing ranges and connecting these images. Panoramic photography is widely used in fundus diagnosis for grasping the state of the fundus over a wide range.

Patent Document 1: JP 11-325849 A
Patent Document 2: JP 2002-139421 A
Patent Document 3: JP 2007-24677 A
Patent Document 4: JP 2006-153838 A
Patent Document 5: JP 2003-543 A
Patent Document 6: JP 9-276232 A

A fundus oculi observation device using OCT technology has the advantage, compared with a fundus camera that images only the fundus surface, of being able to obtain images of the deep fundus, and is effective for improving diagnostic accuracy and for early detection of lesions. However, since conventional fundus oculi observation devices cannot acquire a panoramic image over a wide range of the fundus, they cannot grasp the state of the deep part of the fundus over a wide range.

  Here, the panoramic image means an image obtained by connecting a plurality of images representing different parts of the fundus. Note that the panoramic image obtained by the fundus oculi observation device using the OCT technique is a three-dimensional image representing the morphology of the fundus surface and the fundus deep part. On the other hand, the panoramic image obtained by the fundus camera is a two-dimensional image representing the form of the fundus surface.

  In addition, in the conventional fundus oculi observation device, it is possible to acquire an OCT image by selecting a site such as the macula or the optic disc by changing the fixation position of the eye to be examined. However, the conventional fundus oculi observation device cannot form a panoramic image by connecting a plurality of OCT images obtained by measuring different parts of the fundus.

The present invention has been made to solve such problems, and its object is to provide a fundus observation device capable of creating a panoramic image representing the three-dimensional form of the fundus oculi.

In order to achieve the above object, the invention according to claim 1 is a fundus oculi observation device comprising: an optical system that splits light from a light source into signal light and reference light and superimposes the signal light passing through the fundus of the eye to be examined on the reference light passing through a reference object to generate interference light; detection means that detects the interference light, the device forming a three-dimensional image of the fundus based on the detection result of the detection means; analysis means that analyzes a plurality of three-dimensional images representing different parts of the fundus to determine the positional relationship between the plurality of three-dimensional images, and expresses each of the plurality of three-dimensional images in a single three-dimensional coordinate system based on that positional relationship; display means; and control means that causes the display means to display the plurality of three-dimensional images expressed in the single three-dimensional coordinate system.

The invention according to claim 2 is the fundus oculi observation device according to claim 1, wherein the analysis means includes image region specifying means for specifying an image region corresponding to a predetermined part of the fundus in each of the plurality of three-dimensional images, and determines the positional relationship of the plurality of three-dimensional images by determining the positional relationship of the plurality of specified image regions.

The invention according to claim 3 is the fundus oculi observation device according to claim 2, wherein the image region specifying means specifies, as the image region, a blood vessel region corresponding to a blood vessel of the fundus, and the analysis means obtains the positional relationship of the plurality of three-dimensional images in the fundus surface direction by connecting the plurality of specified blood vessel regions.

The invention according to claim 4 is the fundus oculi observation device according to claim 2, wherein the image region specifying means specifies, as the image region, a layer region corresponding to a predetermined layer of the fundus, and the analysis means obtains the positional relationship of the plurality of three-dimensional images in the fundus depth direction by connecting the plurality of specified layer regions.

The invention according to claim 5 is the fundus oculi observation device according to claim 1, further comprising forming means for forming a two-dimensional image representing the form of the fundus surface, wherein the analysis means obtains, as the positional relationship, the position of each of the plurality of three-dimensional images with respect to the two-dimensional image, and expresses each of the plurality of three-dimensional images in a three-dimensional coordinate system comprising the two-dimensional coordinate system in the fundus surface direction in which the two-dimensional image is defined and a coordinate axis in the fundus depth direction orthogonal to that two-dimensional coordinate system.

The invention according to claim 6 is the fundus oculi observation device according to claim 5, wherein the forming means includes photographing means that forms the two-dimensional image by photographing the fundus surface, irradiating the fundus with illumination light and detecting the fundus reflection light of that illumination light.

The invention according to claim 7 is the fundus oculi observation device according to claim 6, wherein the analysis means forms a plurality of integrated images by integrating each of the plurality of three-dimensional images in the fundus depth direction, and obtains the positional relationship of the plurality of three-dimensional images in the fundus surface direction by obtaining the position of each of the plurality of integrated images in the two-dimensional image.

The invention according to claim 8 is the fundus oculi observation device according to claim 5, wherein the forming means includes integrated image forming means that forms, as the two-dimensional images, a plurality of integrated images by integrating each of the plurality of three-dimensional images in the fundus depth direction, and the analysis means obtains the positional relationship of the plurality of three-dimensional images in the fundus surface direction by determining the positional relationship of the plurality of integrated images.

The invention according to claim 9 is the fundus oculi observation device according to claim 1, wherein the analysis means obtains the positional relationship of the plurality of three-dimensional images by analyzing image regions at the edge portions of the plurality of three-dimensional images and aligning those image regions.

The invention according to claim 10 is the fundus oculi observation device according to claim 1, wherein the optical system includes projection means for projecting a fixation target onto the eye to be examined; the control means includes acquisition means for acquiring fixation position information of the eye when the signal light is irradiated onto the fundus, and storage means for storing the fixation position information in association with the three-dimensional image of the fundus based on that signal light; and the analysis means includes array specifying means for specifying the arrangement of the plurality of three-dimensional images based on the fixation position information stored in association with each of the plurality of three-dimensional images, and obtains the positional relationship of the plurality of three-dimensional images based on that arrangement.

The invention according to claim 11 is the fundus oculi observation device according to claim 1, wherein the optical system includes projection means for projecting a fixation target onto the eye to be examined; when the plurality of three-dimensional images are formed, the control means controls the projection means to change the projection position of the fixation target so that adjacent three-dimensional images include an overlapping region; and the analysis means obtains the positional relationship of the plurality of three-dimensional images by analyzing the overlapping regions of adjacent three-dimensional images and aligning the images of those overlapping regions.

The invention according to claim 12 is the fundus oculi observation device according to claim 1, further comprising forming means for forming a two-dimensional image representing the form of the fundus surface, wherein the optical system includes projection means for projecting a fixation target onto the eye to be examined; the analysis means detects the rotation angle of the eye based on two two-dimensional images formed by the forming means before and after a change of the projection position of the fixation target on the eye; and the control means controls the projection means to change the projection position of the fixation target so as to cancel the rotation angle.

The invention according to claim 13 is the fundus oculi observation device according to claim 1, further comprising storage means for storing the rotation angle of the eye to be examined in advance, wherein the optical system includes projection means for projecting a fixation target onto the eye, and the control means controls the projection means to change the projection position of the fixation target so as to cancel the rotation angle.

The invention according to claim 14 is the fundus oculi observation device according to claim 1, further comprising forming means for forming a two-dimensional image representing the form of the fundus surface, and driving means for changing the relative position between the optical system and the eye to be examined, wherein the analysis means detects the rotation angle of the eye based on two two-dimensional images formed by the forming means before and after a change of the projection position of the fixation target on the eye, and the control means controls the driving means to change the relative position between the optical system and the eye so as to cancel the rotation angle.

The invention according to claim 15 is the fundus oculi observation device according to claim 1, further comprising storage means for storing the rotation angle of the eye to be examined in advance, and driving means for changing the relative position between the optical system and the eye, wherein the control means controls the driving means to change the relative position between the optical system and the eye so as to cancel the rotation angle.

According to the present invention, a plurality of three-dimensional images representing different parts of the fundus of the eye to be examined are analyzed to obtain their positional relationship, each three-dimensional image is expressed in a single three-dimensional coordinate system based on that positional relationship, and the plurality of three-dimensional images so expressed can be displayed.

The plurality of three-dimensional images expressed in one three-dimensional coordinate system in this way form a panoramic image covering a plurality of different parts of the fundus. Moreover, this panoramic image represents the three-dimensional form of the fundus.

  Therefore, according to the present invention, it is possible to create a panoramic image representing the three-dimensional form of the fundus.

An example of an embodiment of the fundus observation device according to the present invention will be described in detail with reference to the drawings.

[Fundus observation device]
First, an embodiment of the fundus oculi observation device according to the present invention will be described. The fundus oculi observation device according to the present invention forms a tomographic image or a three-dimensional image of the fundus oculi using OCT technology. The OCT technique applied to the fundus oculi observation device may be of any type, such as Fourier-domain, swept-source, or full-field.

  In the following embodiment, a configuration to which a Fourier domain type technique is applied will be described in detail. Even when other types are applied, similar actions and effects can be obtained with the same characteristic configuration.

[overall structure]
As shown in FIG. 1, the fundus oculi observation device 1 includes a fundus camera unit 1A, an OCT unit 150, and an arithmetic and control unit 200. The fundus camera unit 1A has an optical system that is substantially the same as that of a conventional fundus camera, a device that captures a two-dimensional image representing the form of the fundus surface. The OCT unit 150 houses an optical system for acquiring an OCT image of the fundus. The arithmetic and control unit 200 includes a computer that executes various arithmetic and control processes.

One end of a connection line 152 is attached to the OCT unit 150. A connector 151 for connecting the connection line 152 to the fundus camera unit 1A is attached to the other end of the connection line 152, and an optical fiber runs through the connection line 152. The OCT unit 150 and the fundus camera unit 1A are thus optically connected via the connection line 152. The arithmetic and control unit 200 is connected to each of the fundus camera unit 1A and the OCT unit 150 via communication lines that transmit electrical signals.

[Fundus camera unit]
The fundus camera unit 1A includes an optical system for forming a two-dimensional image representing the form of the fundus surface. Here, a two-dimensional image of the fundus surface means a color or monochrome image obtained by photographing the fundus surface, or a fluorescent image (fluorescein fluorescent image, indocyanine green fluorescent image, etc.). The fundus camera unit 1A is an example of the 'photographing means' and the 'forming means' of the present invention.

Like a conventional fundus camera, the fundus camera unit 1A includes an illumination optical system 100 that irradiates the fundus oculi Ef with illumination light, and a photographing optical system 120 that guides the fundus reflection light of the illumination light to the imaging devices 10 and 12. The photographing optical system 120 also operates to guide the signal light from the OCT unit 150 to the fundus oculi Ef and to guide the signal light passing through the fundus oculi Ef back to the OCT unit 150.

The illumination optical system 100 includes an observation light source 101, a condenser lens 102, a photographing light source 103, a condenser lens 104, exciter filters 105 and 106, a ring translucent plate 107, a mirror 108, an LCD (Liquid Crystal Display) 109, an illumination diaphragm 110, a relay lens 111, an aperture mirror 112, and an objective lens 113.

The observation light source 101 outputs illumination light having a wavelength in the visible region, for example in the range of about 400 nm to 700 nm. The photographing light source 103 outputs illumination light having a wavelength in the near-infrared region, for example in the range of about 700 nm to 800 nm. The wavelength of the near-infrared light output from the photographing light source 103 is set shorter than that of the light used in the OCT unit 150 (described later).

The photographing optical system 120 includes the objective lens 113, the aperture mirror 112 (its aperture 112a), a photographing aperture 121, barrier filters 122 and 123, a variable power lens 124, a relay lens 125, a photographing lens 126, a dichroic mirror 134, a field lens 128, a half mirror 135, a relay lens 131, a dichroic mirror 136, a photographing lens 133, the imaging device 10 (image sensor 10a), a reflection mirror 137, a photographing lens 138, the imaging device 12 (image sensor 12a), a lens 139, and the LCD 140.

  The dichroic mirror 134 reflects fundus reflection light (having a wavelength included in a range of about 400 nm to 800 nm) of illumination light from the illumination optical system 100. The dichroic mirror 134 transmits the signal light LS (for example, having a wavelength included in the range of about 800 nm to 900 nm; described later) from the OCT unit 150.

  The dichroic mirror 136 transmits illumination light having a wavelength in the visible region from the illumination optical system 100 (visible light having a wavelength of about 400 nm to 700 nm output from the observation light source 101). The dichroic mirror 136 reflects illumination light having a wavelength in the near infrared region (near infrared light having a wavelength of about 700 nm to 800 nm output from the imaging light source 103).

The LCD 140 displays a fixation target (internal fixation target) for fixating the eye E to be examined. Light from the LCD 140 is collected by the lens 139, reflected by the half mirror 135, passes through the field lens 128, and is reflected by the dichroic mirror 136. This light then passes through the photographing lens 126, the relay lens 125, the variable power lens 124, the aperture mirror 112 (its aperture 112a), the objective lens 113, and so on, and enters the eye E. The internal fixation target is thereby projected onto the fundus oculi Ef.

The image sensor 10a is built into the imaging device 10, such as a television camera, and detects, in particular, light with wavelengths in the near-infrared region. That is, the imaging device 10 is an infrared television camera that detects near-infrared light and outputs a video signal as the detection result. The image sensor 10a is an arbitrary sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor.

The touch panel monitor 11 displays a two-dimensional image (fundus oculi image Ef′) of the surface of the fundus oculi Ef based on the video signal from the image sensor 10a. The video signal is also sent to the arithmetic and control unit 200, and the fundus image is displayed on a display (described later).

Note that when photographing with the imaging device 10, for example, the near-infrared illumination light output from the photographing light source 103 is used.

Similarly, the image sensor 12a is built into the imaging device 12, such as a television camera, and detects, in particular, light with wavelengths in the visible region. That is, the imaging device 12 is a television camera that detects visible light and outputs a video signal as the detection result. The image sensor 12a is an arbitrary image sensor.

The touch panel monitor 11 displays the fundus oculi image Ef′ based on the video signal from the image sensor 12a. This video signal is also sent to the arithmetic and control unit 200, and the fundus image is displayed on a display (described later).

  When the fundus is photographed by the imaging device 12, for example, illumination light having a wavelength in the visible region output from the observation light source 101 is used.

Further, the fundus oculi image Ef′ is a two-dimensional image defined in the xy coordinate system. The xy coordinate system defines directions along the surface of the fundus oculi Ef (the fundus surface direction). The coordinate axis orthogonal to the xy coordinate system (the z coordinate axis) defines the depth direction of the fundus oculi Ef (the fundus depth direction).

  The fundus camera unit 1A is provided with a scanning unit 141 and a lens 142. The scanning unit 141 scans the irradiation position on the fundus oculi Ef of light (signal light LS; described later) output from the OCT unit 150.

  FIG. 2 shows an example of the configuration of the scanning unit 141. The scanning unit 141 includes galvanometer mirrors 141A and 141B and reflection mirrors 141C and 141D.

  Galvano mirrors 141A and 141B are reflection mirrors arranged so as to be rotatable about rotation shafts 141a and 141b, respectively. The galvanometer mirrors 141A and 141B are rotated around the rotation shafts 141a and 141b by drive mechanisms (mirror drive mechanisms 241 and 242 shown in FIG. 5) described later. Thereby, the direction of the reflection surface (surface that reflects the signal light LS) of each galvanometer mirror 141A, 141B is changed.

The rotation shafts 141a and 141b are disposed orthogonally to each other. In FIG. 2, the rotation shaft 141a of the galvano mirror 141A is parallel to the plane of the page, while the rotation shaft 141b of the galvano mirror 141B is orthogonal to it. That is, the galvano mirror 141B can rotate in the direction indicated by the double-headed arrow in FIG. 2, and the galvano mirror 141A can rotate in the direction orthogonal to that arrow. As can be seen from FIGS. 1 and 2, rotating the galvano mirror 141A scans the signal light LS in the x direction, and rotating the galvano mirror 141B scans it in the y direction.
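
As a rough illustration of how the two orthogonal galvanometer mirrors cooperate, the sketch below generates the grid of x-y irradiation positions for a raster scan; the function name and scan dimensions are hypothetical, and the real device drives the mirrors via the mirror drive mechanisms 241 and 242 described later.

    import numpy as np

    # Hypothetical raster pattern: mirror 141A sweeps the signal light LS
    # in x along each scanning line, mirror 141B steps it in y between
    # lines. One depth profile is acquired at every (x, y) point.
    def raster_scan_positions(n_points, m_lines, width, height):
        xs = np.linspace(-width / 2, width / 2, n_points)    # 141A (x scan)
        ys = np.linspace(-height / 2, height / 2, m_lines)   # 141B (y step)
        return [(x, y) for y in ys for x in xs]

    positions = raster_scan_positions(n_points=5, m_lines=4,
                                      width=6.0, height=6.0)
    print(len(positions))   # 20 irradiation positions -> 20 measurements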

  The signal light LS reflected by the galvanometer mirrors 141A and 141B is reflected by the reflection mirrors 141C and 141D and travels in the same direction as when incident on the galvanometer mirror 141A.

An end face 152b of the optical fiber 152a inside the connection line 152 is disposed facing the lens 142. The signal light LS emitted from the end face 152b travels toward the lens 142 while expanding its beam diameter, and is collimated into a parallel light flux by the lens 142. Conversely, the signal light LS returning from the fundus oculi Ef is focused by the lens 142 toward the end face 152b and enters the optical fiber 152a.

[OCT unit]
Next, the configuration of the OCT unit 150 will be described with reference to FIG. 3. The OCT unit 150 includes the same optical system as a conventional OCT apparatus. That is, the OCT unit 150 includes an optical system that splits low-coherence light into reference light and signal light and superimposes the signal light passing through the eye to be examined on the reference light passing through a reference object to generate interference light, and detection means for detecting this interference light. The detection result (detection signal) of the interference light is input to the arithmetic and control unit 200.

The low-coherence light source 160 is a broadband light source that outputs low-coherence light L0; for example, an arbitrary light source such as a super luminescent diode (SLD) or a light emitting diode (LED) is used.

As the low-coherence light L0, light that includes wavelengths in the near-infrared region and has a temporal coherence length of about several tens of micrometers is used, for example. The low-coherence light L0 has a wavelength longer than the illumination light of the fundus camera unit 1A (wavelengths of about 400 nm to 800 nm), for example a wavelength in the range of about 800 nm to 900 nm.

  The low coherence light L0 output from the low coherence light source 160 is guided to the optical coupler 162 through the optical fiber 161. The optical fiber 161 is configured by, for example, a single mode fiber or a PM fiber (Polarization maintaining fiber). The optical coupler 162 splits the low coherence light L0 into the reference light LR and the signal light LS.

The optical coupler 162 functions both as a means for splitting light (splitter) and as a means for superimposing light (coupler); here it is referred to as an 'optical coupler' in accordance with convention.

  The reference light LR generated by the optical coupler 162 is guided by an optical fiber 163 made of a single mode fiber or the like and emitted from the end face of the fiber. Further, the reference light LR is collimated by the collimator lens 171 and then reflected by the reference mirror 174 via the glass block 172 and the density filter 173. The reference mirror 174 is an example of the “reference object” in the present invention.

  The reference light LR reflected by the reference mirror 174 passes through the density filter 173 and the glass block 172 again, is condensed on the fiber end surface of the optical fiber 163 by the collimator lens 171, and is guided to the optical coupler 162 through the optical fiber 163.

  The glass block 172 and the density filter 173 function as delay means for matching the optical path lengths (optical distances) of the reference light LR and the signal light LS. Further, the glass block 172 and the density filter 173 function as dispersion compensation means for matching the dispersion characteristics of the reference light LR and the signal light LS.

  Further, the density filter 173 acts as a neutral density filter that reduces the amount of the reference light LR. The density filter 173 is configured by, for example, a rotary ND (Neutral Density) filter. The density filter 173 is rotationally driven by a drive mechanism (a density filter drive mechanism 244 described later; see FIG. 5) configured to include a drive device such as a motor. Thereby, the amount of the reference light LR that contributes to the generation of the interference light LC is changed.

The reference mirror 174 is movable in the traveling direction of the reference light LR (the direction of the double-headed arrow shown in FIG. 3). An optical path length of the reference light LR matching the axial length of the eye E and the working distance (the distance between the objective lens 113 and the eye E) can thereby be secured. The reference mirror 174 is moved by a drive mechanism including a drive device such as a motor (the reference mirror drive mechanism 243 described later; see FIG. 5).

  On the other hand, the signal light LS generated by the optical coupler 162 is guided to the end of the connection line 152 by an optical fiber 164 made of a single mode fiber or the like. Here, the optical fiber 164 and the optical fiber 152a may be formed from a single optical fiber, or may be formed integrally by joining the respective end faces.

The signal light LS is guided through the optical fiber 152a to the fundus camera unit 1A. It then passes through the lens 142, the scanning unit 141, the dichroic mirror 134, the photographing lens 126, the relay lens 125, the variable power lens 124, the photographing aperture 121, the aperture 112a of the aperture mirror 112, and the objective lens 113, and is irradiated onto the eye E. When irradiating the eye E with the signal light LS, the barrier filters 122 and 123 are retracted from the optical path in advance.

The signal light LS entering the eye E is focused on and reflected by the fundus oculi Ef. At this time, the signal light LS is not only reflected at the surface of the fundus oculi Ef but also reaches the deep region of the fundus oculi Ef and is scattered at refractive index boundaries. The signal light LS passing through the fundus oculi Ef therefore contains information reflecting the surface form of the fundus oculi Ef and information reflecting the state of backscattering at the refractive index boundaries of the deep fundus tissue. This light may be referred to simply as 'the fundus reflection light of the signal light LS'.

The fundus reflection light of the signal light LS travels in the reverse direction through the fundus camera unit 1A, is condensed on the end face 152b of the optical fiber 152a, enters the OCT unit 150 through the connection line 152, and returns to the optical coupler 162 through the optical fiber 164.

  The optical coupler 162 superimposes the signal light LS returned through the eye E and the reference light LR reflected by the reference mirror 174 to generate interference light LC. The interference light LC is guided to the spectrometer 180 through an optical fiber 165 made of a single mode fiber or the like.

The spectrometer 180 detects the spectral components of the interference light LC. It includes a collimator lens 181, a diffraction grating 182, an imaging lens 183, and a CCD 184. The diffraction grating 182 may be a transmission grating or a reflection grating. Instead of the CCD 184, another photodetecting element such as a CMOS sensor can also be used.

The interference light LC incident on the spectrometer 180 is collimated by the collimator lens 181 and spectrally decomposed by the diffraction grating 182. The decomposed interference light LC is imaged on the imaging surface of the CCD 184 by the imaging lens 183. The CCD 184 detects each spectral component of the decomposed interference light LC, converts it into electric charge, accumulates the charge, generates a detection signal, and transmits that signal to the arithmetic and control unit 200. The charge accumulation time and timing, as well as the detection signal transmission timing, are controlled by the arithmetic and control unit 200, for example. The spectrometer 180 (in particular the CCD 184) is an example of the 'detection means' of the present invention.

  In this embodiment, a Michelson interferometer is used. However, for example, any type of interferometer such as a Mach-Zehnder type can be appropriately used.

In addition, the optical coupler 162, the optical members on the optical path of the signal light LS (that is, those disposed between the optical coupler 162 and the eye E), and the optical members on the optical path of the reference light LR (that is, those disposed between the optical coupler 162 and the reference mirror 174) constitute an example of the 'optical system' of the present invention.

[Arithmetic and control unit]
Next, the configuration of the arithmetic and control unit 200 will be described. The arithmetic and control unit 200 analyzes the detection signals input from the CCD 184 and forms an OCT image of the fundus oculi Ef. This analysis processing is the same as in a conventional Fourier-domain OCT apparatus, using data processing such as the Fourier transform.

  In addition, the arithmetic and control unit 200 forms a two-dimensional image indicating the form of the surface of the fundus oculi Ef based on the video signals output from the imaging devices 10 and 12.

  Further, the arithmetic and control unit 200 controls each part of the fundus camera unit 1A and the OCT unit 150.

As control of the fundus camera unit 1A, the arithmetic and control unit 200 controls the output of illumination light by the observation light source 101 and the photographing light source 103, the insertion and retraction of the exciter filters 105 and 106 and the barrier filters 122 and 123 on the optical path, the operation of display devices such as the LCD 140, the movement of the illumination diaphragm 110 (control of the diaphragm value), the diaphragm value of the photographing aperture 121, the movement of the variable power lens 124 (control of the magnification), and so on. The arithmetic and control unit 200 also controls the operation of the galvanometer mirrors 141A and 141B.

As control of the OCT unit 150, the arithmetic and control unit 200 controls the output of the low-coherence light L0 by the low-coherence light source 160, the movement of the reference mirror 174, the rotation of the density filter 173 (control of the amount by which the reference light LR is attenuated), the charge accumulation timing and signal output timing of the CCD 184, and so on.

  The hardware configuration of such an arithmetic control device 200 will be described with reference to FIG.

The arithmetic and control unit 200 has a hardware configuration similar to that of a conventional computer. Specifically, it includes a microprocessor 201, a RAM 202, a ROM 203, a hard disk drive (HDD) 204, a keyboard 205, a mouse 206, a display 207, an image forming board 208, and a communication interface (I/F) 209, connected to one another by a bus 200a.

  The microprocessor 201 includes a CPU (Central Processing Unit), an MPU (Micro Processing Unit), and the like. The microprocessor 201 reads out the control program 204a from the hard disk drive 204 and expands it on the RAM 202, thereby causing the fundus oculi observation device 1 to execute operations characteristic of this embodiment. Further, the microprocessor 201 executes control of each part of the device described above, various arithmetic processes, and the like.

  The keyboard 205, the mouse 206, and the display 207 are used as a user interface of the fundus oculi observation device 1. The display 207 is configured by a display device such as an LCD or a CRT (Cathode Ray Tube) display.

  Note that the user interface of the fundus oculi observation device 1 is not limited to such a configuration. For example, the fundus oculi observation device 1 may include a user interface such as a trackball, a joystick, a touch panel LCD, or a control panel for ophthalmic examination. As the user interface of the fundus oculi observation device 1, an arbitrary configuration having a function of displaying and outputting information and a function of inputting information and operating the device can be adopted.

The image forming board 208 is a dedicated electronic circuit that performs processing for forming images (image data) of the fundus oculi Ef. It is provided with a fundus image forming board 208a and an OCT image forming board 208b. The fundus image forming board 208a is a dedicated electronic circuit that forms fundus image data based on the video signals from the imaging devices 10 and 12. The OCT image forming board 208b is a dedicated electronic circuit that forms image data of tomographic images of the fundus oculi Ef based on the detection signals from the CCD 184 of the OCT unit 150. Providing such an image forming board 208 speeds up the formation of fundus images and tomographic images.

The communication interface 209 transmits and receives data to and from the fundus camera unit 1A and the OCT unit 150. For example, the communication interface 209 transmits control signals from the microprocessor 201 to the fundus camera unit 1A or the OCT unit 150, and receives the video signals from the imaging devices 10 and 12 and the detection signals from the CCD 184 of the OCT unit 150. In doing so, it inputs the video signals from the imaging devices 10 and 12 to the fundus image forming board 208a and the detection signals from the CCD 184 to the OCT image forming board 208b.

  Further, in order to connect the arithmetic and control unit 200 to a communication line such as a LAN (Local Area Network) or the Internet, a communication device such as a LAN card or a modem can be provided in the communication interface 209. In this case, the fundus oculi observation device 1 can be operated by installing a server for storing the control program 204a on the communication line and configuring the arithmetic and control unit 200 as a client terminal of the server.

[Control system configuration]
Next, the configuration of the control system of the fundus oculi observation device 1 will be described with reference to FIGS.

(Control unit)
The control system of the fundus oculi observation device 1 is configured around the control unit 210 of the arithmetic and control unit 200. The control unit 210 includes the microprocessor 201, the RAM 202, the ROM 203, the hard disk drive 204 (control program 204a), the communication interface 209, and so on. The control unit 210 is an example of the 'control means' of the present invention.

  The control unit 210 includes a main control unit 211, a storage unit 212, and a fixation position acquisition unit 213. The main control unit 211 performs the various controls described above.

(Storage unit)
The storage unit 212 stores various data, for example OCT images (tomographic images and three-dimensional images) of the fundus oculi Ef, fundus oculi images Ef′, and information on the eye to be examined. The eye information is included in electronic medical record information, for example, and includes information about the subject, such as the patient ID and name, and information about the eye, such as left/right eye identification. The main control unit 211 writes data to and reads data from the storage unit 212.

(Fixation position acquisition unit)
The fixation position acquisition unit 213 acquires information indicating the fixation position of the eye E (fixation position information) when the signal light LS is irradiated onto the fundus oculi Ef. The fixation position of the eye E corresponds to the display position of the internal fixation target on the LCD 140; that is, the fixation position of the eye E can be changed by changing the display position of the internal fixation target on the LCD 140. This is the same as in panoramic photography with a conventional fundus camera.

The display position of the internal fixation target on the LCD 140 is controlled by the main control unit 211. The main control unit 211 sends information indicating the display position of the internal fixation target (display position information) to the fixation position acquisition unit 213, which creates fixation position information based on it. The display position information itself may serve as the fixation position information. Alternatively, information (related information) associating the display positions of the internal fixation target with the fixation positions of the eye E may be stored in advance in the storage unit 212 or the like, and the fixation position information may be created from the display position information by referring to this related information. A specific example of the processing executed by the fixation position acquisition unit 213 will be described later.
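
A minimal sketch of one way the fixation position acquisition unit 213 could resolve fixation position information, assuming the related information is a simple lookup table; the table contents and names here are invented for illustration.

    # Invented lookup table associating display positions of the internal
    # fixation target on the LCD 140 with fixation positions of the eye E.
    RELATED_INFO = {
        (160, 120): "central fixation",
        (200, 120): "nasal fixation",
        (120, 120): "temporal fixation",
    }

    def fixation_position_info(display_position):
        # Fall back to the display position itself when no related
        # information is stored, as the text above permits.
        return RELATED_INFO.get(display_position, display_position)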

  The main control unit 211 causes the storage unit 212 to store the fixation position information acquired by the fixation position acquisition unit 213. At this time, the main control unit 211 stores fixation position information in association with a three-dimensional image of the fundus oculi Ef based on measurement performed with the eye E fixed at the fixation position.

The fixation position acquisition unit 213 operating as described above is an example of the 'acquisition means' of the present invention. The storage unit 212 is an example of the 'storage means' of the present invention.

(Image forming unit)
The image forming unit 220 forms image data of the fundus oculi image Ef′ based on the video signals from the imaging devices 10 and 12.

The image forming unit 220 also forms image data of tomographic images of the fundus oculi Ef based on the detection signals from the CCD 184. As in the prior art, this processing includes noise removal (noise reduction), filtering, FFT (Fast Fourier Transform), and the like.

The image forming unit 220 includes the image forming board 208, the communication interface 209, and so on. In this specification, 'image data' and the 'image' displayed based on it may be identified with each other.

(Image processing unit)
The image processing unit 230 performs various types of image processing and analysis processing on the image formed by the image forming unit 220. For example, the image processing unit 230 executes various correction processes such as image brightness correction and dispersion correction.

  The image processing unit 230 includes a three-dimensional image forming unit (“3D image forming unit” in FIG. 6) 231 and an image analyzing unit 232.

(3D image forming unit)
The three-dimensional image forming unit 231 forms image data of a three-dimensional image of the fundus oculi Ef by executing known image processing, such as interpolating pixels between a plurality of tomographic images formed by the image forming unit 220 (for example, the tomographic images G1 to Gm shown in FIG. 7).

Note that image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system. One example is image data composed of three-dimensionally arranged voxels, called volume data or voxel data. When displaying an image based on volume data, the image processing unit 230 performs rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) on the volume data to form image data of a pseudo three-dimensional image viewed from a specific gaze direction. The pseudo three-dimensional image based on this image data is displayed on a display device such as the display unit 240A.
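
As an illustration of the MIP rendering mentioned above, the sketch below projects volume data along a gaze direction by taking the brightest voxel on each line of sight; the (x, y, z) axis convention is an assumption, not the patent's.

    import numpy as np

    def mip(volume, axis=2):
        # Each display pixel takes the brightest voxel encountered along
        # the chosen gaze direction (here the z axis by default).
        return volume.max(axis=axis)

    volume = np.random.rand(64, 64, 128)   # stand-in volume data (x, y, z)
    print(mip(volume).shape)               # -> (64, 64) en-face projection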

  It is also possible to form stack data of a plurality of tomographic images as image data of a three-dimensional image. The stack data is image data obtained by three-dimensionally arranging a plurality of tomographic images obtained along a plurality of scanning lines based on the positional relationship of the scanning lines. That is, the stack data is image data obtained by expressing a plurality of tomographic images originally defined by individual two-dimensional coordinate systems using a single three-dimensional coordinate system. Here, “expressed by a single three-dimensional coordinate system” means that the position of each pixel of each tomographic image is expressed by coordinates defined by the three-dimensional coordinate system. Thereby, each tomographic image can be embedded in the three-dimensional space defined by the three-dimensional coordinate system.
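
The following sketch shows the embedding this paragraph describes, under the simplifying assumption of parallel scanning lines at known y positions; the variable names are illustrative.

    import numpy as np

    def stack_tomograms(tomograms, line_positions):
        # tomograms: list of m two-dimensional arrays, each of shape
        # (nx, nz), acquired along parallel scanning lines.
        # line_positions: the y coordinate of each scanning line.
        order = np.argsort(line_positions)
        nx, nz = tomograms[0].shape
        stack = np.zeros((nx, len(tomograms), nz))
        for j, idx in enumerate(order):
            # pixel (x, z) of scanning line idx becomes voxel (x, y_j, z),
            # expressing all tomograms in one three-dimensional system
            stack[:, j, :] = tomograms[idx]
        return stack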

(Image analysis unit)
The image analysis unit 232 operates when a plurality of three-dimensional images representing different parts of the fundus oculi Ef have been acquired. The image analysis unit 232 determines the (relative) positional relationship of the plurality of three-dimensional images by analyzing them, and then expresses each of the plurality of three-dimensional images in a single three-dimensional coordinate system based on this positional relationship. Here, 'expressed in a single three-dimensional coordinate system' means that the position of each voxel (or pixel) of each three-dimensional image is expressed by coordinates defined in that coordinate system; each three-dimensional image can thereby be embedded in the three-dimensional space defined by the coordinate system. The image analysis unit 232 is an example of the 'analysis means' of the present invention.

In order to execute the above processing, the image analysis unit 232 includes an integrated image forming unit 233, a blood vessel region specifying unit 234, a layer region specifying unit 235, an array specifying unit 236, a fundus image analysis unit 237, and a three-dimensional coordinate system setting unit ('3D coordinate system setting unit' in FIG. 6) 238.

(Integrated image forming unit)
The integrated image forming unit 233 creates an image (integrated image) obtained by integrating, in the depth direction (z direction), the tomographic images Gi formed by the image forming unit 220. More specifically, it integrates each depth-direction image Gij (described later; see FIG. 8) constituting a tomographic image Gi along the depth direction to form a dot-like image.

Here, 'integrating in the depth direction' means summing (projecting), along the depth direction, the luminance values (pixel values) at the respective depth positions of a depth-direction image Gij. The dot-like image obtained by integrating a depth-direction image Gij therefore has a luminance value equal to the sum of the luminance values at the respective z positions of that image.

By integrating the depth-direction images Gij of all the tomographic images Gi in this way, the integrated image forming unit 233 forms an integrated image composed of m × n dot-like images distributed two-dimensionally in the xy plane within the scanning region R of the signal light LS (described later; see FIG. 7). This integrated image represents the form of the surface of the fundus oculi Ef within the scanning region R, similarly to the fundus oculi image Ef′. The integrated image forming unit 233 sends the formed integrated images to the blood vessel region specifying unit 234.

The integrated image forming unit 233 can likewise form an integrated image of a three-dimensional image (volume data) of the fundus oculi Ef. That is, an integrated image can be formed from a three-dimensional image by integrating, along the depth direction, the voxel values (luminance values) of the voxels arranged in the depth direction.
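
A minimal sketch of this depth-direction integration, assuming the three-dimensional image is held as a volume array indexed (x, y, z):

    import numpy as np

    def integrated_image(volume):
        # Sum the luminance values along each depth-direction (z) line;
        # the result is a two-dimensional image of the fundus surface,
        # comparable to the fundus oculi image Ef' within the scan region.
        return volume.sum(axis=2)   # volume indexed (x, y, z)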

The integrated image is described in detail in Japanese Patent Application No. 2005-337628 by the present inventors; it is also called a projection image. The integrated image forming unit 233 is an example of the 'integrated image forming means' of the present invention.

(Blood vessel region specifying unit)
The blood vessel region specifying unit 234 specifies a blood vessel region in each integrated image sent from the integrated image forming unit 233. A blood vessel region means an image region corresponding to a blood vessel of the fundus. The blood vessel region specifying unit 234 is an example of the 'image region specifying means' of the present invention.

The process of specifying the blood vessel region can be performed using any known method. For example, the pixel values (luminance values) of the integrated image are analyzed, the differences between the pixel values of adjacent pixels are calculated, and adjacent pixels whose difference exceeds a predetermined value are searched for; the boundary between the blood vessel region and the other regions is thereby detected, and the blood vessel region in the integrated image can be specified. This process exploits the fact that the difference (differential coefficient, etc.) of the pixel values between the blood vessel region and the other regions of the integrated image is large.

Alternatively, a threshold value of the pixel values corresponding to blood vessel regions may be stored in advance, and the blood vessel region may be specified by processing the integrated image with this threshold. The threshold can be set, for example, by collecting the pixel values of the blood vessel regions in a large number of previously acquired integrated images and computing statistics of these pixel values (average, median, standard deviation, etc.). The blood vessel region specifying unit 234 sends the blood vessel region specifying result for each integrated image (for example, the coordinate values of the blood vessel region) to the three-dimensional coordinate system setting unit 238.
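
Both specification methods reduce to simple array operations; the sketch below implements the adjacent-pixel-difference variant, with the threshold treated as an assumed, empirically chosen parameter.

    import numpy as np

    def vessel_boundary_mask(integrated_img, threshold):
        # Differences between adjacent pixels, in x and in y; 'threshold'
        # is an assumed, empirically set value (see the text above).
        gx = np.abs(np.diff(integrated_img, axis=0,
                            append=integrated_img[-1:, :]))
        gy = np.abs(np.diff(integrated_img, axis=1,
                            append=integrated_img[:, -1:]))
        # Pixels whose luminance difference to a neighbour exceeds the
        # threshold are candidate blood-vessel boundary pixels.
        return (gx > threshold) | (gy > threshold)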

(Layer region specifying unit)
The layer region specifying unit 235 receives the plurality of three-dimensional images of the fundus oculi Ef formed by the three-dimensional image forming unit 231 and specifies a layer region in each three-dimensional image. A layer region means an image region corresponding to a predetermined layer of the fundus. The layer region specifying unit 235 is an example of the 'image region specifying means' of the present invention.

Incidentally, the fundus is known to have a multilayer structure. Specifically, the fundus comprises, in order from the fundus surface in the depth direction, the retina, the choroid, and the sclera. The retina in turn comprises, in order from the fundus surface in the depth direction, the inner limiting membrane, the nerve fiber layer, the ganglion cell layer, the inner plexiform layer, the inner nuclear layer, the outer plexiform layer, the outer nuclear layer, the external limiting membrane, the photoreceptor layer, and the retinal pigment epithelium layer.

The fundus oculi observation device 1 forms OCT images depicting this layer structure of the fundus. The layer region may be an image region corresponding to any of the above layers, or an image region corresponding to the boundary between two adjacent layers. This is because, in an image depicting a layer structure, the boundary regions are determined unambiguously once the layer regions are determined, and conversely the layer regions are determined unambiguously once the boundary regions are determined.

  In the OCT image, the layer area is depicted with a pixel value (luminance value) different from that of the other areas. Further, the layer region in the OCT image is depicted with a specific pixel value (luminance value) according to the reflection characteristics and scattering characteristics at each layer (refractive index boundary).

  The layer region specifying unit 235 specifies a layer region corresponding to a predetermined layer (or a layer boundary) from the three-dimensional image by analyzing the pixel values of the three-dimensional image of the fundus oculi Ef depicted in this manner. As a specific example, the layer region specifying unit 235 can specify the layer region drawn with the highest luminance among the layer regions drawn in the three-dimensional image. The layer region specifying unit 235 can also specify the layer region located at the deepest part. The aspect (luminance value, depth, etc.) of the layer region to be specified can be set as appropriate.

  The process of specifying the layer region can be executed using the threshold value of the pixel value, the differential coefficient of the pixel value, etc., similarly to the specifying process of the blood vessel region. The layer region specifying unit 235 sends the layer region specifying result (for example, the coordinate value of the layer region) of each three-dimensional image to the three-dimensional coordinate system setting unit 238.
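
  A minimal sketch of this per-column analysis, assuming the three-dimensional image is stored as a NumPy array in (z, y, x) order: the “brightest” mode picks the depth of maximum luminance in each A-scan column, and the “deepest” mode picks the deepest sample above a simple threshold. The threshold choice is illustrative only.

```python
import numpy as np

def specify_layer_region(volume, mode="brightest"):
    """Return a 2D map of detected layer depths, one per A-scan column."""
    if mode == "brightest":
        # depth index of the highest-luminance sample in each column
        return np.argmax(volume, axis=0)
    # "deepest": last z index whose value exceeds an assumed threshold
    thresh = volume.mean() + volume.std()
    above = volume > thresh
    # reverse z, find the first hit, convert back to a top-down index
    depth = volume.shape[0] - 1 - np.argmax(above[::-1], axis=0)
    depth[~above.any(axis=0)] = -1  # mark columns with no layer found
    return depth
```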

(Array specifying unit)
The array specifying unit 236 receives the plurality of three-dimensional images of the fundus oculi Ef formed by the three-dimensional image forming unit 231. The storage unit 212 stores fixation position information associated with each three-dimensional image. The array specifying unit 236 specifies the array of the plurality of three-dimensional images based on the fixation position information. The array specifying unit 236 is an example of the “array specifying means” of the present invention.

  The plurality of three-dimensional images represent different parts of the eye E as described above. Such a plurality of three-dimensional images are acquired by scanning the signal light LS in different scanning regions in a state where the eye E is fixed at different fixation positions. Therefore, the plurality of fixation position information corresponding to the plurality of three-dimensional images represent different fixation positions.

  In the measurement for acquiring a panoramic image of the fundus oculi Ef, for example, the arrangement of a plurality of scanning regions is set in advance. The main control unit 211 controls the LCD 140 to display the internal fixation target at the display position corresponding to each scanning area, thereby fixing the eye E to the fixation position corresponding to each scanning area.

  The array specifying unit 236 can specify the array of the plurality of three-dimensional images by specifying the array of the plurality of fixation positions (the array of the plurality of scanning regions) based on the plurality of pieces of fixation position information. The array specifying unit 236 sends the result of specifying the array of the plurality of three-dimensional images to the three-dimensional coordinate system setting unit 238.
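
  As an illustration, specifying the array can be as simple as sorting the volumes by their fixation positions, assuming the preset scanning regions form a row-by-row grid. The function below is a hypothetical sketch, not the device's actual logic.

```python
def specify_array(fixation_positions):
    """Order volumes by their fixation positions (a minimal sketch).

    fixation_positions: list of (x, y) fixation-target display
    positions, one per 3D image. Returns image indices arranged
    row by row (top to bottom, then left to right), one plausible
    way to recover the preset arrangement of scanning regions.
    """
    return sorted(range(len(fixation_positions)),
                  key=lambda k: (fixation_positions[k][1],
                                 fixation_positions[k][0]))
```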

(Fundus image analysis unit)
The fundus image analysis unit 237 receives the fundus image Ef ′. The fundus image analysis unit 237 analyzes the fundus image Ef ′ and identifies a blood vessel region in the fundus image Ef ′. This process is executed in the same manner as the blood vessel region specifying unit 234, for example. The fundus image analysis unit 237 sends the result of specifying the blood vessel region of the fundus image Ef ′ (for example, the coordinate value of the blood vessel region) to the three-dimensional coordinate system setting unit 238.

(Three-dimensional coordinate system setting unit)
The three-dimensional coordinate system setting unit 238 receives the results of specifying the blood vessel regions of the plurality of integrated images, the results of specifying the layer regions of the plurality of three-dimensional images, the result of specifying the array of the plurality of three-dimensional images, and the result of specifying the blood vessel region of the fundus oculi image Ef′. The three-dimensional coordinate system setting unit 238 obtains the positional relationship between the plurality of three-dimensional images based on these pieces of information, and expresses each of the three-dimensional images in a single three-dimensional coordinate system based on this positional relationship.

  An example of processing by the three-dimensional coordinate system setting unit 238 will be described. As a premise, it is assumed that the fundus oculi image Ef ′ is obtained by photographing the range of the fundus oculi Ef including all scanning regions of a plurality of three-dimensional images. Here, it is sufficient that the photographing range of the fundus oculi image Ef ′ includes at least a part of each scanning region. Further, the fundus oculi image Ef ′ may be an image obtained by one photographing, or may be a panoramic image of two or more fundus images obtained by photographing different ranges of the fundus oculi Ef. In the latter case, a panoramic image is formed as in the conventional case. The process for forming the panoramic image is performed by the image processing unit 230.

  The three-dimensional coordinate system setting unit 238 obtains the position of each integrated image in the fundus oculi image Ef′. This process is performed, for example, by aligning the blood vessel region of the fundus image Ef′ with the blood vessel region of each integrated image. More specifically, as in the conventional alignment of the fundus image Ef′ and an integrated image, this processing can be performed by applying an affine transformation (enlargement/reduction, translation, rotation) to the blood vessel region of the integrated image while searching for a region in the fundus image Ef′ that (substantially) matches it.

  At this time, the approximate position of the blood vessel region of each integrated image within the blood vessel region of the fundus image Ef′ can be determined with reference to the result of specifying the array of the plurality of three-dimensional images. That is, since the array of the plurality of three-dimensional images represents the array of the plurality of integrated images based on them, the approximate position of each integrated image can be determined based on the result of specifying the array.

  By executing such processing for each integrated image, the positions of the plurality of integrated images in the fundus image Ef′ are obtained. Furthermore, this result gives the positions of the plurality of three-dimensional images (via the fundus oculi image Ef′). That is, since an integrated image is obtained by integrating the corresponding three-dimensional image in the depth direction (z direction), there is no shift in the xy directions between the integrated image and the three-dimensional image (a shift due to the curvature of the fundus may occur, but this deviation can be corrected by a known technique).
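
  A much-simplified sketch of this alignment, restricted to translation only (the text above describes a full affine search including scaling and rotation): the integrated image is matched against the fundus image by normalized cross-correlation within a small window around the approximate position obtained from the array of scanning regions. All names and the search radius are illustrative.

```python
import numpy as np

def register_projection(fundus, projection, approx_xy, search=20):
    """Translation-only refinement of an integrated image's position."""
    h, w = projection.shape
    best, best_xy = -np.inf, approx_xy
    p = (projection - projection.mean()) / (projection.std() + 1e-9)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = approx_xy[1] + dy, approx_xy[0] + dx
            if y0 < 0 or x0 < 0:
                continue
            patch = fundus[y0:y0 + h, x0:x0 + w]
            if patch.shape != (h, w):
                continue
            # normalized cross-correlation score for this candidate
            q = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = (p * q).mean()
            if score > best:
                best, best_xy = score, (x0, y0)
    return best_xy
```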

  The three-dimensional coordinate system setting unit 238 sets a three-dimensional coordinate system (referred to as the reference three-dimensional coordinate system) for expressing the plurality of three-dimensional images. As the reference three-dimensional coordinate system, for example, an xyz coordinate system consisting of the xy coordinate system in which the fundus oculi image Ef′ is defined and a z coordinate axis orthogonal thereto can be used. Each integrated image is originally defined in the same coordinate system, and its coordinates are converted by the above-described alignment (affine transformation); therefore, adopting this reference three-dimensional coordinate system requires no further coordinate conversion. Other reference three-dimensional coordinate systems can also be adopted as appropriate.

  With the above processing, the xy coordinate values of a plurality of three-dimensional images can be expressed using the xy coordinate values of a (single) reference three-dimensional coordinate system.

  Further, the three-dimensional coordinate system setting unit 238 expresses the positions (z coordinate values) of the plurality of three-dimensional images in the fundus depth direction using the z coordinate value of the reference three-dimensional coordinate system. For this purpose, the three-dimensional coordinate system setting unit 238 obtains the positional relationship of the plurality of three-dimensional images in the fundus depth direction by, for example, changing the z-direction position of each three-dimensional image so as to connect the layer regions corresponding to a predetermined layer of the fundus oculi Ef, based on the results of specifying the layer regions of the plurality of three-dimensional images.

  At this time, referring to the result of specifying the array of the plurality of three-dimensional images, the arrangement of the plurality of layer regions can be acquired, and the z-direction position of each three-dimensional image can be changed so as to connect adjacent layer regions.

  Thereby, z coordinate values of a plurality of three-dimensional images can be expressed using z coordinate values of a (single) reference three-dimensional coordinate system.
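
  One way to picture this z-alignment, assuming the layer depth surfaces and the overlap masks between adjacent volumes are already known: chain z-shifts outward from a reference volume so that the common layer agrees (in the median) across each overlap. This is a sketch under those assumptions, not the device's prescribed algorithm.

```python
import numpy as np

def z_offsets(layer_surfaces, overlaps):
    """Chain z-shifts so a common layer runs continuously across volumes.

    layer_surfaces: list of 2D arrays giving the detected depth of the
    chosen layer in each volume's own coordinates. overlaps: pairs
    ((k1, mask1), (k2, mask2)) of adjacent volumes, with boolean masks
    assumed to select the same corresponding pixels of the overlap.
    Volume 0 is the reference; each neighbour receives the median
    depth difference over the overlap, connecting the layer regions.
    """
    shifts = {0: 0.0}
    changed = True
    while changed:
        changed = False
        for (k1, m1), (k2, m2) in overlaps:
            for a, ma, b, mb in ((k1, m1, k2, m2), (k2, m2, k1, m1)):
                if a in shifts and b not in shifts:
                    diff = np.median(layer_surfaces[a][ma] + shifts[a]
                                     - layer_surfaces[b][mb])
                    shifts[b] = float(diff)
                    changed = True
    return shifts
```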

  As described above, the x coordinate value, the y coordinate value, and the z coordinate value of a plurality of three-dimensional images can be expressed by a (single) reference three-dimensional coordinate system.

  The image processing unit 230 includes a microprocessor 201, a RAM 202, a ROM 203, a hard disk drive 204 (control program 204a), and the like.

(User interface)
A user interface (UI) 240 is provided with a display unit 240A and an operation unit 240B. The display unit 240A includes a display device such as the display 207. The display unit 240A is an example of the “display unit” in the present invention. The operation unit 240B includes input devices such as a keyboard 205 and a mouse 206, and operation devices.

[Signal light scanning and image processing]
An example of the scanning mode of the signal light LS and the mode of image processing will be described.

  As described above, the galvanometer mirror 141A scans the signal light LS in the horizontal direction (x direction in FIG. 1), and the galvanometer mirror 141B scans the signal light LS in the vertical direction (y direction in FIG. 1). Further, the signal light LS can be scanned in an arbitrary direction on the xy plane by operating both galvanometer mirrors 141A and 141B simultaneously.

  FIG. 7 shows an example of a scanning mode of the signal light LS for forming an image of the fundus oculi Ef. FIG. 7A shows an example of a scanning mode of the signal light LS when the fundus oculi Ef is viewed from the direction in which the signal light LS enters the eye E (that is, when viewed from the −z direction to the + z direction in FIG. 1). Represents. FIG. 7B shows an example of an arrangement mode of scanning points (measurement positions) in each scanning line on the fundus oculi Ef.

  As shown in FIG. 7A, the signal light LS is scanned in a predetermined scanning region R. In the scanning region R, a plurality (m) of scanning lines R1 to Rm extending in the x direction are set. The scanning lines Ri (i = 1 to m) are arranged in the y direction. The direction (x direction) of each scanning line Ri is referred to as “main scanning direction”, and the direction (y direction) perpendicular thereto is referred to as “sub scanning direction”.

  On each scanning line Ri, as shown in FIG. 7B, a plurality (n) of scanning points Ri1 to Rin are set. Note that the positions of the scanning region R, the scanning line Ri, and the scanning point Rij are appropriately set before measurement.

  In order to execute the scanning shown in FIG. 7, the main control unit 211 first controls the galvanometer mirrors 141A and 141B to set the incidence target of the signal light LS to the scanning start position RS (scanning point R11) on the first scanning line R1. Subsequently, the main control unit 211 controls the low coherence light source 160 to flash the low coherence light L0, causing the signal light LS to be incident on the scanning start position RS. The CCD 184 detects the interference light LC based on the reflected light of the signal light LS at the scanning start position RS and generates a detection signal.

  Next, the main control unit 211 controls the galvanometer mirror 141A to scan the signal light LS in the main scanning direction and set the incidence target at the scanning point R12, flashes the low coherence light L0, and causes the signal light LS to be incident on the scanning point R12. The CCD 184 detects the interference light LC based on the reflected light of the signal light LS at the scanning point R12 and generates a detection signal.

  Similarly, the main control unit 211 sequentially shifts the incidence target of the signal light LS to the scanning points R13, R14, ..., R1(n−1), R1n, flashing the low coherence light L0 at each scanning point, thereby generating a detection signal corresponding to each scanning point.

  When the measurement at the last scanning point R1n of the first scanning line R1 is completed, the main control unit 211 controls the galvanometer mirrors 141A and 141B at the same time so that the incident target of the signal light LS is changed along the line changing scan r. Move to the first scanning point R21 of the second scanning line R2. Then, the main control unit 211 causes the same measurement to be performed for each scanning point R2j (j = 1 to n) of the second scanning line R2, and generates a detection signal corresponding to each scanning point R2j.

  Similarly, the main control unit 211 performs the measurement for each of the third scanning line R3, ..., the (m−1)th scanning line R(m−1), and the mth scanning line Rm, generating a detection signal corresponding to each scanning point. Note that the symbol RE on the scanning line Rm is the scanning end position, corresponding to the scanning point Rmn.

  In this way, the main control unit 211 generates m × n detection signals corresponding to the m × n scanning points Rij (i = 1 to m, j = 1 to n) in the scanning region R. A detection signal corresponding to the scanning point Rij may be denoted Dij.

  In the above control, the main control unit 211 acquires position information (coordinates in the xy coordinate system) of each scanning point Rij when the galvanometer mirrors 141A and 141B are operated. This position information (scanning position information) is referred to when an OCT image is formed.
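
  The control sequence just described amounts to a raster loop over the scanning points, recording a detection signal Dij and the mirror position for each. The sketch below abstracts the hardware behind two hypothetical callbacks; neither name corresponds to an actual interface of the device.

```python
def raster_scan(m, n, set_mirrors, flash_and_detect):
    """Minimal sketch of the raster scan control described above.

    set_mirrors(i, j): hypothetical callback steering the galvanometer
    mirrors to scanning point Rij and returning its (x, y) position.
    flash_and_detect(): hypothetical callback that flashes the low
    coherence light and returns the CCD detection signal.
    Returns the m x n detection signals and scanning position info.
    """
    signals, positions = {}, {}
    for i in range(1, m + 1):          # scanning lines R1..Rm
        for j in range(1, n + 1):      # scanning points Ri1..Rin
            positions[(i, j)] = set_mirrors(i, j)
            signals[(i, j)] = flash_and_detect()   # Dij
        # moving to the next line corresponds to the line-changing
        # scan r between Rin and R(i+1)1
    return signals, positions
```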

  Next, an example of image processing when the scan shown in FIG. 7 is performed will be described.

  The image forming unit 220 forms a tomographic image Gi along each scanning line Ri (main scanning direction). The three-dimensional image forming unit 231 forms a three-dimensional image of the fundus oculi Ef based on the tomographic image Gi.

  The tomographic image forming process includes a two-stage arithmetic process, as in the prior art. In the first stage, an image in the depth direction (z direction shown in FIG. 1) of the fundus oculi Ef at the scanning point Rij is formed based on each detection signal Dij. In the second stage, the tomographic image Gi along the scanning line Ri is formed by arranging the images in the depth direction at the scanning points Ri1 to Rin based on the scanning position information. By executing the above processing for each scanning line Ri, m tomographic images G1 to Gm are obtained.
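
  Since the device uses Fourier-domain OCT, the first stage essentially Fourier-transforms each detected spectrum into a depth profile, and the second stage arranges the profiles along the scanning line. The following is a bare-bones sketch; the windowing, resampling to linear wavenumber, and dispersion compensation of a real system are omitted.

```python
import numpy as np

def form_tomogram(spectra):
    """Two-stage B-scan formation (a Fourier-domain OCT sketch).

    spectra: array of shape (n, n_samples); row j is the detected
    spectral interference signal Dij at scanning point Rij.
    """
    # Stage 1: inverse FFT turns each spectrum into a depth profile;
    # keep the positive-depth half of the transform only.
    profiles = np.abs(np.fft.ifft(spectra, axis=1))
    profiles = profiles[:, : spectra.shape[1] // 2]
    # Stage 2: arrange the A-scans side by side so that rows are
    # depth (z) and columns follow the main scanning direction (x).
    return profiles.T
```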

  The three-dimensional image forming unit 231 generates the stack data of the fundus oculi Ef by expressing the tomographic images G1 to Gm in one three-dimensional coordinate system based on the scanning position information and arranging them in a three-dimensional space. In addition, the three-dimensional image forming unit 231 generates the volume data of the fundus oculi Ef by performing interpolation processing that interpolates images between adjacent tomographic images Gi and G(i+1) in the stack data, thereby defining voxels. These three-dimensional images are defined by a three-dimensional coordinate system (x, y, z) based on the scanning position information, for example.
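
  The interpolation step can be pictured as inserting linearly blended slices between adjacent tomograms to define voxels on a regular grid. Linear blending is an illustrative choice here, not necessarily the device's interpolation method.

```python
import numpy as np

def volume_from_stack(tomograms, upsample=2):
    """Build volume data from stack data by slice interpolation.

    tomograms: list of m equally spaced B-scans G1..Gm, each a 2D
    array (z, x), already placed in one coordinate system via the
    scanning position information. Linear interpolation between
    adjacent tomograms Gi and G(i+1) fills in 'upsample - 1'
    intermediate slices, defining voxels on a regular grid.
    """
    slices = []
    for a, b in zip(tomograms[:-1], tomograms[1:]):
        for s in range(upsample):
            t = s / upsample
            slices.append((1 - t) * a + t * b)  # blend adjacent slices
    slices.append(tomograms[-1])
    return np.stack(slices)  # shape: (y, z, x)
```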

  Further, the image processing unit 230 can form a tomographic image at an arbitrary cross section based on the three-dimensional image of the fundus oculi Ef. When a cross section is designated, the image processing unit 230 specifies the position of each scanning point (and/or each interpolated depth direction image) on the designated cross section, extracts the depth direction image (and/or interpolated depth direction image) at each specified position from the three-dimensional image, and arranges the extracted depth direction images based on the scanning position information and the like, thereby forming a tomographic image at the designated cross section.

  Note that an image Gmj shown in FIG. 8 represents an image in the depth direction at the scanning point Rmj on the scanning line Rm. Similarly, an image in the depth direction at the scanning point Rij, which is formed in the above-described first stage processing, is denoted by reference symbol Gij.

  The scanning mode of the signal light LS by the fundus oculi observation device 1 is not limited to the above. For example, the signal light LS can be scanned only in the horizontal direction (x direction), only in the vertical direction (y direction), vertically and horizontally in a cross shape, radially, circularly, concentrically, or spirally. That is, as described above, since the scanning unit 141 is configured to scan the signal light LS independently in the x direction and the y direction, the signal light LS can be scanned along an arbitrary locus on the xy plane.

[Mode of operation]
The operation mode of the fundus oculi observation device 1 will be described with reference to FIGS. 9 to 12. Hereinafter, an operation mode for acquiring a plurality of three-dimensional images of the fundus oculi Ef and an operation mode for forming a panoramic image of the fundus oculi Ef will be described separately. In the following description, reference numerals not shown are used as appropriate.

[Acquisition of 3D image]
The operation mode for acquiring a plurality of three-dimensional images of the fundus oculi Ef will be described with reference to FIG. 9 and FIG. 10. FIG. 9 is a flowchart showing an example of this operation mode.

  First, a plurality of scanning regions R(k) (k = 1, 2, ..., K) are set on the fundus oculi Ef (S1). The scanning regions R(k) can be set, for example, by setting a fixation position corresponding to each scanning region R(k) while observing an observation image of the fundus oculi Ef obtained by continuously irradiating the fundus with infrared light (a fundus image Ef′ obtained with illumination light from the imaging light source 103).

  An example of how the scanning regions R(k) are set is shown in FIG. 10. In this example, four scanning regions R(1), R(2), R(3), and R(4) are set on the fundus oculi Ef. At this time, it is desirable to set the regions so that partial areas of adjacent scanning regions overlap each other. Such an overlapping area is used as a “gluing margin” when adjacent three-dimensional images are joined (bonded).

  Next, measurement for acquiring an OCT image of each scanning region R (k) set in step 1 is performed (S2).

  This measurement work is performed as follows, for example. First, the internal fixation target is displayed at the display position corresponding to the scanning region R (1), and the signal light LS is scanned as shown in FIG. 7 in a state where the eye E is fixed at the fixation position. Next, the display position of the internal fixation target is changed to the display position corresponding to the scanning region R (2), and the signal light LS is similarly scanned in a state where the eye to be examined is fixed at the fixation position. Measurements are similarly performed for the scanning regions R (3) and R (4).

  When the signal light LS is scanned in each scanning region R(k), the fixation position acquisition unit 213 acquires the fixation position of the eye E (the display position of the internal fixation target) and generates fixation position information. Note that the fixation position information may be set based on the fixation positions set in step 1. The main control unit 211 stores the measurement result (detection signals) of each scanning region R(k) in the storage unit 212 together with the corresponding fixation position information.

  When the measurement for the last scanning region R (K) is completed, the fundus oculi Ef is photographed to acquire the fundus oculi image Ef ′ (S3). The acquired fundus image Ef ′ is stored in the storage unit 212 by the main control unit 211.

  At this time, a color fundus image Ef′ may be acquired using the illumination light (visible light) from the observation light source 101, or a monochrome fundus image Ef′ may be acquired using the illumination light (infrared light) from the imaging light source 103. Alternatively, a fundus image Ef′ (fluorescence image) in which the blood vessels are contrasted may be acquired by intravenously injecting a fluorescent agent. When a fundus image Ef′ acquired in the past is used, or when the fundus image Ef′ is to be acquired separately later, it is not necessary to acquire the fundus image Ef′ at this stage.

  The image forming unit 220 forms the tomographic images G(k)i (i = 1 to m) in each scanning region R(k) based on the measurement results of step 2 (S4). Note that either the process of forming the tomographic images G(k)i (S4) or the process of acquiring the fundus oculi image Ef′ (S3) may be performed first. These processes may also be performed in parallel.

  Next, the three-dimensional image forming unit 231 forms a three-dimensional image G(k) of the fundus oculi Ef in each scanning region R(k) based on the m tomographic images G(k)i in that scanning region (S5).

  The main control unit 211 stores the three-dimensional image G (k) of each scanning region R (k) in the storage unit 212 together with the corresponding fixation position information (S6). This completes the operation for acquiring a plurality of three-dimensional images of the fundus oculi Ef.

[Panorama image formation]
An operation mode for forming a panoramic image of the fundus oculi Ef will be described with reference to FIGS. 11 and 12. FIG. 11 is a flowchart showing an example of this operation mode.

  The main control unit 211 reads the three-dimensional image G (k) of the fundus oculi Ef and the fixation position information from the storage unit 212 and sends them to the image processing unit 230. Further, the main control unit 211 reads the fundus image Ef ′ from the storage unit 212 and sends it to the image processing unit 230. The image analysis unit 232 executes the following process to form a panoramic image of the fundus oculi Ef.

  The integrated image forming unit 233 integrates each three-dimensional image G(k) in the depth direction to form an integrated image P(k) of each scanning region R(k) (S11). Furthermore, the blood vessel region specifying unit 234 specifies the blood vessel region V(k) in each integrated image P(k) (S12). This identification result is sent to the three-dimensional coordinate system setting unit 238.

  Further, the layer region specifying unit 235 analyzes each three-dimensional image G (k) and specifies the layer region M (k) in each three-dimensional image G (k) (S13). This identification result is sent to the three-dimensional coordinate system setting unit 238.

  Further, the array specifying unit 236 specifies an array of a plurality of three-dimensional images G (k) based on the fixation position information (S14). This identification result is sent to the three-dimensional coordinate system setting unit 238.

  In addition, the fundus image analysis unit 237 analyzes the fundus image Ef ′ and specifies the blood vessel region V in the fundus image Ef ′ (S15).

  Note that the order in which the processes S11 to S15 are executed is arbitrary (however, step S12 is always executed after step S11).

  The three-dimensional coordinate system setting unit 238 sets a reference three-dimensional coordinate system for expressing a plurality of three-dimensional images G (k) (S16). The reference three-dimensional coordinate system is, for example, as described above, a coordinate system including an xy coordinate system that defines the fundus image Ef ′ and a z coordinate axis in the fundus depth direction.

  The three-dimensional coordinate system setting unit 238 obtains the position of the blood vessel region V(k) of each integrated image P(k) within the blood vessel region V of the fundus image Ef′, thereby obtaining the position of each integrated image P(k) with respect to the fundus image Ef′. Thereby, the position of each three-dimensional image G(k) with respect to the fundus oculi image Ef′ is obtained. The three-dimensional coordinate system setting unit 238 expresses the position of each three-dimensional image G(k) in the fundus surface direction using the xy coordinates of the reference three-dimensional coordinate system (S17).

  Further, the three-dimensional coordinate system setting unit 238 changes the z-direction positions of the plurality of three-dimensional images G(k) so as to connect the plurality of layer regions M(k), and expresses the position (z coordinate value) of each three-dimensional image G(k) in the fundus depth direction using the z coordinate value of the reference three-dimensional coordinate system (S18).

  As described above, the three-dimensional position (the position in the fundus surface direction and the position in the fundus depth direction) of each three-dimensional image G(k) is expressed by the single reference three-dimensional coordinate system (x, y, z).

  The image analysis unit 232 converts a plurality of three-dimensional images G (k) into a reference three-dimensional space (a three-dimensional space defined by the reference three-dimensional coordinate system) based on the coordinate values expressed by the reference three-dimensional coordinate system. By embedding each, three-dimensional image data (panoramic three-dimensional image data) G corresponding to the range of the fundus oculi Ef over a plurality of scanning regions R (k) is formed (S19). The main control unit 211 stores the panoramic 3D image data G in the storage unit 212.
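
  Step S19 can be pictured as writing each aligned volume into a common reference array at the integer offsets obtained in steps S17 and S18. In this sketch the overlapping regions (“gluing margins”) are simply overwritten; a real implementation might blend them, and the offsets are assumed to fit within the reference space.

```python
import numpy as np

def embed_volumes(volumes, offsets, ref_shape, fill=0):
    """Embed aligned volumes into the reference 3D space (a sketch).

    volumes: list of 3D arrays G(k) in (z, y, x) order. offsets:
    list of integer (z, y, x) positions of each volume in the
    reference coordinate system, assumed non-negative and in range.
    """
    panorama = np.full(ref_shape, fill, dtype=float)
    for vol, (z0, y0, x0) in zip(volumes, offsets):
        dz, dy, dx = vol.shape
        # later volumes overwrite earlier ones in overlap regions
        panorama[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx] = vol
    return panorama
```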

  An example of the form of the panoramic 3D image data G is shown in FIG. The panoramic three-dimensional image data G shown in FIG. 12 is obtained when the scanning region R (k) (k = 1 to 4) shown in FIG. 10 is set. The panoramic 3D image data G is image data obtained by aligning the 3D image G (k) in each scanning region R (k) and embedding it in the reference 3D space.

  The adjacent three-dimensional images G (1) and G (2) have partial areas overlapping each other. This overlapping region g (1, 2) corresponds to the overlapping range of the scanning region described in step 1 of FIG.

  In the actual panoramic 3D image data G, a shift may occur between the tomographic images G(k)i and between the 3D images G(k) due to movement of the eye E during measurement. Factors include eye movement of the eye E and movement caused by the heartbeat.

  In particular, in this embodiment, since the fundus oculi Ef is measured over a wide range by changing the fixation position of the eye E, a shift due to the rotational movement of the eyeball may occur. For example, when the fixation position is moved in the x direction (lateral direction), a deviation may occur in the y direction (vertical direction) due to rotation. Further, when the fixation position is moved in the y direction, a deviation in the x direction due to rotation may occur.

[Display example]
A display example of various information using the panoramic 3D image data G of the fundus oculi Ef will be described.

[First display example]
In the diagnosis of fundus diseases, the thickness of the retina is often referred to. For example, in the diagnosis of glaucoma and retinitis pigmentosa, the thickness of the retina layer is regarded as an important diagnostic material.

  According to an OCT apparatus, it is possible to acquire distribution information of the thickness of the retinal layers by analyzing a tomographic image or a three-dimensional image of the fundus. It is also possible to acquire standard layer thickness distribution information of normal eyes in advance and to acquire the distribution of displacement with respect to the standard layer thickness. Such techniques are disclosed in, for example, Japanese Patent Application Laid-Open No. 2004-105708, Japanese Patent Application No. 2006-160896, and Japanese Patent Application No. 2007-234695.

  The fundus oculi observation device 1 (image processing unit 230) analyzes the tomographic image G (k) i, the three-dimensional image G (k), and the panoramic three-dimensional image data G using such a known technique, thereby obtaining the fundus oculi Ef. The layer thickness distribution of the retina can be obtained. In particular, according to the fundus oculi observation device 1, it is possible to acquire a layer thickness distribution over a wider range than a conventional OCT device by analyzing panoramic three-dimensional image data. In addition, according to the fundus oculi observation device 1, it is possible to acquire the displacement distribution with respect to the standard layer thickness over a wider range than before.

  An example of the display mode of the layer thickness distribution of the retina by the fundus oculi observation device 1 is shown in FIG. In this display mode, a plurality of scanning regions R (k), that is, measurement regions of the three-dimensional image G (k) are presented on the fundus oculi image Ef ′, and the layer thickness distribution acquired for each scanning region R (k). Are connected and displayed. This layer thickness distribution is a representation of the distribution in different display colors for each range of layer thickness values set in stages. Note that stepwise changes in the layer thickness may be expressed using gradations or patterns instead of display colors. The distribution of the displacement with respect to the standard layer thickness can be displayed in the same manner.
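
  The stepwise color coding can be sketched as a simple binning of thickness values into stages, each mapped to a display color. The boundary values and palette below are placeholders, not clinical standards.

```python
import numpy as np

def thickness_to_colors(thickness, bounds, palette):
    """Map a layer thickness distribution to stepwise display colors.

    thickness: 2D array of layer thickness values. bounds: increasing
    thresholds splitting the values into stages. palette: one RGB
    triple per stage (len(bounds) + 1 entries).
    """
    stages = np.digitize(thickness, bounds)  # stage index per pixel
    return np.asarray(palette)[stages]       # RGB image

# Usage with placeholder stage limits and colors:
colors = thickness_to_colors(
    np.random.uniform(100, 400, (64, 64)),
    bounds=[150, 250, 350],
    palette=[(255, 0, 0), (255, 255, 0), (0, 255, 0), (0, 0, 255)])
```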

[Second display example]
As described above, the fundus oculi observation device 1 forms the panoramic 3D image data G so as to connect the layer regions M(k) of the plurality of 3D images G(k). On the other hand, each layer region M(k) is generally an image region having irregularities in the depth direction, as shown in FIG. 14.

  FIG. 14 shows tomographic images h(1) and h(2) at a certain cross section of the panoramic 3D image data G, instead of the 3D images G(k). Here, the tomographic image h(1) is a tomographic image of the three-dimensional image G(1), and the tomographic image h(2) is a tomographic image of the three-dimensional image G(2). The layer regions M(1) and M(2) represent the deepest layer of the retina (the retinal pigment epithelium layer); the same applies to the other layer regions M(k).

  The image processing unit 230 displaces the pixels (pixels, voxels) arranged in the z direction of the panoramic 3D image data G in the z direction so that each layer region M(k) becomes flat (that is, takes the same z coordinate value z0). That is, letting z(x, y) be the z coordinate value of the layer region M(k) at the position (x, y) in the fundus surface direction, this processing moves all the pixels aligned in the z direction at the position (x, y) by z(x, y) − z0 in the z direction.

  FIG. 15 shows the result of applying this processing to the tomographic images h(1) and h(2) shown in FIG. 14. FIG. 15 shows a tomographic image h(1)′ obtained by flattening the layer region M(1) and a tomographic image h(2)′ obtained by flattening the layer region M(2). Such processing is executed for each position (x, y) of the three-dimensional images G(k).
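
  The flattening operation is just a per-column shift along z by z(x, y) − z0, padding the vacated samples with a fill value. A sketch, assuming the volume is stored in (z, y, x) order and the detected layer depth is given per (y, x) column:

```python
import numpy as np

def flatten_volume(volume, layer_z, z0=None, fill=0):
    """Flatten a chosen layer region, as in FIG. 15 (a sketch).

    volume: 3D array in (z, y, x) order. layer_z: 2D array of the
    detected depth of the layer region M(k) at each (y, x) column.
    Every column of pixels is shifted along z by layer_z - z0 so the
    layer lands on the common coordinate z0.
    """
    if z0 is None:
        z0 = int(np.median(layer_z))
    flat = np.full_like(volume, fill)
    nz = volume.shape[0]
    for y in range(volume.shape[1]):
        for x in range(volume.shape[2]):
            shift = int(layer_z[y, x]) - z0
            # copy the in-range part of the column to its new depth
            src = volume[max(0, shift):nz + min(0, shift), y, x]
            dst0 = max(0, -shift)
            flat[dst0:dst0 + src.shape[0], y, x] = src
    return flat
```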

  By displaying an image based on the panoramic three-dimensional image data G in which the layer regions M(k) are flattened in this way, it becomes possible to visually (intuitively) grasp changes in the thickness of the layers of the fundus oculi Ef (retina).

  When the layer region M(k) is a layer other than the retinal pigment epithelium layer, the image processing unit 230 can display a similar image by analyzing the panoramic three-dimensional image data G, searching for the layer region corresponding to the retinal pigment epithelium layer, and performing the same processing so as to flatten that layer region.

  It is also possible to perform the same processing so as to flatten any layer other than the retinal pigment epithelium layer at the deepest part of the retina. Similar processing may be performed so as to flatten the choroid and sclera layers.

  The processing described above can be similarly performed on the tomographic image G (k) i and the three-dimensional image G (k).

[Action / Effect]
The operation and effect of the fundus oculi observation device 1 as described above will be described.

  The fundus oculi observation device 1 divides the low coherence light L0 into the signal light LS and the reference light LR, detects the interference light LC obtained by superimposing the signal light LS that has passed through the fundus oculi Ef and the reference light LR that has passed through the reference mirror 174, and functions as an OCT apparatus that forms an OCT image (particularly a three-dimensional image) of the fundus oculi Ef based on the detection result.

  Further, the fundus oculi observation device 1 forms a plurality of three-dimensional images G (k) representing different parts (a plurality of scanning regions R (k)) of the fundus oculi Ef, and analyzes these three-dimensional images G (k). The mutual positional relationship is obtained, and each three-dimensional image G (k) is expressed by one three-dimensional coordinate system (reference three-dimensional coordinate system) based on this positional relationship. Thereby, panoramic 3D image data G including a plurality of 3D images G (k) is formed.

  Then, the fundus oculi observation device 1 displays a plurality of three-dimensional images G (k) based on the panoramic three-dimensional image data G expressed in the reference three-dimensional coordinate system. At this time, the fundus oculi observation device 1 forms and displays a pseudo panoramic three-dimensional image viewed from a predetermined line-of-sight direction, for example, by performing the rendering process described above.

  According to the fundus oculi observation device 1 acting in this way, it is possible to create a panoramic image representing the three-dimensional form of the fundus oculi Ef. By observing the panoramic image, the operator can grasp the three-dimensional form of the fundus oculi Ef over a wide range.

  The process for obtaining the positional relationship between the plurality of three-dimensional images G (k) is executed as follows. That is, the fundus oculi observation device 1 specifies an image region corresponding to a predetermined part of the fundus oculi Ef in each three-dimensional image G (k), and obtains a positional relationship between the plurality of specified image regions, thereby obtaining a plurality of three-dimensional images. The positional relationship of G (k) is obtained.

  At this time, regarding the positional relationship in the fundus surface direction, the blood vessel regions in the respective three-dimensional images G(k) are specified, and the positional relationship of the plurality of three-dimensional images G(k) is obtained by connecting these blood vessel regions. Thus, by referring to the blood vessel regions, which are characteristically distributed in the fundus surface direction (xy directions), alignment in the fundus surface direction can be suitably performed.

  On the other hand, regarding the positional relationship in the fundus depth direction, the layer regions in the respective three-dimensional images G(k) are specified, and the positional relationship of the plurality of three-dimensional images G(k) is obtained by connecting these layer regions. In this way, by aligning a specific tissue (layer) of the fundus oculi Ef, alignment in the fundus depth direction can be suitably performed.

  Further, the fundus oculi observation device 1 has a function of forming a two-dimensional image (fundus image Ef ′) representing the surface form of the fundus oculi Ef. Then, the fundus oculi observation device 1 obtains the position of each three-dimensional image G (k) with respect to the fundus image Ef ′, and a two-dimensional coordinate system (xy coordinate system) in the fundus surface direction in which the fundus image Ef ′ is defined, and Each three-dimensional image G (k) is expressed by a three-dimensional coordinate system (reference three-dimensional coordinate system) composed of orthogonal coordinate axes (z coordinate axes) in the fundus depth direction.

  Further, the fundus oculi observation device 1 forms an accumulated image P (k) by accumulating each three-dimensional image G (k) in the fundus depth direction, and determines the position of each accumulated image P (k) in the fundus image Ef ′. As a result, the positional relationship in the fundus surface direction of a plurality of three-dimensional images G (k) is obtained.

  In this way, by expressing the plurality of three-dimensional images G(k) in a single three-dimensional coordinate system via the fundus oculi image Ef′, the positional relationship of the three-dimensional images G(k) in the fundus surface direction can be suitably obtained.

  In addition, the fundus oculi observation device 1 includes projection means (the LCD 140) that projects a fixation target onto the eye E to be examined. Further, the fundus oculi observation device 1 acquires fixation position information of the eye E when the signal light LS is applied to the fundus oculi Ef, and stores this fixation position information in association with the three-dimensional image G(k) based on that signal light LS. Then, the fundus oculi observation device 1 specifies the array of the plurality of three-dimensional images G(k) based on the fixation position information associated with each three-dimensional image G(k), and obtains the positional relationship of the plurality of three-dimensional images G(k) based on this array.

  Thus, by specifying the array of the three-dimensional images G(k) based on the fixation position of the eye E, it becomes possible to execute the process for obtaining the positional relationship of the three-dimensional images G(k) more quickly and accurately.

  In addition, when the fundus oculi observation device 1 forms a plurality of three-dimensional images G(k), the projection position of the fixation target can be changed so that adjacent three-dimensional images include overlapping regions. The fundus oculi observation device 1 then analyzes the overlapping regions of adjacent three-dimensional images and aligns the images of the overlapping regions, thereby obtaining the positional relationship of the plurality of three-dimensional images G(k).

  In this way, adjacent three-dimensional images are provided with overlapping regions, and the overlapping regions are pasted together like a “gluing margin”, so that the process of obtaining the positional relationship of the plurality of three-dimensional images G(k) can be executed more quickly and accurately. That is, by referring to the form of the overlapping regions (the form of the blood vessel region or the layer region), the positional relationship of the plurality of three-dimensional images G(k) is obtained by matching the overlapping regions of adjacent three-dimensional images, so that both the speed and the accuracy of the processing can be increased.

[Modification]
The configuration described above is merely an example for favorably implementing the present invention. Therefore, arbitrary modifications within the scope of the present invention can be made as appropriate.

  The fundus oculi observation device 1 can display an image corresponding to a part or the whole of the panoramic 3D image data G. The range of the display image can be arbitrarily designated via the user interface 240, for example. As a specific example, a pseudo three-dimensional image or the fundus image Ef′ corresponding to the entire panoramic three-dimensional image data G is displayed on the display unit 240A, and the desired range in the display image is designated with the operation unit 240B (for example, by a drag operation with the mouse 206). The image processing unit 230 performs rendering processing on the panoramic 3D image data G corresponding to the designated range. The main control unit 211 causes the display unit 240A to display the image obtained thereby.

  It is also possible to display a tomographic image at an arbitrary cross-sectional position of the panoramic 3D image data G. This cross-sectional position can also be arbitrarily designated using the operation unit 240B as described above, for example.

  Note that the display range can also be determined automatically instead of being designated manually as described above. For example, the image processing unit 230 analyzes the panoramic three-dimensional image data G and specifies an image region corresponding to a characteristic part of the fundus oculi Ef (macula, optic disc, etc.). This process can be executed by analyzing the panoramic three-dimensional image data G and detecting the unevenness of the fundus surface corresponding to the characteristic part. The image processing unit 230 determines the display range so as to include the image region specified in this way. In addition, if the fundus layer thickness can be obtained as described above, a range in which the layer thickness is characteristic (such as a thin portion or a portion where the deviation from the standard value is large) can be specified, and the display range can be determined so as to include this specified range.

  In the above embodiment, the positional relationship between the plurality of three-dimensional images G (k) is obtained via the fundus oculi image Ef ′ and the accumulated image P (k), but the present invention is not limited to this. Note that the image processing described below is executed by the image processing unit 230 (image analysis unit 232).

  For example, the positional relationship between the plurality of three-dimensional images G(k) can be obtained via only the fundus image Ef′, without using the integrated images P(k). As a specific example, first, an image region (fundus surface region) corresponding to the fundus surface in each three-dimensional image G(k) is extracted. This process can be easily performed by analyzing the pixel values of each three-dimensional image G(k) (for example, an image region corresponding to the boundary between the retina and the vitreous body can be extracted based on the change of the pixel values in the z direction).

  Next, the position of each fundus surface area in the fundus image Ef ′ is specified. This process can be executed, for example, by aligning a feature part (blood vessel region, macular region, optic disc region, etc.) in the fundus image Ef ′ with a feature region in the fundus surface region.

  Then, the positional relationship of the plurality of three-dimensional images G(k) can be obtained by determining the position of each three-dimensional image G(k) in the fundus surface direction based on the result of specifying the position of each fundus surface region in the fundus image Ef′.

  In the present invention, it is also possible to obtain the positional relationship between a plurality of three-dimensional images G (k) without using the fundus oculi image Ef ′. As a first specific example, first, an integrated image P (k) of each three-dimensional image G (k) is formed as in the above embodiment. At this time, it is desirable to set the scanning region R (k) in advance so that adjacent integrated images (adjacent three-dimensional images) have overlapping regions.

  Next, as in the conventional processing for forming a panoramic image from fundus images, the positional relationship of the plurality of integrated images P(k) is obtained based on the position of a characteristic part such as a blood vessel region, or on the fixation position of the eye E. Based on this positional relationship, the positional relationship of the plurality of three-dimensional images G(k) in the fundus surface direction can be obtained.

  The integrated image P(k) in this specific example is an example of the “two-dimensional image” of the present invention. The integrated image P(k) is formed by the integrated image forming unit 233 serving as the “integrated image forming means” of the present invention. Further, the integrated image forming means in this specific example is included in the “forming means” of the present invention.

  As a second specific example, a method for obtaining the positional relationship between the plurality of three-dimensional images G(k) themselves, without using the integrated images P(k), will be described. When this method is applied as well, it is desirable to set the scanning regions R(k) in advance so that adjacent three-dimensional images have overlapping regions.

  First, an image region (preferably including the overlapping region) at the edge of each three-dimensional image G (k) is specified. The size of the edge portion is set in advance, for example. Next, the image area at the edge of the adjacent three-dimensional image G (k) is three-dimensionally aligned. At this time, it is desirable to refer to arrangement information of a plurality of three-dimensional images G. Further, this alignment process can be executed by aligning feature regions (blood vessel regions, etc.) in the image region. By determining the position of each three-dimensional image G (k) based on such alignment results, the positional relationship between the plurality of three-dimensional images G (k) can be obtained.

  In the formation process of the panoramic three-dimensional image data G, there is a concern that alignment accuracy deteriorates due to movement of the eyeball during measurement. In particular, as described above, there is the problem of positional deviation of the eyeball caused by rotation when the fixation position of the eye to be examined is changed. Hereinafter, methods for avoiding deterioration in alignment accuracy due to such factors will be described.

  First, a method for correcting a shift caused by the rotational movement of the eyeball according to the fixation target display mode will be described. When the fixation position of the eye E is moved in the horizontal direction (x direction), the eyeball may be displaced in the vertical direction (y direction) due to rotation. Conversely, when the fixation position of the eye E is moved in the vertical direction, the eyeball may be displaced in the horizontal direction due to rotation. The former case will be described below (the same applies to the latter).

  In order to grasp the state of rotation of the eye E, a fundus image Ef ′ (for example, an infrared moving image) of the eye E is acquired by the fundus camera unit 1A. The main controller 211 moves the projection position of the fixation target with respect to the eye E in the x direction. Thereby, fundus images Ef ′ before and after movement are obtained. The main control unit 211 extracts each frame before and after the movement of the fixation position from the moving image.

  The image analysis unit 232 analyzes each of these frames to identify the position of the fundus feature point (optic nerve head, macula, blood vessel, lesion, etc.). This processing can be automatically performed by performing threshold processing on the pixel value. Note that feature points in each frame can also be designated manually.

  Subsequently, the image analysis unit 232 calculates the displacement of the feature points in these two frames in the y direction. This process can be performed by counting the number of pixels between the feature points of the two frames.

  Further, the image analysis unit 232 calculates the rotation angle of the eye E based on the displacement of the fixation position in the x direction and the displacement of the feature point in the y direction. The rotation angle is the deviation angle of the eyeball's rotational position. The rotation angle can be calculated in the same manner as in conventional methods.
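
  As a simplified illustration only: if the vertical drift of the feature point is treated as the arc produced by rotation over the horizontal fixation travel, a small-angle estimate of the rotation angle follows from a single arctangent. The actual conventional calculation may be more elaborate.

```python
import math

def rotation_angle(dx_fixation, dy_feature):
    """Estimate the eyeball rotation angle (a simplified sketch).

    dx_fixation: horizontal displacement of the fixation position;
    dy_feature: resulting vertical displacement of a fundus feature
    point between the two frames (both in the same units, e.g.
    pixels converted to fundus distance).
    """
    return math.degrees(math.atan2(dy_feature, dx_fixation))
```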

  The main control unit 211 changes the projection position of the fixation target so as to cancel the calculated rotation angle. For example, when the rotation occurs upward, the projection position of the fixation target is changed so that the eye fixates lower by an amount corresponding to the rotation angle. Conversely, when the rotation occurs downward, the projection position of the fixation target is changed so that the eye fixates higher by an amount corresponding to the rotation angle.

  By performing such processing, the influence of the rotation of the eye E can be reduced, and images of various regions of the fundus oculi Ef can be suitably joined when a panoramic image is formed.

  Note that instead of capturing a moving image of the eye E, fundus images before and after changing the fixation position may be captured.

  Further, when the rotation angle of the eye E has been measured in advance, the measurement value can be stored beforehand in storage means such as the storage unit 212, and the projection position of the fixation target can be configured to be changed based on the measurement value.

  Next, a method for correcting the deviation caused by the rotational movement of the eyeball by changing the relative position between the eye to be examined and the optical system will be described. When this method is applied, a driving unit that rotates and moves at least one of the optical system mounted on the fundus camera unit 1A or the OCT unit 150 and the eye E to be examined is provided. This drive means is configured to include an actuator such as a stepping motor, for example, and changes the relative position between the optical system and the eye E under the control of the main controller 211.

  When rotating the optical system, for example, the driving means rotates the fundus camera unit 1A around a predetermined position (for example, the position of the eye E). When rotating the eye E, the driving means rotates the chin rest and forehead rest holding the subject's face around a predetermined position. Since the latter configuration may cause the subject discomfort and anxiety, the former configuration seems more desirable.

  The method for obtaining the rotation angle of the eye E is the same as in the above modification. The main control unit 211 changes the relative position between the eye E and the optical system so as to cancel the acquired rotation angle.

  According to such a modification, it is possible to reduce the influence of the rotation of the eye E to be examined, and it is possible to suitably connect images of various regions of the fundus oculi Ef when forming a panoramic image.

  In addition, when the rotation angle of the eye E has been measured in advance, the measurement value can be stored beforehand in storage means such as the storage unit 212, and the relative position between the eye E and the optical system can be configured to be changed based on the measurement value.

  Next, a modification for preventing the situation where the eye E moves and the fixation position shifts during measurement will be described. When a general fixation target is used, the subject may visually perceive the signal light LS during scanning, or the target used for focusing, and the eye may follow it. In order to prevent such a situation, it is possible, for example, to present a large cross-shaped fixation target.

  In the above embodiment, the optical path length difference between the optical path of the signal light LS and the optical path of the reference light LR is changed by changing the position of the reference mirror 174, but the method of changing the optical path length difference is not limited to this. For example, the optical path length difference can be changed by integrally moving the fundus camera unit 1A and the OCT unit 150 with respect to the eye E to change the optical path length of the signal light LS. It is also possible to change the optical path length difference by moving the eye E in the depth direction (z direction).

[Fundus image processing device]
An embodiment of a fundus image processing apparatus according to the present invention will be described. The fundus image processing apparatus according to this embodiment has a hardware configuration similar to that of a general computer (see FIG. 4). Further, the fundus image processing apparatus according to this embodiment has the same functional configuration as the arithmetic control apparatus 200 of the above-described embodiment (see FIG. 6). However, the components for controlling the fundus camera unit 1A and the OCT unit 150 are not necessary.

  The fundus image processing apparatus according to this embodiment is connected to an external device such as a fundus observation apparatus that forms an OCT image of the fundus or a storage device that stores the OCT image of the fundus. Examples of this storage device include NAS (Network Attached Storage). The fundus image processing apparatus is configured to be able to communicate with an external apparatus via a communication line such as a LAN.

  The fundus image processing apparatus includes receiving means that receives an image from an external apparatus. This receiving means includes, for example, the communication interface 209 shown in FIG. 4. In particular, the receiving means receives a plurality of three-dimensional images representing different parts of the fundus of the eye to be examined.

  As a modification of the receiving unit, a drive device that reads information recorded on a recording medium can be applied. In this modification, a plurality of three-dimensional images recorded in advance on a recording medium are read by a drive device and input to the fundus image processing device.

  Further, instead of receiving an input of a three-dimensional image of the fundus, it is also possible to receive a plurality of tomographic images of the fundus and form a three-dimensional image of the fundus based on these tomographic images. In this case, the fundus image processing apparatus is provided with the three-dimensional image forming unit 231 shown in FIG. 6.

  The fundus image processing apparatus includes analysis means that obtains the positional relationship between the plurality of received three-dimensional images by analyzing each of them, and that expresses each three-dimensional image in a single three-dimensional coordinate system based on this positional relationship. This analysis means is configured similarly to the image analysis unit 232 shown in FIG. 6, for example.

  Further, the fundus image processing apparatus includes a control unit that causes a display unit to display a plurality of three-dimensional images expressed in one three-dimensional coordinate system. The display unit 240A of the above embodiment is an example of the “display unit”. The control unit 210 of the above embodiment is an example of this “control unit”.

  According to such a fundus image processing apparatus, it is possible to create a panoramic image representing the three-dimensional form of the fundus oculi Ef.

  Note that the arbitrary configuration described in the embodiment of the fundus oculi observation device can be applied to the fundus image processing apparatus according to this embodiment.

[Program]
An embodiment of a program according to the present invention will be described. The program according to this embodiment is executed by a general computer. The control program 204a (see FIG. 4) of the above embodiment is an example of a program according to this embodiment.

  The program according to this embodiment causes a computer that stores in advance a plurality of three-dimensional images representing different parts of the fundus of the eye to be examined to function as follows. First, the computer analyzes each three-dimensional image to obtain a positional relationship between the plurality of three-dimensional images. Next, the computer expresses each three-dimensional image in a single three-dimensional coordinate system based on the obtained positional relationship. Then, the computer causes display means to display the plurality of three-dimensional images expressed in the single three-dimensional coordinate system.
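  Under the same assumptions as the sketches above, the flow described in this paragraph might be wired together as follows; compose_panorama and its overwrite-on-overlap policy are illustrative choices of this sketch, not the method itself:

    import numpy as np

    def compose_panorama(volumes, offsets):
        """Place each 3-D image (indexed [y, z, x]) at its (dy, dx)
        offset in one shared coordinate system and return the
        composite volume. offsets holds one (dy, dx) pair per volume,
        with (0, 0) for the reference image; in this sketch, regions
        where volumes overlap are simply overwritten by later ones."""
        depth = max(v.shape[1] for v in volumes)
        y0 = min(dy for dy, _ in offsets)
        x0 = min(dx for _, dx in offsets)
        y1 = max(dy + v.shape[0] for v, (dy, _) in zip(volumes, offsets))
        x1 = max(dx + v.shape[2] for v, (_, dx) in zip(volumes, offsets))
        pano = np.zeros((y1 - y0, depth, x1 - x0), dtype=volumes[0].dtype)
        for v, (dy, dx) in zip(volumes, offsets):
            pano[dy - y0:dy - y0 + v.shape[0], :v.shape[1],
                 dx - x0:dx - x0 + v.shape[2]] = v
        return pano

  The composite volume can then be handed to whatever display means the computer provides, completing the analyze-express-display sequence described above.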

  According to such a program, it is possible to cause the computer to create a panoramic image representing the three-dimensional form of the fundus oculi Ef.

  Note that the program according to this embodiment can be configured to cause a computer to execute any of the processing described in the embodiment of the fundus oculi observation device.

  The program according to this embodiment can be stored in any recording medium that can be read by a drive device of a computer. For example, an optical disk or a magneto-optical disk (CD-ROM, DVD-RAM, DVD-ROM, MO, etc.) or a magnetic storage medium (hard disk, floppy (registered trademark) disk, ZIP, etc.) can be used as this recording medium. The program can also be stored in a storage device such as a hard disk drive or a memory, and can be transmitted through a network such as the Internet or a LAN.

  In the embodiments described above, a fundus oculi observation device, a fundus image processing apparatus, and a program for use in the ophthalmic field have been described. However, the gist of the present invention can be applied to other fields as well, namely fields in which image formation by OCT technology has been introduced, in particular fields in which three-dimensional images of an object to be measured are used. Specific examples include medical fields other than ophthalmology (dermatology, dentistry, etc.), biology, and industrial fields.

[Brief description of the drawings]
FIG. 1 is a schematic block diagram showing an example of the overall configuration of an embodiment of the fundus oculi observation device according to the present invention.
FIG. 2 is a schematic block diagram showing an example of the configuration of the scanning unit incorporated in the fundus camera unit in the embodiment of the fundus oculi observation device according to the present invention.
FIG. 3 is a schematic block diagram showing an example of the configuration of the OCT unit in the embodiment of the fundus oculi observation device according to the present invention.
FIG. 4 is a schematic block diagram showing an example of the hardware configuration of the arithmetic and control unit in the embodiment of the fundus oculi observation device according to the present invention.
FIG. 5 is a schematic block diagram showing an example of the configuration of the control system of the embodiment of the fundus oculi observation device according to the present invention.
FIG. 6 is a schematic block diagram showing an example of the configuration of the control system of the embodiment of the fundus oculi observation device according to the present invention.
FIG. 7 is a schematic diagram showing an example of the scanning mode of the signal light in the embodiment of the fundus oculi observation device according to the present invention. FIG. 7A illustrates an example of the scanning mode of the signal light when the fundus is viewed from the side on which the signal light is incident on the eye to be examined. FIG. 7B shows an example of the arrangement of scanning points on each scanning line.
FIG. 8 is a schematic diagram showing an example of the scanning mode of the signal light in the embodiment of the fundus oculi observation device according to the present invention and the form of the tomographic images formed along the scanning lines.
FIG. 9 is a flowchart showing an example of the operation of the embodiment of the fundus oculi observation device according to the present invention.
FIG. 10 is a schematic diagram showing an example of how a scanning region is set in the embodiment of the fundus oculi observation device according to the present invention.
FIG. 11 is a flowchart showing an example of the operation of the embodiment of the fundus oculi observation device according to the present invention.
FIG. 12 is a schematic diagram showing an example of the form of the panoramic three-dimensional image data formed by the embodiment of the fundus oculi observation device according to the present invention.
FIG. 13 is a schematic diagram showing an example of the information displayed by the embodiment of the fundus oculi observation device according to the present invention.
FIG. 14 is a schematic diagram showing an example of the form of a tomographic image formed by the embodiment of the fundus oculi observation device according to the present invention.
FIG. 15 is a schematic diagram showing an example of the result of the processing performed by the embodiment of the fundus oculi observation device according to the present invention.

Explanation of symbols

1 Fundus observation device
1A Fundus camera unit
140 LCD
141 Scanning unit
150 OCT unit
160 Low-coherence light source
174 Reference mirror
180 Spectrometer
184 CCD
200 Arithmetic and control unit
210 Control unit
211 Main control unit
212 Storage unit
213 Fixation position acquisition unit
220 Image forming unit
230 Image processing unit
231 Three-dimensional image forming unit
232 Image analysis unit
233 Integrated image forming unit
234 Blood vessel region specifying unit
235 Layer region specifying unit
236 Array specifying unit
237 Fundus image analysis unit
238 Three-dimensional coordinate system setting unit
240 User interface
240A Display unit

Claims (15)

  1. A fundus oculi observation device comprising:
    an optical system that divides light from a light source into signal light and reference light, and generates interference light by superimposing the signal light passing through the fundus of an eye to be examined and the reference light passing through a reference object; and
    detecting means for detecting the interference light,
    the fundus oculi observation device forming a three-dimensional image of the fundus based on a detection result of the detecting means, and further comprising:
    analyzing means for analyzing a plurality of three-dimensional images representing different parts of the fundus to obtain a positional relationship between the plurality of three-dimensional images, and for expressing each of the plurality of three-dimensional images in a single three-dimensional coordinate system based on the positional relationship;
    display means; and
    control means for causing the display means to display the plurality of three-dimensional images expressed in the single three-dimensional coordinate system.
  2. The analyzing means includes image region specifying means for specifying, in each of the plurality of three-dimensional images, an image region corresponding to a predetermined part of the fundus, and obtains the positional relationship between the plurality of three-dimensional images by obtaining a positional relationship between the plurality of specified image regions;
    The fundus oculi observation device according to claim 1.
  3. The image region specifying means specifies a blood vessel region corresponding to a blood vessel of the fundus as the image region,
    and the analyzing means obtains the positional relationship of the plurality of three-dimensional images in the fundus surface direction by connecting the plurality of specified blood vessel regions;
    The fundus oculi observation device according to claim 2.
  4. The image region specifying means specifies a layer region corresponding to a predetermined layer of the fundus as the image region,
    and the analyzing means obtains the positional relationship of the plurality of three-dimensional images in the fundus depth direction by connecting the plurality of specified layer regions;
    The fundus oculi observation device according to claim 2.
  5. Further comprising forming means for forming a two-dimensional image representing the morphology of the surface of the fundus,
    wherein the analyzing means obtains, as the positional relationship, positions of the plurality of three-dimensional images with respect to the two-dimensional image, and expresses each of the plurality of three-dimensional images in a three-dimensional coordinate system composed of the two-dimensional coordinate system in the fundus surface direction in which the two-dimensional image is defined and a coordinate axis in the fundus depth direction orthogonal to the two-dimensional coordinate system;
    The fundus oculi observation device according to claim 1.
  6. The forming means includes imaging means for imaging the surface of the fundus by irradiating the fundus with illumination light and detecting the reflected light from the fundus, thereby forming the two-dimensional image;
    The fundus oculi observation device according to claim 5.
  7. The analyzing means includes integrated image forming means for forming a plurality of integrated images by integrating each of the plurality of three-dimensional images in the fundus depth direction, and obtains the positional relationship of the plurality of three-dimensional images in the fundus surface direction by obtaining a position of each of the plurality of integrated images in the two-dimensional image;
    The fundus oculi observation device according to claim 6.
  8. The forming means includes integrated image forming means for forming, as the two-dimensional image, a plurality of integrated images by integrating each of the plurality of three-dimensional images in the fundus depth direction,
    and the analyzing means obtains the positional relationship of the plurality of three-dimensional images in the fundus surface direction by obtaining the positional relationship of the plurality of integrated images;
    The fundus oculi observation device according to claim 5.
  9. The analyzing means obtains the positional relationship between the plurality of three-dimensional images by analyzing image regions at edge portions of the plurality of three-dimensional images and aligning these image regions;
    The fundus oculi observation device according to claim 1.
  10. The optical system includes projection means for projecting a fixation target onto the eye to be examined,
    The control means includes acquiring means for acquiring fixation position information of the eye to be examined when the signal light is applied to the fundus, and storage means for storing the fixation position information in association with the three-dimensional image of the fundus based on the signal light,
    and the analyzing means includes array specifying means for specifying an array of the plurality of three-dimensional images based on the fixation position information stored in association with each of the plurality of three-dimensional images, and obtains the positional relationship of the plurality of three-dimensional images based on the specified array;
    The fundus oculi observation device according to claim 1.
  11. The optical system includes projection means for projecting a fixation target onto the eye to be examined,
    when forming the plurality of three-dimensional images, the control means controls the projection means to change the projection position of the fixation target so that adjacent three-dimensional images include overlapping regions,
    and the analyzing means obtains the positional relationship between the plurality of three-dimensional images by analyzing the overlapping regions of adjacent three-dimensional images and aligning the images of the overlapping regions;
    The fundus oculi observation device according to claim 1.
  12. Further comprising forming means for forming a two-dimensional image representing the morphology of the surface of the fundus,
    wherein the optical system includes projection means for projecting a fixation target onto the eye to be examined,
    when the projection position of the fixation target with respect to the eye to be examined is changed, the analyzing means detects a rotation angle of the eye to be examined based on two two-dimensional images formed by the forming means before and after the change,
    and the control means controls the projection means to change the projection position of the fixation target so as to cancel the rotation angle;
    The fundus oculi observation device according to claim 1.
  13. Further comprising storage means for storing in advance a rotation angle of the eye to be examined,
    wherein the optical system includes projection means for projecting a fixation target onto the eye to be examined,
    and the control means controls the projection means to change the projection position of the fixation target so as to cancel the rotation angle;
    The fundus oculi observation device according to claim 1.
  14. Further comprising:
    forming means for forming a two-dimensional image representing the morphology of the surface of the fundus; and
    driving means for changing a relative position between the optical system and the eye to be examined,
    wherein, when the projection position of the fixation target with respect to the eye to be examined is changed, the analyzing means detects a rotation angle of the eye to be examined based on two two-dimensional images formed by the forming means before and after the change,
    and the control means controls the driving means to change the relative position of the optical system and the eye to be examined so as to cancel the rotation angle;
    The fundus oculi observation device according to claim 1.
  15. Further comprising:
    storage means for storing in advance a rotation angle of the eye to be examined; and
    driving means for changing a relative position between the optical system and the eye to be examined,
    wherein the control means controls the driving means to change the relative position of the optical system and the eye to be examined so as to cancel the rotation angle;
    The fundus oculi observation device according to claim 1.

