WO2010113459A1 - Ophthalmic Observation Apparatus (眼科観察装置) - Google Patents

Ophthalmic Observation Apparatus (眼科観察装置)

Info

Publication number
WO2010113459A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, light, photographed image, pair, scanning
Application number
PCT/JP2010/002240
Other languages: English (en), French (fr), Japanese (ja)
Inventor
町田和敏
Original Assignee
株式会社トプコン (Topcon Corporation)
Application filed by 株式会社トプコン (Topcon Corporation)
Publication of WO2010113459A1

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 — Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 — Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/102 — Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for optical coherence tomography [OCT]

Definitions

  • The present invention relates to an ophthalmologic observation apparatus that forms a tomographic image of an eye to be examined using optical coherence tomography and that irradiates the eye with illumination light.
  • In recent years, optical coherence tomography, which forms an image representing the surface form and internal form of an object to be measured using a light beam from a laser light source or the like, has attracted attention.
  • Unlike an X-ray CT apparatus, optical coherence tomography is not invasive to the human body, and it is therefore expected to find applications particularly in the medical and biological fields.
  • Patent Document 1 discloses an apparatus to which optical coherence tomography is applied.
  • In this apparatus, the measuring arm scans the object with a rotary turning mirror (galvano mirror), a reference mirror is installed on the reference arm, and an interferometer is provided at the exit so that the intensity of the interference light between the beams from the measuring arm and the reference arm can be analyzed by a spectrometer.
  • The reference arm is configured to change the phase of the reference light beam stepwise by discontinuous values.
  • The apparatus of Patent Document 1 uses the so-called Fourier domain OCT (Fourier domain optical coherence tomography) technique.
  • In this technique, a low-coherence light beam is irradiated onto the object to be measured, the reflected light and the reference light are superimposed to generate interference light, and the spectral intensity distribution of the interference light is acquired and Fourier-transformed, thereby imaging the form of the object in the depth direction.
  • This type of technique is also referred to as spectral domain OCT.
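The Fourier domain principle above can be illustrated with a minimal numerical sketch (not from the patent; the sampling grid and reflector depth bin are hypothetical): a reflector at a given depth produces a cosine fringe in the spectral intensity distribution, and a Fourier transform along the spectral axis recovers the depth profile (A-scan).

```python
import numpy as np

def depth_profile(spectrum: np.ndarray) -> np.ndarray:
    """Turn one spectral interferogram (intensity vs. wavenumber k,
    assumed evenly sampled in k) into an A-scan: reflectivity vs. depth z."""
    # Remove the DC (non-interferometric) background before transforming.
    ac = spectrum - spectrum.mean()
    # The magnitude of the Fourier transform along k localizes reflectors in z.
    return np.abs(np.fft.fft(ac))[: spectrum.size // 2]  # keep positive depths

# Synthetic example: a single reflector produces a cosine fringe in k,
# which the transform maps to a single peak at the matching depth bin.
k = np.arange(1024)
fringe_freq = 37                       # hypothetical reflector depth bin
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * fringe_freq * k / k.size)
a_scan = depth_profile(spectrum)
print(int(np.argmax(a_scan)))          # peak appears at bin 37
```

This is only the core transform step; a real spectral domain OCT pipeline also resamples the spectrum to be linear in wavenumber and applies dispersion compensation and windowing.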
  • The apparatus described in Patent Document 1 includes a galvanometer mirror that scans the light beam (signal light), and can thereby form an image of a desired measurement target region of the object. Since this apparatus scans the light beam in only one direction (the x direction) orthogonal to the depth direction (the z direction), the image it forms is a two-dimensional tomogram in the plane defined by the scanning direction (x direction) and the depth direction (z direction).
  • In addition, a technique is disclosed in which a plurality of two-dimensional tomographic images are formed by scanning the signal light in the horizontal direction (x direction) and the vertical direction (y direction), and three-dimensional tomographic information of the measurement range is acquired and imaged based on the plurality of tomographic images. Conceivable examples of such three-dimensional imaging include displaying a plurality of tomographic images arranged side by side in the vertical direction (referred to as stack data) and rendering a plurality of tomographic images to form a three-dimensional image.
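The stack-data idea described above can be sketched as follows (array sizes are hypothetical and not from the patent): the individual B-scans are arranged along the slow scan axis to form a volume, from which cross-sections in other orientations can then be extracted.

```python
import numpy as np

# Hypothetical dimensions: n_y B-scans, each n_x (scan direction) x n_z (depth).
n_y, n_x, n_z = 5, 64, 128
b_scans = [np.random.rand(n_x, n_z) for _ in range(n_y)]

# "Stack data": arrange the 2-D tomograms along the y axis to get a volume.
volume = np.stack(b_scans, axis=0)            # shape (n_y, n_x, n_z)

# Once stacked, cross-sections in other orientations come for free,
# e.g. an en-face slice at a fixed depth:
en_face = volume[:, :, n_z // 2]              # shape (n_y, n_x)
print(volume.shape, en_face.shape)
```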
  • Patent Documents 3 and 4 disclose other types of OCT apparatuses.
  • Patent Document 3 describes an OCT apparatus that sweeps the wavelength of the light applied to the object to be measured, acquires the spectral intensity distribution based on the interference light obtained by superimposing the reflected light of each wavelength and the reference light, and images the form of the object by performing a Fourier transform on it.
  • Such an OCT apparatus is called a swept source type.
  • The swept source type is an example of the Fourier domain type.
  • Patent Document 4 describes an OCT apparatus that irradiates the object to be measured with light having a predetermined beam diameter and analyzes the components of the interference light obtained by superimposing the reflected light and the reference light, thereby forming an image of the object in a cross section orthogonal to the traveling direction of the light. Such an OCT apparatus is called a full-field type or en-face type.
  • Patent Document 5 discloses an apparatus in which optical coherence tomography is applied to the ophthalmic field.
  • As a fundus observation apparatus that predates the application of optical coherence tomography, a fundus camera that irradiates the subject's eye with illumination light to photograph the fundus is known (see, for example, Patent Document 6).
  • An apparatus that acquires a tomographic image of the cornea using optical coherence tomography is also known (see, for example, Patent Document 7).
  • The cornea can also be imaged with a fundus camera, a slit lamp, or the like (see, for example, Patent Document 8).
  • The apparatus described in Patent Document 5 has the functions of both a fundus camera and an OCT apparatus.
  • The fundus image (photographed image of the fundus surface) is suitable for grasping the state of the fundus surface over a wide range.
  • The tomographic image, in turn, is suitable for grasping the layer structure of the fundus in detail.
  • This method has the advantage that the positional relationship between the captured image and the tomographic image can be obtained by image processing alone, but it also has the drawbacks that the data processing takes a long time and that a large number of tomographic images, enough to form a three-dimensional image, must be acquired.
  • The mounting position and mounting posture of the digital camera may change due to a minute difference between the form (size and shape) of the mounting part on the digital camera side and that of the mounting part on the ophthalmic observation apparatus side, or due to changes over time (backlash). As a result, a positional deviation occurs in the captured image, and the positional relationship between the captured image and the tomographic image changes. The positional relationship between images acquired in the past then differs from that between newly acquired images, and follow-up observation and treatment may not be performed effectively.
  • The present invention has been made to solve the above problems, and an object of the present invention is to provide an ophthalmologic observation apparatus that can grasp the positional relationship between a captured image and a tomographic image with high accuracy.
  • Another object of the present invention is to provide an ophthalmic observation apparatus capable of correcting the positional relationship between the fundus image and the tomographic image even when a relatively small number of tomographic images are acquired.
  • Another object of the present invention is to provide an ophthalmologic observation apparatus capable of correcting the positional relationship between a fundus image and a tomographic image by simple processing.
  • The invention according to claim 1 is an ophthalmologic observation apparatus comprising: a photographed image forming unit that includes irradiating means for irradiating the eye to be examined with illumination light and light receiving means for receiving the reflected illumination light from the eye, and that forms a photographed image of the eye based on the light reception result;
  • and a tomographic image forming unit that includes an optical system which splits low-coherence light into signal light and reference light and which generates and detects interference light between the signal light that has passed through the eye and the reference light that has passed through a reference light path, and that forms a tomographic image of the eye based on the detection result of the interference light.
  • The photographed image forming unit includes marking means that is provided at a position substantially conjugate to the light receiving means and whose mark, representing the position of the light receiving means, is imprinted in the photographed image.
  • The apparatus further comprises a storage unit that stores in advance reference position information based on the position of the mark imprinted in a reference photographed image formed with the light receiving means disposed at a reference position, and correcting means that, when a new photographed image and a tomographic image are formed, corrects the relative position between the photographed image and the tomographic image based on the position of the mark imprinted in the new photographed image and the stored reference position information.
  • The invention according to claim 2 is the ophthalmologic observation apparatus according to claim 1, wherein the photographed image forming unit includes a photographic mask that is provided at a position substantially conjugate to the light receiving means and that has a transmissive region transmitting the central part of the reflected light and a shielding region shielding its peripheral part, and the marking means includes a translucent portion formed in the shielding region of the photographic mask.
  • The correcting means generates the reference position information in advance based on the position of the mark imprinted in the reference photographed image as an image of the reflected light transmitted through the translucent portion and, when the new photographed image and the tomographic image are formed,
  • generates current position information representing the current position of the light receiving means based on the position of the mark of the translucent portion imprinted in the captured image, and corrects the relative position by comparing the current position information with the reference position information.
  • The invention according to claim 3 is the ophthalmologic observation apparatus according to claim 2, wherein the marking means includes a pair of the translucent portions provided at opposing positions across the transmissive region of the photographic mask,
  • and the correcting means obtains the intermediate position of the pair of marks corresponding to the pair of translucent portions based on a photographed image formed by the photographed image forming unit,
  • and corrects the relative position by translating the captured image and/or the tomographic image based on the displacement between the intermediate position based on the reference photographed image and the intermediate position based on the new photographed image.
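As a rough sketch of this translation correction (function name and pixel coordinates are hypothetical, not taken from the patent), the midpoint of the mark pair is computed in both the reference image and the new image, and their displacement gives the shift to compensate:

```python
import numpy as np

def translation_offset(ref_marks, new_marks):
    """Displacement between the midpoint of a pair of marks in the reference
    image and the midpoint of the same pair in a newly captured image
    (pixel coordinates; the argument layout is a hypothetical API)."""
    ref_mid = (np.asarray(ref_marks[0]) + np.asarray(ref_marks[1])) / 2.0
    new_mid = (np.asarray(new_marks[0]) + np.asarray(new_marks[1])) / 2.0
    # Translating the new image by the negative of this offset (or the
    # reference-frame data by the offset) aligns the two coordinate systems.
    return new_mid - ref_mid

# Example: a remount shifted the sensor by (+3, -2) pixels.
ref = [(100.0, 200.0), (300.0, 200.0)]
new = [(103.0, 198.0), (303.0, 198.0)]
offset = translation_offset(ref, new)   # midpoint shift of (+3, -2) pixels
print(offset[0], offset[1])
```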
  • The invention according to claim 4 is the ophthalmologic observation apparatus according to claim 2, wherein the marking means includes a pair of the translucent portions provided at opposing positions across the transmissive region of the photographic mask,
  • and the correcting means obtains the straight line connecting the pair of marks corresponding to the pair of translucent portions based on a photographed image formed by the photographed image forming unit,
  • and corrects the relative position by rotating the captured image and/or the tomographic image based on the angle formed between the straight line based on the reference photographed image and the straight line based on the new photographed image.
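The rotational correction can likewise be sketched (hypothetical function names; the 1-degree example mount rotation is invented for illustration): the angle of the line through the mark pair is measured in both images, and their difference is the rotation to undo.

```python
import math

def rotation_angle(ref_marks, new_marks):
    """Angle (radians) between the line through the mark pair in the
    reference image and the line through the same pair in the new image."""
    def line_angle(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    return line_angle(*new_marks) - line_angle(*ref_marks)

# Example: the camera mount rotated the sensor by 1 degree about the origin.
ref = [(100.0, 200.0), (300.0, 200.0)]            # horizontal reference line
theta = math.radians(1.0)
rotate = lambda p: (p[0] * math.cos(theta) - p[1] * math.sin(theta),
                    p[0] * math.sin(theta) + p[1] * math.cos(theta))
new = [rotate(p) for p in ref]
print(math.degrees(rotation_angle(ref, new)))     # recovers about 1 degree
```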
  • The invention according to claim 5 is the ophthalmologic observation apparatus according to claim 1, wherein the marking means includes a light emitting member provided at a peripheral portion of the reflected light or at a position outside the reflected light,
  • and the correcting means generates the reference position information in advance based on the position of the mark imprinted in the reference photographed image as an image of the light output from the light emitting member; further, when the new photographed image and the tomographic image are formed, it generates current position information representing the current position of the light receiving means based on the position of the mark imprinted in the captured image as an image of light from the light emitting member, and corrects the relative position by comparing the current position information with the reference position information.
  • The invention according to claim 6 is the ophthalmologic observation apparatus according to claim 5, wherein the marking means includes a pair of the light emitting members provided at opposing positions across the optical axis of the optical system that guides the reflected light,
  • and the correcting means obtains the intermediate position of the pair of marks corresponding to the pair of light emitting members based on a photographed image formed by the photographed image forming unit,
  • and corrects the relative position by translating the captured image and/or the tomographic image based on the displacement between the intermediate position based on the reference photographed image and the intermediate position based on the new photographed image.
  • The invention according to claim 7 is the ophthalmologic observation apparatus according to claim 5, wherein the marking means includes a pair of the light emitting members provided at opposing positions across the optical axis of the optical system that guides the reflected light,
  • and the correcting means obtains the straight line connecting the pair of marks corresponding to the pair of light emitting members based on a photographed image formed by the photographed image forming unit,
  • and corrects the relative position by rotating the captured image and/or the tomographic image based on the angle formed between the straight line based on the reference photographed image and the straight line based on the new photographed image.
  • The invention according to claim 8 is the ophthalmologic observation apparatus according to claim 1, wherein the marking means is capable of imprinting a plurality of sets of marks at different positions in a captured image,
  • the storage means stores in advance the reference position information based on the positions of the marks of the plurality of sets imprinted in the reference photographed image, and the correcting means corrects the relative position based on the position of a set of marks imprinted in the new photographed image and the reference position information.
  • The invention according to claim 9 is the ophthalmologic observation apparatus according to claim 1, wherein the photographed image forming unit further includes a variable power lens that moves along the optical axis of the optical system guiding the reflected light to change the photographing magnification,
  • and the correcting means matches the magnification of the reference photographed image and the magnification of the new photographed image based on the position of the variable power lens when the reference photographed image was formed and its position when the new photographed image was formed.
  • The invention according to claim 10 is the ophthalmologic observation apparatus according to claim 1, wherein the photographed image forming unit further includes a variable power lens that moves along the optical axis of the optical system guiding the reflected light to change the photographing magnification,
  • and the correcting means matches the magnification of the reference photographed image and the magnification of the new photographed image based on the size of the mark imprinted in the reference photographed image and the size of the mark imprinted in the new photographed image.
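Matching magnifications from mark sizes amounts to a simple ratio, sketched below (function name and pixel sizes are hypothetical, not from the patent): because the mark sits at a fixed position in the optical system, its apparent size scales with the photographing magnification.

```python
def magnification_ratio(ref_mark_size: float, new_mark_size: float) -> float:
    """Scale factor that brings the new image to the reference magnification,
    estimated from the apparent size of the same imprinted mark in each image."""
    return ref_mark_size / new_mark_size

# Example: a mark that spans 20 px in the reference image spans 25 px in the
# new image, so the new image should be scaled by 0.8 to match the reference.
scale = magnification_ratio(20.0, 25.0)
print(scale)
```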
  • The invention according to claim 11 is an ophthalmologic observation apparatus comprising: a photographed image forming unit that includes irradiating means for irradiating the eye to be examined with illumination light and light receiving means for receiving the reflected illumination light from the eye, and that forms a photographed image of the eye;
  • and a tomographic image forming unit that includes an optical system which splits low-coherence light into signal light and reference light and which generates and detects interference light between the signal light that has passed through the eye and the reference light that has passed through a reference light path,
  • together with scanning means that scans the signal light with respect to the eye, and that forms a tomographic image of the eye based on the detection result of the interference light.
  • The photographed image forming unit includes marking means that is provided at a position substantially conjugate to the light receiving means and whose mark, representing the position of the light receiving means, is imprinted in the photographed image.
  • The apparatus further comprises a storage unit that stores in advance reference position information based on the position of the mark imprinted in a reference photographed image formed with the light receiving means disposed at a reference position, and correcting means that, when a tomographic image is formed, corrects the position of the tomographic image based on the scanning mode of the signal light by the scanning means and the stored reference position information.
  • The invention according to claim 12 is the ophthalmologic observation apparatus according to claim 11, wherein the photographed image forming unit includes a photographic mask that is provided at a position substantially conjugate to the light receiving means and that has a transmissive region transmitting the central part of the reflected light and a shielding region shielding its peripheral part; the marking means includes a translucent portion formed in the shielding region of the photographic mask; and the correcting means generates the reference position information in advance based on the position of the mark imprinted in the reference photographed image as an image of the reflected light transmitted through the translucent portion and, when the tomographic image is formed, corrects its position based on the scanning mode of the signal light and the reference position information.
  • The invention according to claim 13 is the ophthalmologic observation apparatus according to claim 12, wherein the scanning means scans the signal light within a predetermined scanning region of the eye to be examined, the marking means includes a pair of the translucent portions formed in the shielding region of the photographic mask, and the correcting means
  • generates the reference position information by obtaining the intermediate position of the pair of marks, corresponding to the pair of translucent portions, imprinted in the reference photographed image; it further obtains the center position of the predetermined scanning region when the tomographic image is formed, and corrects the position by translating the tomographic image based on the displacement between the obtained center position and the intermediate position of the pair of marks.
  • The invention according to claim 14 is the ophthalmologic observation apparatus according to claim 12, wherein the scanning means scans the signal light along a predetermined scanning line of the eye to be examined, the marking means includes a pair of the translucent portions formed in the shielding region of the photographic mask, and the correcting means
  • generates the reference position information by obtaining the straight line connecting the pair of marks, corresponding to the pair of translucent portions, imprinted in the reference photographed image, and corrects the position by rotating the tomographic image based on the angle formed between the direction of the predetermined scanning line when the tomographic image is formed and the straight line.
  • The invention according to claim 15 is the ophthalmologic observation apparatus according to claim 11, wherein the marking means includes a light emitting member provided at a peripheral portion of the reflected light or at a position outside the reflected light,
  • and the correcting means generates the reference position information in advance based on the position of the mark imprinted in the reference photographed image as an image of light output from the light emitting member and, when the tomographic image is formed, corrects its position based on the scanning mode of the signal light and the reference position information.
  • The invention according to claim 16 is the ophthalmologic observation apparatus according to claim 15, wherein the scanning means scans the signal light within a predetermined scanning region of the eye to be examined, the marking means includes a pair of the light emitting members provided at opposing positions across the optical axis of the optical system that guides the reflected light, and the correcting means
  • generates the reference position information by obtaining the intermediate position of the pair of marks, corresponding to the pair of light emitting members, imprinted in the reference photographed image; it further obtains the center position of the predetermined scanning region when the tomographic image is formed, and corrects the position by translating the tomographic image based on the displacement between the obtained center position and the intermediate position of the pair of marks.
  • The invention according to claim 17 is the ophthalmologic observation apparatus according to claim 15, wherein the scanning means scans the signal light along a predetermined scanning line of the eye to be examined, the marking means includes a pair of the light emitting members provided at opposing positions across the optical axis of the optical system that guides the reflected light, and the correcting means
  • generates the reference position information by obtaining the straight line connecting the pair of marks, corresponding to the pair of light emitting members, imprinted in the reference photographed image, and corrects the position by rotating the tomographic image based on the angle formed between the direction of the predetermined scanning line when the tomographic image is formed and the straight line.
  • The invention according to claim 18 is the ophthalmologic observation apparatus according to claim 11, wherein the photographed image forming unit further includes a variable power lens that moves along the optical axis of the optical system guiding the reflected light and the signal light to change the photographing magnification,
  • and the correcting means matches the magnification of the reference photographed image and the magnification of the tomographic image based on the position of the variable power lens when the reference photographed image was formed and its position when the tomographic image was formed.
  • The invention according to claim 19 is the ophthalmologic observation apparatus according to claim 1, further comprising image processing means for erasing the mark imprinted in the photographed image formed by the photographed image forming unit,
  • and display means for displaying the photographed image from which the mark has been erased.
  • The invention according to claim 20 is the ophthalmologic observation apparatus according to claim 11, further comprising image processing means for erasing the mark imprinted in the photographed image formed by the photographed image forming unit,
  • and display means for displaying the photographed image from which the mark has been erased.
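The claims do not specify how the mark is erased before display; one simple possibility is neighborhood inpainting, sketched below with hypothetical names (this is an illustrative technique, not the patent's method): each marked pixel is replaced by the median of the unmarked pixels around it.

```python
import numpy as np

def erase_marks(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Remove imprinted marks by replacing masked pixels with the median of
    the unmasked pixels in a small neighborhood (a crude inpainting sketch)."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y, x in zip(*np.nonzero(mask)):
        # 5x5 window around the marked pixel, clipped to the image bounds.
        y0, y1 = max(0, y - 2), min(h, y + 3)
        x0, x1 = max(0, x - 2), min(w, x + 3)
        patch = image[y0:y1, x0:x1]
        keep = patch[~mask[y0:y1, x0:x1]]    # unmarked neighbors only
        if keep.size:
            out[y, x] = np.median(keep)
    return out

# Example: a uniform background with one bright mark pixel.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                      # the imprinted mark
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
cleaned = erase_marks(img, mask)
print(cleaned[2, 2])                   # restored to the background level
```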
  • As described above, the ophthalmic observation apparatus includes marking means for imprinting a mark representing the position of the light receiving means in the photographed image, and storage means for storing reference position information representing the reference position of the light receiving means. Further, when the ophthalmic observation apparatus forms a new captured image and tomographic image, it has correcting means that corrects the relative position between the captured image and the tomographic image based on the position of the mark imprinted in the captured image and the reference position information.
  • Therefore, the relative position between the captured image and the tomographic image can be corrected according to the deviation of the mounting position of the light receiving means, so that the positional relationship between the captured image and the tomographic image can be grasped with high accuracy.
  • In the conventional technique, the positions of the captured image and the tomographic image cannot be corrected unless enough tomographic images to form a three-dimensional image are acquired.
  • In contrast, even when only a small number of tomographic images are acquired, the positional relationship between the captured image and the tomographic image can be corrected by referring to the reference position information, unlike the conventional image processing.
  • The ophthalmic observation apparatus forms a tomographic image while scanning the signal light on the eye to be examined.
  • This ophthalmic observation apparatus also includes marking means that imprints a mark representing the position of the light receiving means in the captured image, and storage means that stores reference position information representing the reference position of the light receiving means.
  • In addition, the ophthalmic observation apparatus includes correcting means that, when the tomographic image is formed, corrects the position of the tomographic image based on the scanning mode of the signal light and the reference position information.
  • According to the present invention, it is possible to correct the relative positional shift between the fundus image and the tomographic image caused by a positional shift of the tomographic image, so that the positional relationship between the fundus image and the tomographic image can be grasped with high accuracy.
  • Moreover, position correction is performed by referring to the reference position information instead of by the conventional image processing; therefore, the positional relationship between the captured image and the tomographic image can be corrected even when only a small number of tomographic images are acquired.
  • The ophthalmic observation apparatus forms a tomographic image of the eye to be examined using optical coherence tomography.
  • The type of optical coherence tomography applicable to this ophthalmic observation apparatus is not limited to the Fourier domain type described in detail below; any type, such as the swept source type or the full-field type, may be used. Note that an image acquired by optical coherence tomography may be referred to as an OCT image.
  • In this embodiment, an ophthalmologic observation apparatus having substantially the same configuration as the apparatus disclosed in Patent Document 5, that is, an apparatus combining a Fourier domain OCT apparatus and a fundus camera, is described as an example. Even when other configurations are applied, the same operations and effects can be obtained by applying a configuration similar to that of this embodiment.
  • The configuration for photographing the eye to be examined is not limited to a fundus camera; any ophthalmic observation device configuration, such as a slit lamp or an SLO (Scanning Laser Ophthalmoscope), is applicable to the present invention.
  • The ophthalmic observation apparatus 1 includes a fundus camera unit 1A, an OCT unit 150, and an arithmetic and control unit 200.
  • The fundus camera unit 1A has an optical system that is substantially the same as that of a conventional fundus camera.
  • A fundus camera is a device that photographs the surface of the fundus to form a two-dimensional image (captured image).
  • The fundus camera is used, for example, for photographing fundus blood vessels.
  • The OCT unit 150 houses an optical system for acquiring an OCT image of the fundus.
  • The arithmetic and control unit 200 includes a computer that executes various arithmetic and control processes.
  • One end of a connection line 152 is attached to the OCT unit 150.
  • A connector 151 for connecting the connection line 152 to the fundus camera unit 1A is attached to the other end of the connection line 152.
  • An optical fiber 152a runs through the inside of the connection line 152 (see FIG. 3).
  • The OCT unit 150 and the fundus camera unit 1A are optically connected via the connection line 152.
  • The arithmetic and control unit 200 is connected to each of the fundus camera unit 1A and the OCT unit 150 via communication lines that transmit electrical signals.
  • The fundus camera unit 1A has an optical system that forms a captured image representing the form of the fundus surface by irradiating the eye E with illumination light and receiving the fundus reflection light.
  • Typical photographed images of the fundus surface include color images and monochrome images that depict the fundus surface, and fluorescence images that depict vascular dynamics (fluorescein fluorescence images, indocyanine green fluorescence images, etc.).
  • Like a conventional fundus camera, the fundus camera unit 1A is provided with an illumination optical system 100 and a photographing optical system 120.
  • The illumination optical system 100 and the photographing optical system 120 are examples of the "photographed image forming unit" of the present invention.
  • The illumination optical system 100 irradiates the fundus oculi Ef with illumination light, and is an example of the "irradiating means" of the present invention.
  • The imaging optical system 120 guides the fundus reflection light of the illumination light to the imaging devices 10 and 12.
  • The imaging optical system 120 also guides the signal light from the OCT unit 150 to the fundus oculi Ef and guides the signal light returning via the fundus oculi Ef back to the OCT unit 150.
  • As in a conventional fundus camera, the illumination optical system 100 includes an observation light source 101, a condenser lens 102, a photographing light source 103, a condenser lens 104, exciter filters 105 and 106, a ring translucent plate 107 (with a ring slit 107a), a mirror 108, an LCD (Liquid Crystal Display) 109, an illumination stop 110, a relay lens 111, a perforated mirror 112, and an objective lens 113.
  • The observation light source 101 outputs illumination light including wavelengths in the near-infrared region, for example in the range of about 700 nm to 800 nm. This near-infrared light is set to a shorter wavelength than the light used by the OCT unit 150 (described later).
  • The imaging light source 103 outputs illumination light including wavelengths in the visible region, for example in the range of about 400 nm to 700 nm.
  • The illumination light output from the observation light source 101 reaches the perforated mirror 112 via the condenser lenses 102 and 104, (the exciter filter 105 or 106,) the ring translucent plate 107, the mirror 108, the LCD 109, the illumination stop 110, and the relay lens 111. It is then reflected by the perforated mirror 112 and enters the eye E through the objective lens 113 to illuminate the fundus oculi Ef. Meanwhile, the illumination light output from the imaging light source 103 enters the eye E via the path from the condenser lens 104 to the objective lens 113 and illuminates the fundus oculi Ef.
  • the photographing optical system 120 includes an objective lens 113, a perforated mirror 112 (with hole 112a), a photographing aperture 121, barrier filters 122 and 123, a variable power lens 124, a relay lens 125, a photographing lens 126, a dichroic mirror 134, a photographing mask 127, a field lens 128, a half mirror 135, a relay lens 131, a dichroic mirror 136, a photographing lens 133, an imaging device 10, a photographing lens 137, a mount 138, an imaging device 12, a lens 139, and an LCD 140.
  • the photographing optical system 120 has substantially the same configuration as a conventional fundus camera.
  • the dichroic mirror 134 reflects the fundus reflection light (having a wavelength included in the range of about 400 nm to 800 nm) of the illumination light from the illumination optical system 100.
  • the dichroic mirror 134 transmits the signal light LS (for example, having a wavelength included in the range of about 800 nm to 900 nm; see FIG. 3) from the OCT unit 150.
  • the dichroic mirror 136 reflects near-infrared light (fundus reflected light of illumination light from the observation light source 101) and transmits visible light (fundus reflected light of illumination light from the imaging light source 103).
  • the LCD 140 displays a fixation target (internal fixation target) for fixing the eye E to be examined.
  • the light from the LCD 140 is collected by the lens 139, reflected by the half mirror 135, and reflected by the dichroic mirror 134 via the field lens 128. Further, this light is incident on the eye E through the photographing lens 126, the relay lens 125, the variable power lens 124, the aperture mirror 112 (the aperture 112a thereof), the objective lens 113, and the like. Thereby, the internal fixation target is projected onto the fundus oculi Ef.
  • the fixation direction of the eye E can be changed by changing the display position of the internal fixation target on the LCD 140.
  • As with a conventional fundus camera, the fixation direction of the eye E may be, for example, a direction for acquiring an image centered on the macula of the fundus oculi Ef, a direction for acquiring an image centered on the optic disc, or a direction for acquiring an image centered on the fundus center between the macula and the optic disc.
  • the imaging device 10 includes an imaging element 10a.
  • the imaging device 10 can particularly detect light having a wavelength in the near infrared region. That is, the imaging device 10 functions as an infrared television camera that detects near-infrared light.
  • the imaging device 10 detects near infrared light and outputs a video signal.
  • the imaging element 10a is an arbitrary imaging element (area sensor) such as a CCD (Charge Coupled Devices) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the imaging device 12 is a digital camera attached to the housing of the fundus camera unit 1A by a mount 138.
  • the mount 138 is a mounting part on the ophthalmic observation apparatus 1 side, and is configured to be able to engage with the mounting part on the imaging apparatus 12 side.
  • the mount 138 has a form according to a predetermined standard, and is configured so that various digital cameras can be attached / detached. Thereby, the user can use a desired digital camera as the imaging device 12.
  • the imaging device 12 includes an imaging element 12a.
  • the imaging device 12 can particularly detect light having a wavelength in the visible region. That is, the imaging device 12 functions as a television camera that detects visible light.
  • the imaging device 12 detects visible light and outputs a video signal.
  • the image sensor 12a is configured by an arbitrary image sensor (area sensor), similarly to the image sensor 10a.
  • the imaging device 12 (imaging element 12a) is an example of the “light receiving means” in the present invention.
  • the two imaging devices 10 and 12 can be used selectively.
  • For example, while the imaging device 10 is built into the ophthalmic observation apparatus 1, a digital camera with higher performance (such as a higher pixel count) than the imaging device 10 can be mounted on the mount as the imaging device 12. The imaging device 12 can then be used when acquiring images that require particularly high detail, and the imaging device 10 when acquiring other images.
  • both of the imaging devices 10 and 12 may be of a built-in type (see, for example, Patent Document 5), or both may be of a type that is attached to the housing of the fundus camera unit 1A.
  • the number of imaging devices provided in the ophthalmic observation apparatus 1 is arbitrary, and the number of built-in types and the number of wearing types are also arbitrary.
  • the touch panel monitor 11 displays the fundus oculi image Ef′ based on the video signal from the image sensor 10a. The video signal from the image sensor 10a is also sent to the arithmetic and control unit 200, while the video signal from the image sensor 12a is transmitted to the arithmetic and control unit 200 and to other devices (such as a display device or an image analysis device). Note that a fundus image based on the video signal from the image sensor 12a may also be displayed on the touch panel monitor 11.
  • the imaging mask 127 is a member that determines the imaging range of images captured by the imaging devices 10 and 12.
  • the photographing mask 127 is disposed at a position that is substantially conjugate with respect to each of the imaging elements 10a and 12a. In the state where the optical system is aligned and focused on the fundus oculi Ef, the photographing mask 127 is substantially conjugate with the fundus oculi Ef.
  • alignment and focusing can be performed in the same manner as a conventional fundus camera.
  • FIG. 2 shows a configuration example of the photographing mask 127.
  • the imaging mask 127 is provided with a transmission region 127a that transmits the central portion (of the beam cross section) of the fundus reflection light of the illumination light, and a shielding region 127b that blocks the peripheral portion thereof.
  • the transmission region 127a is formed of a transparent material or as an opening so as to transmit fundus reflection light.
  • the transmission region 127a is formed in a substantially circular shape, and is arranged so that the center position thereof passes through the optical axis of the photographing optical system 120.
  • region 127a serves to enable identification of the orientation of a photographed image.
  • the shielding region 127b is formed of a material having a light shielding action so as to shield the fundus reflection light, or the surface thereof is painted in a color having a light shielding action (for example, black).
  • the shield region 127b is provided with a pair of light-transmitting portions 127c and 127d formed at positions facing each other so as to sandwich the transmission region 127a.
  • the light transmitting portions 127c and 127d are an example of the “marking unit” of the present invention.
  • Each of the translucent portions 127c and 127d is formed of a transparent material or as an opening, similarly to the transmissive region 127a.
  • Each of the translucent portions 127c and 127d has an isosceles triangle shape with the inner side (the transmission region 127a side) as a vertex.
  • the pair of translucent portions 127c and 127d are arranged such that a straight line (line segment) connecting these inner vertices passes through the center position of the transmissive region 127a.
  • Alternatively, the pair of light transmitting portions 127c and 127d may be disposed so that the center position coincides with a predetermined internally dividing point of the line segment. Further, instead of the inner vertices, the positions of the pair of light transmitting portions 127c and 127d may be determined based on other characteristic positions of the light transmitting portions 127c and 127d (the center-of-gravity position, another vertex, the midpoint of a side, etc.).
  • In general, the light transmitting portions need only be provided at positions from which the center position and the line segment can be uniquely specified.
  • Since the imaging mask 127 is disposed almost conjugate with the imaging element 12a (its light receiving surface), the fundus reflection light transmitted through the light transmitting portions 127c and 127d forms an image on the imaging element 12a. Therefore, the imaging device 12 forms triangular images (marks) based on the fundus reflection light transmitted through the light transmitting portions 127c and 127d, together with the fundus image based on the fundus reflection light transmitted through the transmission region 127a.
  • the pair of marks are formed in a black background region (corresponding to the shielding region 127b) around the fundus image, and are formed at positions facing each other so as to sandwich the fundus image. In other words, due to the above conjugate relationship, the positional relationship between the pair of marks and the fundus image corresponds to the positional relationship between the transmissive region 127a and the translucent portions 127c and 127d.
  • the fundus camera unit 1A is provided with a scanning unit 141 and a lens 142.
  • the scanning unit 141 scans the irradiation position of the signal light LS output from the OCT unit 150 to the fundus oculi Ef.
  • the scanning unit 141 scans the signal light LS on the xy plane shown in FIG.
  • the scanning unit 141 is provided with, for example, a galvanometer mirror for scanning in the x direction and a galvanometer mirror for scanning in the y direction.
  • the configuration of the OCT unit 150 will be described with reference to FIG.
  • the OCT unit 150 includes an optical system similar to that of a conventional Fourier domain type OCT apparatus. That is, the OCT unit 150 divides the low-coherence light into reference light and signal light, and generates and detects interference light by causing the signal light passing through the fundus of the subject's eye to interfere with the reference light passing through the reference object. It has an optical system.
  • the detection result (detection signal) of the interference light is sent to the arithmetic and control unit 200.
  • Since the Fourier domain type is applied, the interferometer detects the spectral components of the generated interference light.
  • the low coherence light source 160 is a broadband light source that outputs a broadband low coherence light L0.
  • As the broadband light source, for example, a super luminescent diode (SLD) or a light emitting diode (LED) can be used.
  • the low coherence light L0 includes, for example, light having a wavelength in the near infrared region, and has a temporal coherence length of about several tens of micrometers.
  • the low coherence light L0 includes a wavelength longer than the illumination light (wavelength of about 400 nm to 800 nm) of the fundus camera unit 1A, for example, a wavelength in the range of about 800 nm to 900 nm.
  • the low coherence light L0 output from the low coherence light source 160 is guided to the optical coupler 162 through the optical fiber 161.
  • the optical fiber 161 is configured by, for example, a single mode fiber, a PM fiber (Polarization maintaining fiber), or the like.
  • the optical coupler 162 splits the low coherence light L0 into the reference light LR and the signal light LS.
  • the optical coupler 162 has both the function of splitting light (a splitter) and the function of superposing light (a coupler); here it is conventionally referred to as an “optical coupler”.
  • the reference light LR generated by the optical coupler 162 is guided by an optical fiber 163 made of a single mode fiber or the like and emitted from the end face of the fiber. Further, the reference light LR is converted into a parallel light beam by the collimator lens 171 and reflected by the reference mirror 174 through the glass block 172 and the density filter 173.
  • the reference light LR reflected by the reference mirror 174 passes through the density filter 173 and the glass block 172 again, is condensed on the fiber end surface of the optical fiber 163 by the collimator lens 171, and is guided to the optical coupler 162 through the optical fiber 163.
  • the glass block 172 and the density filter 173 act as delay means for matching the optical path lengths (optical distances) of the reference light LR and the signal light LS. Further, the glass block 172 and the density filter 173 function as dispersion compensation means for matching the dispersion characteristics of the reference light LR and the signal light LS.
  • the density filter 173 acts as a neutral density filter that reduces the amount of the reference light LR.
  • the density filter 173 is configured by, for example, a rotary ND (Neutral Density) filter.
  • the density filter 173 is rotationally driven by a drive mechanism (not shown) to change the amount of the reference light LR that contributes to the generation of the interference light LC.
  • the reference mirror 174 is moved in the traveling direction of the reference light LR (the direction of the double-headed arrow shown in FIG. 3) by a driving mechanism (not shown). Thereby, the optical path length of the reference light LR can be adjusted according to the axial length of the eye E and the working distance (the distance between the objective lens 113 and the eye E).
  • a polarizing element for adjusting the polarization state may be provided on the optical path (reference optical path) of the reference light LR.
  • the signal light LS generated by the optical coupler 162 is guided to the end of the connection line 152 by an optical fiber 164 made of a single mode fiber or the like.
  • the optical fiber 164 and the optical fiber 152a may be formed from a single optical fiber, or may be formed integrally by joining the respective end faces.
  • the signal light LS is guided through the optical fiber 152a into the fundus camera unit 1A. It then passes through the lens 142, the scanning unit 141, the dichroic mirror 134, the photographing lens 126, the relay lens 125, the variable magnification lens 124, the photographing aperture 121, the hole 112a of the aperture mirror 112, and the objective lens 113, enters the eye E, and irradiates the fundus oculi Ef. When the fundus oculi Ef is irradiated with the signal light LS, the barrier filters 122 and 123 are retracted from the optical path in advance.
  • the signal light LS incident on the eye E is imaged and reflected on the fundus oculi Ef.
  • the signal light LS is not only reflected by the surface of the fundus oculi Ef, but also reaches the deep region of the fundus oculi Ef and is scattered at the refractive index boundary. Therefore, the signal light LS passing through the fundus oculi Ef includes information reflecting the surface form of the fundus oculi Ef and information reflecting the state of backscattering at the refractive index boundary of the deep tissue of the fundus oculi Ef. This light may be simply referred to as “fundus reflected light of the signal light LS”.
  • the fundus reflection light of the signal light LS travels backward along the same path as the signal light LS directed toward the eye E and is condensed on the end surface of the optical fiber 152a. It then enters the OCT unit 150 through the optical fiber 152a and returns to the optical coupler 162 through the optical fiber 164.
  • the optical coupler 162 superimposes the signal light LS returned via the fundus oculi Ef and the reference light LR reflected by the reference mirror 174 to generate interference light LC.
  • the interference light LC is guided to the spectrometer 180 through an optical fiber 165 made of a single mode fiber or the like.
  • a spectrometer (spectrometer) 180 detects a spectral component of the interference light LC.
  • the spectrometer 180 includes a collimator lens 181, a diffraction grating 182, an imaging lens 183, and a CCD 184.
  • the diffraction grating 182 may be transmissive or reflective. Further, in place of the CCD 184, other light detection elements (line sensor or area sensor) such as CMOS may be used.
  • the interference light LC incident on the spectrometer 180 is converted into a parallel light beam by the collimator lens 181 and split (spectral decomposition) by the diffraction grating 182.
  • the split interference light LC is imaged on the light receiving surface of the CCD 184 by the imaging lens 183.
  • the CCD 184 detects each spectral component of the separated interference light LC and converts it into electric charges.
  • the CCD 184 accumulates this electric charge and generates a detection signal. Further, the CCD 184 sends this detection signal to the arithmetic and control unit 200.
  • Here a Michelson interferometer is used; however, any type of interferometer, such as a Mach-Zehnder type, may be used as appropriate.
  • the configuration of the arithmetic and control unit 200 will be described.
  • the arithmetic and control unit 200 analyzes the detection signal input from the CCD 184 and forms an OCT image of the fundus oculi Ef.
  • the arithmetic processing for this is the same as that of a conventional Fourier domain type OCT apparatus.
  • the arithmetic and control unit 200 controls each part of the fundus camera unit 1A and the OCT unit 150.
  • For the fundus camera unit 1A, the arithmetic and control unit 200 controls the output of illumination light by the observation light source 101 and the imaging light source 103, the insertion/retraction of the exciter filters 105 and 106 and the barrier filters 122 and 123 into/from the optical path, the operation of display devices such as the LCD 140, the movement of the illumination aperture 110 (control of its aperture value), the aperture value of the photographing aperture 121, the movement of the variable power lens 124 (control of magnification/angle of view), and the like. Further, the arithmetic and control unit 200 controls the scanning unit 141 to scan the signal light LS.
  • For the OCT unit 150, the arithmetic and control unit 200 controls the output of the low coherence light L0 by the low coherence light source 160, the movement of the reference mirror 174, the rotation of the density filter 173 (control of the amount by which the light quantity of the reference light LR is reduced), the charge accumulation time and charge accumulation timing of the CCD 184, the signal transmission timing, and the like.
  • the arithmetic and control unit 200 includes a microprocessor, a RAM, a ROM, a hard disk drive, a keyboard, a mouse, a display, a communication interface, and the like, like a conventional computer.
  • a computer program for controlling the ophthalmologic observation apparatus 1 is stored in the hard disk drive.
  • the arithmetic and control unit 200 may include a dedicated circuit board that forms an OCT image based on a detection signal from the CCD 184.
  • Control system: The configuration of the control system of the ophthalmologic observation apparatus 1 will be described with reference to FIG.
  • the control system of the ophthalmologic observation apparatus 1 is configured around the control unit 210 of the arithmetic and control unit 200.
  • the control unit 210 includes, for example, the aforementioned microprocessor, RAM, ROM, hard disk drive, communication interface, and the like.
  • the control unit 210 is provided with a main control unit 211 and a storage unit 212.
  • the main control unit 211 performs the various controls described above.
  • the storage unit 212 stores various data. Examples of data stored in the storage unit 212 include image data of an OCT image, image data of a fundus oculi image Ef ′, and eye information to be examined.
  • the eye information includes information about the subject such as patient ID and name, and information about the eye such as left / right eye identification information.
  • the main control unit 211 performs a process of writing data to the storage unit 212 and a process of reading data from the storage unit 212.
  • the reference position information 213 is stored in the storage unit 212.
  • the storage unit 212 is an example of the “storage unit” in the present invention.
  • the reference position information 213 is generated based on a captured image formed using the imaging device 12 (imaging device 12a) arranged at a predetermined reference position. This generation process is executed by the image processing unit 230.
  • An image (reference photographed image) T shown in FIG. 5 is photographed by the imaging device 12 arranged at the reference position.
  • the reference photographed image T does not need to depict an actual fundus image, and it is sufficient if a pair of marks F1 and F2 are depicted. Further, the reference photographed image T may be acquired by photographing the fundus of the model eye.
  • the reference photographed image T is acquired before a fundus image or a tomographic image to be corrected is formed, for example before shipment of the apparatus, during maintenance, when the apparatus is powered on, before starting an examination for each subject, or before starting an examination for each eye.
  • the reference position does not have to have a special meaning.
  • the reference position is the position of the imaging device 12 when the reference captured image T is acquired, that is, the mounting position or mounting posture of the imaging device 12 with respect to the mount 138.
  • the positional relationship between the fundus image and the tomographic image is corrected by comparing the position (reference position) of the imaging device 12 when the reference captured image T was acquired with the position (current position) of the imaging device 12 (or the imaging device 10) when the actual fundus image or tomographic image of the eye E is acquired.
  • the imaging device that acquires the reference captured image T and the imaging device that acquires the actual fundus image need not be the same device.
  • For example, when a dedicated imaging device called a tool camera or the like is used, the reference captured image T is acquired by the tool camera, and the actual fundus image is acquired by the imaging device used for examination.
  • the reference captured image T and the actual fundus image may be acquired by the same imaging device.
  • the image processing unit 230 obtains a line segment (reference line) H0 connecting the pair of marks F1 and F2.
  • This process can be executed as follows, for example. Note that the following processing is executed using an arbitrary two-dimensional coordinate system such as a coordinate system representing the position of the pixel forming the reference captured image T.
  • the pixel value of the reference photographed image T is analyzed, and the positions of the marks F1 and F2 in the reference photographed image T are specified.
  • Next, a characteristic position of each of the marks F1 and F2 (for example, a position corresponding to the above-described inner vertex, or the center-of-gravity position) is specified.
  • a reference line H0 connecting the feature position of the mark F1 and the feature position of the mark F2 is obtained.
  • the image processing unit 230 obtains an intermediate position (reference center) K0 of the line segment.
  • As described above, the intermediate position of the line segment connecting the two light transmitting portions 127c and 127d coincides with the center position of the transmission region 127a, and the imaging mask 127 and the imaging element 12a are arranged almost conjugate with each other. Therefore, the reference center K0, which is the middle position of the line segment connecting the two marks F1 and F2, substantially coincides with the center position of the (substantially circular) light image received through the transmission region 127a.
  • the image processing unit 230 sends the position information (coordinate values) of the reference line H0 and the reference center K0 thus obtained to the control unit 210.
  • the main control unit 211 stores the position information in the storage unit 212 as reference position information 213.
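The steps above (specify the marks' feature positions, connect them with the reference line H0, and take its midpoint as the reference center K0) can be sketched as follows; the coordinate values are hypothetical pixel positions, not values from this disclosure.

```python
# Sketch: derive the reference line H0 and reference center K0 from the
# feature positions of marks F1 and F2 (hypothetical pixel coordinates).
def reference_line_and_center(f1, f2):
    """f1, f2: (x, y) feature positions of the two marks in the
    reference photographed image T. Returns the line segment H0 (as its
    endpoint pair) and the reference center K0 (its midpoint)."""
    (x1, y1), (x2, y2) = f1, f2
    h0 = (f1, f2)                                # reference line H0
    k0 = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)      # reference center K0
    return h0, k0

h0, k0 = reference_line_and_center((10.0, 240.0), (630.0, 240.0))
```

Either the derived (H0, K0) or the raw mark coordinates could then be stored as the reference position information 213.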
  • the shooting magnification (shooting angle of view) when the reference shot image T is acquired may be included in the reference position information 213 and stored.
  • the photographing magnification is obtained from the position of the variable magnification lens 124 when the reference photographed image T is acquired.
  • the reference position information 213 is not limited to the above data form.
  • coordinate values representing the positions of the two marks F1 and F2 may be stored as the reference position information 213.
  • In this case, the process of obtaining the reference line H0 and the reference center K0 is executed at the time of actual photographing of the fundus oculi Ef.
  • Alternatively, the reference line H0 may be stored as the reference position information 213, and the reference center K0 obtained during actual photographing.
  • Likewise, the reference photographed image T itself may be stored as the reference position information 213, and the reference line H0 and the reference center K0 obtained during actual photographing.
  • Adjustment of the galvanometer mirrors of the scanning unit 141 will be described.
  • This adjustment operation is performed, for example, by placing a scanning adjustment scale on the front side of the objective lens 113.
  • For example, vertical lines and horizontal lines are provided on the scanning adjustment scale in a mesh (grid) pattern.
  • the scan adjustment scale is provided with a mark indicating the center position.
  • the scanning adjustment scale is arranged in front of the objective lens 113 so that the mark coincides with the optical axis of the photographing optical system 120, the horizontal line extends along the x direction, and the vertical line extends along the y direction. Be placed.
  • First, in a state where both galvanometer mirrors are in the neutral position (origin position: for example, the position when no driving voltage is applied), an adjustment is made so that the signal light LS is projected onto the center position of the scanning adjustment scale.
  • Next, the signal light LS is scanned in the x direction, and the position of the corresponding galvanometer mirror is adjusted so that the scanning line is parallel to the horizontal lines of the scale.
  • Similarly, the position of the other galvanometer mirror is adjusted so that its scanning line is parallel to the vertical lines of the scale. With the above operations, the scanning center and the scanning directions in the x and y directions are corrected.
  • the image forming unit 220 forms tomographic image data of the fundus oculi Ef based on the detection signal from the CCD 184.
  • This process includes processes such as noise removal (noise reduction), filter processing, and FFT (Fast Fourier Transform) as in the conventional Fourier domain type optical coherence tomography.
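As a minimal illustration of the Fourier-transform step named above (not the actual processing pipeline of this apparatus, which also includes noise reduction, filtering, and related corrections), a depth reflectivity profile can be sketched as the FFT magnitude of the detected spectrum:

```python
import numpy as np

def a_scan(spectrum):
    """Form a one-sided depth reflectivity profile (A-scan) from a 1-D
    array of spectral interference intensities."""
    spectrum = spectrum - spectrum.mean()    # crude DC-term removal
    profile = np.abs(np.fft.fft(spectrum))   # FFT -> depth profile
    return profile[: len(profile) // 2]      # keep one side (real input)

# A single-frequency spectral fringe maps to a single depth bin.
k = np.arange(1024)
profile = a_scan(1.0 + np.cos(2 * np.pi * 32 * k / 1024))
```

In this sketch, the fringe frequency (32 cycles across the sensor) determines the depth bin where the peak appears.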
  • the image forming unit 220 includes, for example, the above-described circuit board and communication interface.
  • the image forming unit 220, together with the optical system in the OCT unit 150 and the optical system in the fundus camera unit 1A that guides the signal light LS, constitutes the “tomographic image forming unit” of the present invention.
  • In this description, “image data” and the “image” presented based on that image data may be identified with each other.
  • the image processing unit 230 performs various types of image processing and analysis processing on the image formed by the image forming unit 220. For example, the image processing unit 230 executes various correction processes such as image brightness correction and dispersion correction. In addition, the image processing unit 230 executes the above-described processing for the reference captured image T.
  • the image processing unit 230 forms image data of a three-dimensional image of the fundus oculi Ef by executing interpolation processing for interpolating pixels between tomographic images formed by the image forming unit 220.
  • the image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system.
  • An example of image data of a three-dimensional image is image data composed of three-dimensionally arranged voxels. Such image data is called volume data or voxel data.
  • When displaying an image based on volume data, the image processing unit 230 performs rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) on the volume data to form image data of a pseudo three-dimensional image as viewed from a specific line-of-sight direction.
  • Stack data of a plurality of tomographic images is another example of image data of a three-dimensional image.
  • the stack data is image data obtained by three-dimensionally arranging a plurality of tomographic images obtained along a plurality of scanning lines, based on the positional relationship of the scanning lines. That is, the stack data is image data obtained by expressing a plurality of tomographic images, originally defined by individual two-dimensional coordinate systems, using one three-dimensional coordinate system (that is, by embedding them in one three-dimensional space).
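A minimal sketch of building such stack data, with a hypothetical scan geometry (real tomograms would come from the image forming unit 220):

```python
import numpy as np

# Hypothetical geometry: 5 scanning lines, each tomogram 8 (depth) x 16 (x).
n_lines, depth, width = 5, 8, 16
tomograms = [np.full((depth, width), float(i)) for i in range(n_lines)]

# Embed the individually 2-D tomograms in one 3-D coordinate system:
# axis 0 orders the scanning lines by their y position, while axes 1 and 2
# keep each image's original depth and x coordinates.
stack = np.stack(tomograms, axis=0)
```

Interpolating pixels between adjacent tomograms in this stack would yield the volume data described above.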
  • the image processing unit 230 is provided with a displacement correction unit 231 and a rotation correction unit 232.
  • the image processing unit 230 including the correction units 231 and 232 is an example of the “correction unit” of the present invention.
  • the displacement correction unit 231 obtains an intermediate position between the pair of marks corresponding to the pair of translucent units 127c and 127d based on the image captured by the imaging device 12 (or the imaging device 10). For the reference photographed image T, the reference center K0 of the marks F1 and F2 is obtained as described above.
  • the intermediate position K represents the center position of an image depicting the surface of the fundus oculi Ef in the fundus oculi image Ef ′.
  • the intermediate position K may be referred to as the image center K.
  • This rectangular scanning region R corresponds to a three-dimensional scan (described later).
  • In the three-dimensional scan, the signal light LS is scanned along a plurality of scanning lines, each extending linearly in the horizontal direction (x direction), and the plurality of scanning lines are set side by side in the vertical direction (y direction).
  • the horizontal direction corresponds to the x direction
  • the vertical direction corresponds to the y direction.
  • the scanning region of the signal light LS is generally narrower than the acquisition range of the fundus oculi image Ef ′.
  • the scanning region R in FIG. 6 has a square shape of, for example, 6 mm × 6 mm.
  • the displacement correcting unit 231 translates the fundus image Ef ′ and / or the tomographic image based on the displacement between the reference center K0 and the image center K, and corrects the relative positions of these two images. This process will be described with reference to FIG.
  • the XY coordinate system shown in FIG. 7 is the above-described two-dimensional coordinate system such as a coordinate system representing the positions of pixels forming an image.
  • When the magnifications of the two images differ, the image processing unit 230 enlarges/reduces one or both images to adjust their scales. Thereby, the magnifications of both images are matched.
  • the imaging magnification of each of the images T and Ef ′ is obtained from the position of the variable magnification lens 124 when the images are acquired, for example.
  • Alternatively, the size of a mark (a physical quantity representing the size of the mark, such as area, height, or side length) imprinted on each of the images T and Ef′ can be obtained, and the magnifications of the two images matched so that the mark sizes on both images become equal.
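As an illustrative sketch of this mark-based magnification matching (the function names are hypothetical, not from the patent), the linear scale factor can be taken as the ratio of a linear mark size, such as height or side length, measured in the two images, and then applied to image coordinates about a fixed origin:

```python
# Illustrative sketch of mark-based magnification matching; names are
# hypothetical and a linear size measure (height, side length) is assumed.
def magnification_scale(mark_size_ref, mark_size_img):
    """Scale factor for the fundus image so its mark size matches the reference."""
    if mark_size_img <= 0:
        raise ValueError("mark size must be positive")
    return mark_size_ref / mark_size_img

def scale_point(p, s, origin=(0.0, 0.0)):
    """Scale a point about an origin (e.g. the image center)."""
    return (origin[0] + s * (p[0] - origin[0]),
            origin[1] + s * (p[1] - origin[1]))
```

If the mark size is measured as an area rather than a linear quantity, the square root of the ratio would be used as the linear scale factor instead.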
  • The displacement Δ = (ΔX, ΔY) represents the displacement in the XY plane of the image center K of the fundus image Ef′ with respect to the reference center K0 of the reference captured image T.
  • the coordinate value (X, Y) of the image center K is an example of the “current position information” in the present invention.
  • The current position information is information representing the current position of the imaging device 12; such information is needed because the relative position of the imaging element 12a and the light transmitting portions 127c and 127d changes depending on the mounting position and mounting posture of the imaging device 12.
  • The displacement correction unit 231 translates the fundus oculi image Ef′ so as to cancel out this displacement Δ; that is, the fundus oculi image Ef′ is moved by (−ΔX, −ΔY). Since the position of the scanning unit 141 has been adjusted as described above, the reference center K0 and the scanning center (the center position of the scanning region R) coincide. Therefore, by this displacement correction, the image center K comes to coincide with the scanning center, thereby correcting the relative positions of the fundus image Ef′ and each tomographic image in the X and Y directions.
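The translation step above amounts to: compute the midpoint of the pair of marks in each image, take the difference of the two midpoints as the displacement Δ, and shift the image by its negation. A minimal sketch under that reading (function names are hypothetical, not from the patent):

```python
# Hedged sketch of the displacement correction; not the patent's actual code.
def midpoint(p1, p2):
    """Intermediate position of a pair of marks (e.g. F1/F2 or M1/M2)."""
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

def displacement(ref_center, image_center):
    """Displacement (dX, dY) of the image center K from the reference center K0."""
    return (image_center[0] - ref_center[0], image_center[1] - ref_center[1])

def translate_to_cancel(p, disp):
    """Move a point by (-dX, -dY), cancelling the displacement."""
    return (p[0] - disp[0], p[1] - disp[1])
```

Applying `translate_to_cancel` to every pixel position of the fundus image moves its image center K onto the reference center K0.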
  • the rotation correcting unit 232 obtains a straight line (line segment) connecting a pair of marks corresponding to the pair of translucent units 127c and 127d based on an image captured by the imaging device 12 (or the imaging device 10).
  • If this straight line has already been calculated by the displacement correction unit 231, the rotation correction unit 232 may use it as it is; conversely, if the rotation correction unit 232 obtains the straight line first, the displacement correction unit 231 may use that result.
  • the reference line H0 connecting the marks F1 and F2 is obtained as described above.
  • the position of the center line H is an example of “current position information” in the present invention.
  • The rotation correction unit 232 obtains the angle (intersection angle) θ formed by the reference line H0 and the center line H. The intersection angle θ can be easily calculated by expressing both straight lines in the XY coordinate system.
  • The rotation correction unit 232 rotates the fundus oculi image Ef′ so as to cancel the intersection angle θ; that is, the fundus oculi image Ef′ is rotated by −θ.
  • The rotation center at this time is, for example, the image center K. Since the position of the scanning unit 141 has been adjusted as described above, the reference line H0 is parallel to the x direction of scanning. Therefore, by this rotation correction, the center line H of the fundus oculi image Ef′ comes to coincide with the cross-sectional direction of the tomographic image along the x direction, and the relative rotational position of the two images is corrected.
  • Rotation correction can similarly be applied to a tomographic image along a direction other than the x direction. A tomographic image in any direction has its cross section in the xy plane, so its cross-sectional direction can be expressed in the xy coordinate system; moreover, because rotation correction makes the x direction coincide with the X direction and the y direction with the Y direction, the cross-sectional direction can also be expressed in the XY coordinate system, and rotation correction can be performed accordingly. For example, for a tomographic image along the y direction, rotation correction is performed so that its cross-sectional direction becomes orthogonal to the reference line H0.
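The rotation correction can be sketched as follows: take the angle of the line through each pair of marks (via `atan2`), difference the two angles to get the intersection angle θ, and rotate image points by −θ about the image center K. The helper names below are illustrative assumptions, not the patent's implementation:

```python
import math

# Illustrative sketch of the rotation correction; names are not from the patent.
def line_angle(p1, p2):
    """Angle (radians, from the X axis) of the line through a pair of marks."""
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def intersection_angle(ref_pair, cur_pair):
    """Crossing angle between the reference line H0 and the center line H."""
    return line_angle(*cur_pair) - line_angle(*ref_pair)

def rotate_about(p, center, theta):
    """Rotate a point about a center (e.g. the image center K) by theta."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    return (center[0] + c * dx - s * dy,
            center[1] + s * dx + c * dy)
```

Applying `rotate_about(p, K, -theta)` to every pixel position of the fundus image cancels the intersection angle.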
  • The magnifications of the tomographic image and the reference captured image T can be adjusted based on the position of the variable magnification lens 124 when the signal light LS was irradiated onto the eye E and its position when the reference captured image T was acquired. Thereby, the magnifications of the fundus oculi image Ef′ and the tomographic image can be matched.
  • It is also possible to adjust the magnifications of the tomographic image and the reference photographed image T so that the size of a characteristic part of the fundus oculi Ef depicted in the tomographic image equals the size of that part depicted in the reference photographed image T.
  • the position correction of the tomographic image has been described.
  • the position correction can be similarly performed on a three-dimensional image based on a plurality of tomographic images and a tomographic image of an arbitrary cross section.
  • the image processing unit 230 includes, for example, the above-described microprocessor, RAM, ROM, hard disk drive, circuit board, and the like.
  • the display unit 240 includes a display.
  • the operation unit 250 includes input devices, such as a keyboard and a mouse, and operation devices.
  • the operation unit 250 may include various buttons and keys provided on the housing of the ophthalmologic observation apparatus 1 or on the outside.
  • the display unit 240 and the operation unit 250 need not be configured as individual devices.
  • a device in which the display unit 240 and the operation unit 250 are integrated, such as a touch panel LCD, can be used.
  • the scanning mode of the signal light LS by the ophthalmic observation apparatus 1 includes, for example, a horizontal scan, a vertical scan, a cross scan, a radiation scan, a circle scan, a concentric scan, and a spiral (vortex) scan. These scanning modes are selectively used as appropriate in consideration of the observation site of the fundus, the analysis target (such as retinal thickness), the time required for scanning, the precision of scanning, and the like.
  • the horizontal scan is to scan the signal light LS in the horizontal direction (x direction).
  • the horizontal scan also includes an aspect in which the signal light LS is scanned along a plurality of horizontal scanning lines arranged in the vertical direction (y direction). In this aspect, it is possible to arbitrarily set the interval between adjacent scanning lines. By sufficiently narrowing the interval between the scanning lines, the above-described three-dimensional image can be formed (three-dimensional scan). The same applies to the vertical scan.
  • the cross scan scans the signal light LS along a cross-shaped trajectory composed of two linear trajectories (straight trajectories) orthogonal to each other.
  • The radiation scan scans the signal light LS along a radial trajectory composed of a plurality of linear trajectories arranged at predetermined angular intervals.
  • the cross scan is an example of a radiation scan.
  • the circle scan scans the signal light LS along a circular locus.
  • The concentric scan scans the signal light LS along a plurality of circular trajectories arranged concentrically around a predetermined center position.
  • a circle scan is considered a special case of a concentric scan.
  • The spiral scan scans the signal light LS along a spiral (vortex) locus while the radius of rotation is gradually reduced (or increased).
  • With the configuration described above, the scanning unit 141 can scan the signal light LS independently in the x direction and the y direction, and can therefore scan the signal light LS along an arbitrary locus in the xy plane. Thereby, the various scanning modes described above can be realized.
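Each scanning mode is simply a parametric locus in the xy plane handed to the scanning unit as a sequence of target points. The sketch below generates sample points for a horizontal scan, a circle scan, and a spiral scan; the sizes, point counts, and function names are arbitrary illustrations, not values from the patent:

```python
import math

# Illustrative trajectory generators for three scanning modes.
def horizontal_scan(width, n_points, y=0.0):
    """Evenly spaced points along one horizontal (x-direction) scanning line."""
    step = width / (n_points - 1)
    return [(-width / 2 + i * step, y) for i in range(n_points)]

def circle_scan(radius, n_points):
    """Points along a circular locus centred on the origin."""
    return [(radius * math.cos(2 * math.pi * i / n_points),
             radius * math.sin(2 * math.pi * i / n_points))
            for i in range(n_points)]

def spiral_scan(r_max, turns, n_points):
    """Spiral locus whose radius grows gradually from 0 to r_max."""
    pts = []
    for i in range(n_points):
        t = i / (n_points - 1)
        a = 2 * math.pi * turns * t
        pts.append((r_max * t * math.cos(a), r_max * t * math.sin(a)))
    return pts
```

A three-dimensional scan then corresponds to stacking `horizontal_scan` lines at successive y values with a sufficiently small interval.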
  • By scanning the signal light LS in this way, a tomographic image in the depth direction (z direction) along each scanning line (scanning locus) can be formed.
  • the above-described three-dimensional image can be formed.
  • the ophthalmologic observation apparatus 1 has a function of forming a captured image (fundus image Ef ′) of the fundus oculi Ef and a function of forming a tomographic image of the fundus oculi Ef.
  • the photographing optical system 120 includes translucent portions 127c and 127d that are captured in the photographed image as marks indicating the position of the imaging device 12.
  • the translucent portions 127c and 127d are provided at substantially conjugate positions with respect to the imaging device 12 (imaging element 12a).
  • the storage unit 212 stores reference position information 213 based on the positions of the marks F1 and F2 copied in the reference photographed image T.
  • the reference position information 213 includes position information of the reference center K0 and the reference line H0.
  • the image processing unit 230 determines the fundus oculi image based on the positions of the marks M1 and M2 and the reference position information 213 imprinted in the fundus oculi image Ef ′. The relative position between Ef ′ and the tomographic image is corrected. At this time, the fundus image Ef ′ and the tomographic image are translated based on the image center K and the reference center K0 based on the marks M1 and M2. Further, the fundus image Ef ′ and the tomographic image are rotationally moved based on the center line H based on the marks M1 and M2 and the reference line H0.
  • Since the shift in the relative position between the fundus image Ef′ and the tomographic image caused by a displacement of the mounting position of the imaging device 12 can be corrected, the positional relationship between the two images can be grasped with high accuracy.
  • Conventionally, positional correction between the fundus image Ef′ and the tomographic image could not be performed unless enough tomographic images to form a three-dimensional image had been acquired; with this apparatus, the positional relationship between the fundus oculi image Ef′ and the tomographic image can be corrected even when only a single tomographic image is acquired.
  • Conventional position correction requires processing such as forming a three-dimensional image, generating an integrated image, and obtaining an image correlation between the integrated image and the fundus image. According to this apparatus, position correction can be performed by a simpler process, making it possible to shorten the processing time and save computational resources.
  • the mark is imprinted on the photographed image by providing a pair of light transmitting portions on the photographing mask, but the marking means is not limited to this. For example, it is possible to perform the same position correction using a single translucent part.
  • Here, a “set” is the unit of marks with which position correction can be executed: in the above embodiment the pair of marks forms one set, and when a single mark is used as described above, that one mark forms a set.
  • This modification allows a plurality of sets of marks usable for correction to be included, so that position correction can be performed even when some mark cannot be detected in the photographed image.
  • Cases in which a mark cannot be detected include, for example, the occurrence of flare, and the loss of part of the fundus reflected light due to turbidity of the eye to be examined or a small pupil.
  • This modification can be realized as follows, for example.
  • For example, a plurality of pairs of translucent parts as in the above embodiment are provided: a pair of translucent portions disposed at opposing positions in the vertical direction is provided together with a pair disposed at opposing positions in the horizontal direction.
  • reference position information is generated and stored based on the positions of each of a plurality of sets of marks imprinted in the reference photographed image. Then, the relative positions of the fundus image and the tomographic image are corrected based on the position of the set of marks imprinted on the fundus image of the eye to be examined and the reference position information.
  • the marking means is not limited to the translucent part provided on the photographing mask.
  • A light emitting member can be provided as a marking means at the periphery of the fundus reflected light or at a position outside it.
  • As the light emitting member, a light emitting element such as an LED can be used.
  • the image processing unit 230 generates reference position information based on the position of the mark imprinted in the reference photographed image as the light image output from the light emitting member. Further, current position information representing the current position of the imaging device 12 is generated based on the position of the mark that is captured in the fundus image of the eye to be examined as light from the light emitting member. Then, the current position information and the reference position information are compared to correct the relative position between the fundus image and the tomographic image.
  • the pair of light emitting members are provided at opposing positions across the optical axis of the optical system that guides fundus reflected light.
  • the image processing unit 230 obtains an intermediate position between the pair of marks corresponding to the pair of light emitting members based on the captured image. Then, based on the displacement between the intermediate position based on the reference photographed image and the intermediate position based on the fundus image of the eye to be examined, the fundus image or tomographic image is translated to correct the relative position of these images.
  • the image processing unit 230 obtains a straight line connecting a pair of marks corresponding to the pair of light emitting members based on the photographed image. Then, based on the angle formed by the straight line based on the reference photographed image and the straight line based on the fundus image of the eye to be examined, the fundus image and the tomographic image are rotated and the relative positions of these images are corrected.
  • the positional relationship between the fundus image and the tomographic image can be grasped with high accuracy. Even when a small number of tomographic images are acquired, the positional relationship between the fundus image and the tomographic image can be corrected. In addition, processing time can be shortened and resources required for computation can be saved.
  • Instead of using a photographing mask, the image presentation range may be restricted by image processing that blackens a peripheral region (an image region corresponding to the shielding region 127b) in the frame of the photographed image, that is, an electronic mask or the like (see, for example, Japanese Patent Application Laid-Open No. 2007-143671).
  • (Modification 4) A modification in which the position of the tomographic image is corrected by a method different from that of the above embodiment will be described.
  • a tomographic image is formed by scanning signal light within a predetermined scanning region of the eye to be examined (fundus). The scanning of the signal light is performed by the scanning means (scanning unit 141).
  • the captured image forming unit and the tomographic image forming unit have the same configuration as that of the above-described embodiment, for example.
  • a reference photographed image is formed in the same manner as in the above-described embodiments and modifications, and reference position information is generated based on the position of the mark imprinted on this reference photographed image.
  • the generated reference position information is stored in the storage unit (storage unit 212).
  • the marking means for reflecting the mark may be, for example, a light-transmitting part of the photographing mask or a light emitting member, as in the above-described embodiment or modification. Further, as long as it is possible to include a mark in a captured image, marking means other than these may be provided.
  • the correcting means (image processing unit 230) of the ophthalmologic observation apparatus corrects the position of the tomographic image based on the scanning mode of the signal light and the reference position information.
  • the reference photographed image T includes a pair of marks F1 and F2.
  • the image processing unit 230 obtains an intermediate position (reference center) K0 between the marks F1 and F2 and generates reference position information.
  • the image processing unit 230 obtains the center position (scanning center) C of the scanning region R.
  • The scanning center C may be obtained by determining the scanning region R based on the control content of the scanning of the signal light, or by determining the scanning region R based on the scanning locus reflected in an observation image of the fundus oculi Ef captured using the illumination light from the observation light source 101.
  • the image processing unit 230 translates the tomographic image based on the displacement between the scanning center C and the reference center K0. That is, the image processing unit 230 translates the tomographic image so that the scanning center C coincides with the reference center K0.
  • the image processing unit 230 obtains a straight line (reference line) H0 that connects the marks F1 and F2 in the reference photographed image T, and generates reference position information.
  • The image processing unit 230 obtains the angle (crossing angle) formed by the direction of the scanning line Ri and the reference line H0, and rotates the tomographic image based on this crossing angle.
  • the direction of the scanning line Ri may be obtained based on the control content of the scanning of the signal light, or the direction of the scanning line Ri may be obtained based on the scanning locus reflected in the observation image of the fundus oculi Ef.
  • As a result, the scanning center C coincides with the reference center K0 and the scanning line Ri becomes parallel to the reference line H0, whereby the position of the tomographic image is corrected (see FIG. 9).
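The whole of Modification 4 can be condensed into one transform: translate so the scanning center C lands on the reference center K0, then rotate about K0 to cancel the crossing angle between the scanning-line direction and the reference line H0. A hedged sketch (the function and its parameter names are assumptions for illustration, not the patent's implementation):

```python
import math

# Hedged sketch of Modification 4's tomogram position correction.
def correct_tomogram(points, scan_center, ref_center, scan_angle, ref_angle):
    """Translate scan_center onto ref_center, then rotate about ref_center
    so the scanning-line direction becomes parallel to the reference line."""
    dx = ref_center[0] - scan_center[0]
    dy = ref_center[1] - scan_center[1]
    theta = ref_angle - scan_angle          # cancels the crossing angle
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y in points:
        x, y = x + dx, y + dy               # translation step
        rx, ry = x - ref_center[0], y - ref_center[1]
        out.append((ref_center[0] + c * rx - s * ry,
                    ref_center[1] + s * rx + c * ry))
    return out
```

Here `points` would be the xy positions associated with the tomogram's scanning line, and the two angles are the scanning-line direction and the reference-line direction expressed in the same coordinate system.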
  • Since the shift of the relative position between the fundus image and the tomographic image caused by a positional shift of the tomographic image can be corrected, the positional relationship between the fundus image and the tomographic image can be grasped with high accuracy.
  • the positional relationship between the fundus image and the tomographic image can be corrected even when a small number (or one) of tomographic images is acquired.
  • the position of the tomographic image can be corrected even when only the tomographic image is acquired.
  • position correction can be performed by a simple process as compared with the case of performing conventional image processing, so that it is possible to shorten processing time and save resources for calculation.
  • the three-dimensional scan has been described in detail, but the same method can be applied to other scan modes.
  • Since a horizontal scan, a vertical scan, a cross scan, and a radiation scan are each composed of linear scanning lines, the method using an intersection angle is applicable to them.
  • As for the scanning center, a technique can be applied in which the midpoint of a single scanning line is regarded as the scanning center, or in which a region defined by the scanning lines is regarded as the scanning region and its center is taken as the scanning center. For example, the midpoint between the two ends of the single scanning line forming a horizontal scan can be specified and regarded as the scanning center, and correction performed based on the displacement between this scanning center and the reference center. Further, the four end positions of the two (mutually intersecting) scanning lines forming a cross scan can be specified, the square formed by connecting these four end positions defined as the scanning region, and the center position of this scanning region regarded as the scanning center; correction is then performed based on the displacement between the scanning center and the reference center.
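The two scanning-center rules just described can be sketched directly: the midpoint of a single scanning line's endpoints for a horizontal scan, and the center of the region spanned by the four endpoints of a cross scan. The helper names are illustrative, not from the patent:

```python
# Illustrative helpers for locating the scanning center; names not from the patent.
def line_midpoint(p_start, p_end):
    """Scanning center of a single linear scanning line (e.g. a horizontal scan)."""
    return ((p_start[0] + p_end[0]) / 2.0, (p_start[1] + p_end[1]) / 2.0)

def cross_scan_center(endpoints):
    """Center of the region spanned by the four end positions of a cross scan."""
    assert len(endpoints) == 4
    return (sum(p[0] for p in endpoints) / 4.0,
            sum(p[1] for p in endpoints) / 4.0)
```

Either center can then be compared against the reference center K0 to obtain the displacement used for correction.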
  • In the circle scan, the signal light is scanned along a circular scanning line. The region surrounded by the scanning line can therefore be regarded as the scanning region, and correction performed with its center position as the scanning center. It is also possible to obtain the tangent direction at a specific position (scanning start position, scanning end position, etc.) on the circular scanning line and perform correction based on the intersection angle between the tangent direction and the reference line. Further, a straight line connecting the specific position and the scanning center can be obtained, and correction performed based on the intersection angle between this straight line and the reference line.
  • In the spiral scan, the center position (scanning center) can be calculated from the spiral trajectory, and correction performed based on the displacement between the scanning center and the reference center. It is also possible to obtain a straight line connecting the scanning start position and the scanning end position of the spiral trajectory and perform correction based on the intersection angle between this straight line and the reference line.
  • the same correction can be performed for scanning modes other than the above. For example, if the scanning center and the direction of the scanning line can be defined based on a certain scanning mode, the scanning mode can be corrected by the same method as described above.
  • In general, if a characteristic target (a target expressed as a physical quantity such as position, direction, area, or length) can be defined based on the scanning mode of the signal light, correction can be performed with reference to that target. In that case, reference position information corresponding to the characteristic target to be applied is acquired in advance based on the mark.
  • correction can be performed by regarding the position of the center of gravity of the region surrounded by the scanning line as the scanning center.
  • When a plurality of pieces of reference position information are stored, identification information of the scanning mode is associated with each piece, so that the reference position information can be selectively used based on the identification information of the designated scanning mode.
  • Since the mark is used for the correction process as described above and is not an object of observation, it may be erased when the image is displayed. Even if the mark is erased at the time of display, the information indicating the mark position need not be erased.
  • The process of erasing (filling in) the mark is executed by the image processing unit 230.
  • the image processing unit 230 functions as an “image processing unit” of the present invention.
  • the captured image is displayed on the display unit 240 under the control of the control unit 210.
  • the display unit 240 functions as the “display unit” of the present invention.
  • In the above embodiment, the position of the reference mirror 174 is changed to change the optical path length difference between the optical path of the signal light LS and the optical path of the reference light LR, but the method of changing the optical path length difference is not limited to this.
  • the optical path length difference can be changed by moving the fundus camera unit 1A or the OCT unit 150 with respect to the eye E to change the optical path length of the signal light LS. It is also effective to change the optical path length difference by moving the measurement object in the depth direction (z direction), particularly when the measurement object is not a living body part.
  • the computer program for executing the correction process according to the above embodiment can be stored in any recording medium that can be read by the drive device of the computer.
  • As this recording medium, for example, an optical disk or magneto-optical disk (CD-ROM/DVD-RAM/DVD-ROM/MO, etc.) or a magnetic storage medium (hard disk/floppy (registered trademark) disk/ZIP, etc.) can be used. The program can also be stored in a storage device such as a hard disk drive or memory.
  • this program can be transmitted and received through a network such as the Internet or a LAN.
  • SYMBOLS 1 Ophthalmic observation apparatus 1A Fundus camera unit 127 Shooting mask 127a Transmission area 127b Shielding area 127c, 127d Translucent part 141 Scan unit 150 OCT unit 160 Low coherence light source 174 Reference mirror 180 Spectrometer 184 CCD 200 Arithmetic Control Unit 210 Control Unit 213 Reference Position Information 220 Image Forming Unit 230 Image Processing Unit 231 Displacement Correction Unit 232 Rotation Correction Unit 240 Display Unit 250 Operation Unit T Reference Captured Image K0 Reference Center H0 Reference Line Ef ′ Fundus Image K Image Center H Center line R Scan area Ri Scan line C Scan center F1, F2, M1, M2 mark

PCT/JP2010/002240 2009-04-02 2010-03-29 眼科観察装置 WO2010113459A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-090212 2009-04-02
JP2009090212A JP5144579B2 (ja) 2009-04-02 2009-04-02 眼科観察装置

Publications (1)

Publication Number Publication Date
WO2010113459A1 true WO2010113459A1 (ja) 2010-10-07

Family

ID=42827769

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/002240 WO2010113459A1 (ja) 2009-04-02 2010-03-29 眼科観察装置

Country Status (2)

Country Link
JP (1) JP5144579B2 (ja)
WO (1) WO2010113459A1 (ja)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102670169A (zh) * 2011-03-10 2012-09-19 佳能株式会社 摄像设备及其控制方法
CN114712182A (zh) * 2022-02-21 2022-07-08 北京师范大学 家用弱视后像治疗仪

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6188297B2 (ja) * 2012-01-25 2017-08-30 キヤノン株式会社 画像処理装置、画像処理方法およびプログラム
JP6429447B2 (ja) * 2013-10-24 2018-11-28 キヤノン株式会社 情報処理装置、比較方法、位置合わせ方法及びプログラム
JP6243957B2 (ja) * 2016-04-18 2017-12-06 キヤノン株式会社 画像処理装置、眼科システム、画像処理装置の制御方法および画像処理プログラム

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000245699A (ja) * 1998-12-30 2000-09-12 Canon Inc 眼科装置
JP2006212153A (ja) * 2005-02-02 2006-08-17 Nidek Co Ltd 眼科撮影装置
JP2008154704A (ja) * 2006-12-22 2008-07-10 Topcon Corp 眼底観察装置、眼底画像表示装置及びプログラム
JP2008206684A (ja) * 2007-02-26 2008-09-11 Topcon Corp 眼底観察装置、眼底画像処理装置及びプログラム

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3664937B2 (ja) * 2000-03-27 2005-06-29 株式会社ニデック 眼科装置
DE04724579T1 (de) * 2003-04-11 2006-08-31 Bausch & Lomb Inc. System und methode zur erfassung von daten, zum ausrichten und zum verfolgen eines auges

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000245699A (ja) * 1998-12-30 2000-09-12 Canon Inc 眼科装置
JP2006212153A (ja) * 2005-02-02 2006-08-17 Nidek Co Ltd 眼科撮影装置
JP2008154704A (ja) * 2006-12-22 2008-07-10 Topcon Corp 眼底観察装置、眼底画像表示装置及びプログラム
JP2008206684A (ja) * 2007-02-26 2008-09-11 Topcon Corp 眼底観察装置、眼底画像処理装置及びプログラム

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102670169A (zh) * 2011-03-10 2012-09-19 佳能株式会社 摄像设备及其控制方法
US8992018B2 (en) 2011-03-10 2015-03-31 Canon Kabushiki Kaisha Photographing apparatus and photographing method
US9687148B2 (en) 2011-03-10 2017-06-27 Canon Kabushiki Kaisha Photographing apparatus and photographing method
CN114712182A (zh) * 2022-02-21 2022-07-08 北京师范大学 家用弱视后像治疗仪

Also Published As

Publication number Publication date
JP2010240068A (ja) 2010-10-28
JP5144579B2 (ja) 2013-02-13

Similar Documents

Publication Publication Date Title
JP5404078B2 (ja) 光画像計測装置
JP5058627B2 (ja) 眼底観察装置
JP5340636B2 (ja) 眼底観察装置
JP5324839B2 (ja) 光画像計測装置
JP4855150B2 (ja) 眼底観察装置、眼科画像処理装置及び眼科画像処理プログラム
JP4864516B2 (ja) 眼科装置
JP5061380B2 (ja) 眼底観察装置、眼科画像表示装置及びプログラム
JP4864515B2 (ja) 眼底観察装置
JP5543171B2 (ja) 光画像計測装置
JP5916110B2 (ja) 画像表示装置、画像表示方法、及びプログラム
US20130093870A1 (en) Fundus image processing apparatus and fundus observation apparatus
JP5491064B2 (ja) 光画像計測装置
WO2015029675A1 (ja) 眼科装置
WO2013187146A1 (ja) 眼科撮影装置及び眼科画像処理装置
US9072457B2 (en) Optical image measurement apparatus and optical attenuator
JP6624641B2 (ja) 眼科装置
JP5996959B2 (ja) 眼底解析装置
JP5144579B2 (ja) 眼科観察装置
JP5584345B2 (ja) 光画像計測装置及び撮影装置
US10045691B2 (en) Ophthalmologic observation apparatus using optical coherence tomography
WO2013085042A1 (ja) 眼底観察装置
JP5919175B2 (ja) 光画像計測装置
JP2019154764A (ja) 涙液層厚み測定装置及び方法
JP2024127325A (ja) 光画像形成装置、光画像形成装置の制御方法、及び、プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10758239

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10758239

Country of ref document: EP

Kind code of ref document: A1