WO2010119632A1 - Fundus observation device - Google Patents
Fundus observation device
- Publication number
- WO2010119632A1 (application PCT/JP2010/002424)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- scanning
- fundus
- signal light
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/102—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/14—Arrangements specially adapted for eye photography
- A61B3/15—Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection ; with means for relaxing
- A61B3/152—Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection ; with means for relaxing for aligning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
- A61B3/1225—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes using coherent radiation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/14—Arrangements specially adapted for eye photography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20068—Projection on vertical or horizontal image axis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Definitions
- The present invention relates to a fundus oculi observation device that forms an image of the fundus oculi using optical coherence tomography (OCT).
- Optical coherence tomography, which forms an image representing the surface form and internal form of an object to be measured using a light beam from a laser light source or the like, has attracted attention.
- Unlike an X-ray CT apparatus, optical coherence tomography is not invasive to the human body, and it is therefore expected to be applied particularly in the medical and biological fields.
- Patent Document 1 discloses an apparatus to which optical coherence tomography is applied.
- In this apparatus, the measuring arm scans an object with a rotary turning mirror (galvano mirror), a reference mirror is installed on the reference arm, and an interferometer is provided at the exit so that the intensity of the interference light of the light beams from the measuring arm and the reference arm is analyzed by a spectrometer.
- Further, the reference arm is configured to change the phase of the reference light beam stepwise by discontinuous values.
- Patent Document 1 uses a so-called “Fourier Domain OCT (Fourier Domain Optical Coherence Tomography)” technique.
- That is, a low-coherence light beam is irradiated onto the object to be measured, the reflected light and the reference light are superimposed to generate interference light, the spectral intensity distribution of the interference light is acquired, and a Fourier transform is applied to it so as to image the form of the object to be measured in the depth direction (z direction).
- This type of technique is also referred to as the spectral domain type.
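For orientation, the generic Fourier domain relation behind this technique can be written as follows; the notation is illustrative and is not taken from Patent Document 1. For a single reflector at depth $z_s$ with reflectivity $R_s$, a reference reflectivity $R_r$, and a source spectrum $S(k)$ sampled over wavenumber $k$:

$$I(k) \;\approx\; S(k)\left[R_r + R_s + 2\sqrt{R_r R_s}\,\cos(2 k z_s)\right], \qquad a(z) \;=\; \bigl|\mathcal{F}^{-1}_{k \to z}\{I(k)\}\bigr|.$$

The cosine fringe of frequency $2 z_s$ in the spectrum therefore produces a peak of the depth profile $a(z)$ near $z = z_s$; this depth profile is the one-dimensional image in the depth direction referred to throughout the rest of this document.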
- The apparatus described in Patent Document 1 includes a galvano mirror that scans a light beam (signal light), thereby forming an image of a desired measurement target region of the object to be measured. Since this apparatus is configured to scan the light beam only in one direction (x direction) orthogonal to the z direction, the image formed by this apparatus is a two-dimensional tomographic image in the depth direction (z direction) along the scanning direction (x direction) of the light beam.
- In Patent Document 2, a technique is disclosed in which a plurality of two-dimensional tomographic images in the horizontal direction are formed by scanning the signal light in the horizontal direction (x direction) and the vertical direction (y direction), and three-dimensional tomographic information of the measurement range is acquired and imaged based on the plurality of tomographic images.
- Examples of such three-dimensional imaging include a method of displaying a plurality of tomographic images arranged side by side in the vertical direction (referred to as stack data) and a method of rendering a plurality of tomographic images to form a three-dimensional image.
- Patent Documents 3 and 4 disclose other types of OCT apparatuses.
- In Patent Document 3, an OCT apparatus is described that scans (sweeps) the wavelength of the light applied to the object to be measured, acquires a spectral intensity distribution based on the interference light obtained by superimposing the reflected light of each wavelength and the reference light, and images the form of the object to be measured by applying a Fourier transform to the distribution.
- Such an OCT apparatus is called a swept source type.
- the swept source type is an example of a Fourier domain type.
- In Patent Document 4, an OCT apparatus is described that irradiates the object to be measured with light having a predetermined beam diameter, analyzes the components of the interference light obtained by superimposing the reflected light and the reference light, and thereby forms an image of the object to be measured in a cross-section orthogonal to the traveling direction of the light. Such an OCT apparatus is called a full-field type or en-face type.
- Patent Document 5 discloses a configuration in which optical coherence tomography is applied to the ophthalmic field. With such a fundus oculi observation device, a tomographic image or a three-dimensional image of the fundus can be acquired. Before OCT apparatuses came to be applied to ophthalmology, fundus observation apparatuses such as fundus cameras were used (see, for example, Patent Document 6).
- the fundus oculi observation device using optical coherence tomography has an advantage that a tomographic image and a three-dimensional image of the fundus can be obtained as compared with a fundus camera that only photographs the fundus from the front. Therefore, it is expected to contribute to improvement of diagnostic accuracy and early detection of lesions.
- In such a fundus oculi observation device, measurement is performed by two-dimensionally scanning the signal light, and this scan takes several seconds. The eye to be examined may therefore move (for example, because of fixation disparity) or blink during the scan. If that happens, the accuracy of the image deteriorates; for example, the three-dimensional image is distorted, or images of part of the measurement target region cannot be obtained.
- The present invention has been made to solve the above problems, and its object is to provide a fundus oculi observation device that can acquire a highly accurate OCT image even when the eye to be examined moves or blinks during scanning of the signal light.
- The invention according to claim 1 is a fundus oculi observation device comprising: an optical system that divides low-coherence light into signal light and reference light, superimposes the signal light passing through the fundus of the eye to be examined and the reference light passing through a reference light path to generate interference light, and detects the generated interference light; a scanning unit that scans the fundus with the signal light so as to sequentially irradiate a plurality of scanning points with the signal light; an image forming means that forms, based on the detection result of the interference light by the optical system, a one-dimensional image extending in the depth direction of the fundus at each of the plurality of scanning points; a detection means that detects the position of the fundus at predetermined time intervals when the signal light is scanned; and a calculation means that calculates a positional shift amount of the plurality of one-dimensional images in the fundus surface direction based on the time change of the detected position of the fundus.
- The invention according to claim 2 is the fundus oculi observation device according to claim 1, wherein the predetermined time interval is approximately an integer multiple of the scanning time interval from when the signal light is irradiated to one scanning point of the plurality of scanning points until the signal light is irradiated to the next scanning point; when the signal light is sequentially irradiated to the plurality of scanning points by the scanning unit, the detection means detects the position of the fundus each time the integer number of scanning points is scanned; and the calculation means divides the plurality of one-dimensional images into one-dimensional image groups of the integer number each, specifies the position of each one-dimensional image group based on the detection result of the position of the fundus obtained when the integer number of scanning points corresponding to that one-dimensional image group was scanned, and calculates the positional shift amount based on the specified positions of the one-dimensional image groups.
- The invention according to claim 3 is the fundus oculi observation device according to claim 2, wherein the integer is 1, each one-dimensional image group is composed of one one-dimensional image, and the computing means specifies, for each of the plurality of one-dimensional images, the position of that one-dimensional image based on the detection result of the position of the fundus obtained when the scanning point corresponding to that one-dimensional image was scanned, and calculates the positional shift amount based on the plurality of specified positions.
- The invention according to claim 4 is the fundus oculi observation device according to claim 2, wherein the integer is 2 or more, each one-dimensional image group is composed of two or more one-dimensional images, and the calculation means estimates the positional shift amounts of the one-dimensional images included in a one-dimensional image group and/or the next one-dimensional image group based on the detection result of the position of the fundus obtained when the two or more scanning points corresponding to that one-dimensional image group were scanned and the detection result of the position of the fundus obtained when the two or more scanning points corresponding to the next one-dimensional image group were scanned.
- The invention according to claim 5 is the fundus oculi observation device according to claim 1, wherein the detection means includes: an image capturing means that captures a moving image of the fundus by repeatedly photographing the fundus at the predetermined time intervals while the signal light is scanned by the scanning means; and an image area specifying means that specifies an image area of a characteristic part of the fundus in each still image forming the moving image, and the detection means obtains the position of the image area in each still image as the position of the fundus.
- The invention according to claim 6 is the fundus oculi observation device according to claim 5, wherein the calculation means includes a scanning point specifying means that, when there is a still image in which the image area is not specified by the image area specifying means, specifies the scanning point of the one-dimensional image corresponding to that still image; the scanning means again irradiates the specified scanning point with the signal light; and the image forming means forms a new one-dimensional image based on the detection result of the interference light between the re-irradiated signal light and the reference light.
- The invention according to claim 7 is the fundus oculi observation device according to claim 1, wherein the calculation means includes a first correction unit that corrects the positions of the plurality of one-dimensional images in the fundus surface direction based on the calculated positional shift amount.
- The invention according to claim 8 is the fundus oculi observation device according to claim 1, wherein, when the signal light is scanned, the calculation means sequentially calculates the positional shift amount based on the positions of the fundus sequentially detected at the predetermined time intervals, and the device further comprises a control means that controls the scanning unit based on the sequentially calculated positional shift amounts so as to correct the irradiation position of the signal light on the fundus.
- The invention according to claim 9 is the fundus oculi observation device according to claim 1, wherein the plurality of scanning points are arranged along a predetermined scanning line; the scanning means repeatedly scans the signal light along the predetermined scanning line; the image forming means repeatedly forms the plurality of one-dimensional images corresponding to the plurality of scanning points according to the repeated scanning; the computing means repeatedly calculates the positional shift amount according to the repeated scanning of the signal light, and includes a determination unit that determines whether each of the repeatedly calculated positional shift amounts is included in a predetermined allowable range, and an image superimposing unit that superimposes, for each one-dimensional image corresponding to each scanning point, the set of one-dimensional images corresponding to the positional shift amounts determined to be included in the predetermined allowable range; and the image forming unit forms a tomographic image along the predetermined scanning line by arranging the new one-dimensional images formed by the superposition according to the arrangement of the plurality of scanning points.
- The invention according to claim 10 is the fundus oculi observation device according to claim 1, wherein the calculation means includes an image specifying unit that specifies one-dimensional images for which the calculated positional shift amount is equal to or greater than a predetermined value; the scanning unit again irradiates the signal light toward the scanning point corresponding to each one-dimensional image specified by the image specifying unit; and the image forming unit forms a new one-dimensional image at each re-irradiated scanning point based on the detection result of the interference light between the re-irradiated signal light and the reference light.
- The invention according to claim 11 is the fundus oculi observation device according to claim 1, wherein the plurality of scanning points are arranged along a predetermined scanning line; the computing means includes an image selection unit that, for each of the plurality of scanning points, selects the one-dimensional image closest to the original position of that scanning point based on the calculated positional shift amounts; and the image forming unit forms a tomographic image along the predetermined scanning line by arranging the selected one-dimensional images according to the arrangement of the plurality of scanning points.
- The invention according to claim 12 is the fundus oculi observation device according to claim 1, wherein the calculation means calculates a positional shift amount of the plurality of one-dimensional images in the depth direction based on the detection result of the interference light between the signal light separately scanned by the scanning means and the reference light.
- The invention according to claim 13 is the fundus oculi observation device according to claim 12, wherein, as the separate scanning, the scanning means sequentially irradiates the signal light to a predetermined number of scanning points along a scanning line that intersects the array direction of the plurality of scanning points; the image forming unit forms the one-dimensional image at each of the predetermined number of scanning points and forms a tomographic image corresponding to that scanning line based on the predetermined number of one-dimensional images thus formed; and the computing unit specifies an image region of a characteristic layer of the fundus in that tomographic image, specifies an image region of the characteristic layer in the tomographic image formed by arranging the one-dimensional images of the plurality of scanning points side by side, calculates a displacement in the depth direction between the image region corresponding to the scanning line and the image region corresponding to the plurality of scanning points, and calculates the positional shift amount of the plurality of one-dimensional images in the depth direction based on the calculated displacement.
- The invention according to claim 14 is the fundus oculi observation device according to claim 12, wherein the calculation means includes a second correction unit that corrects the positions of the plurality of one-dimensional images in the depth direction based on the calculated positional shift amount in the depth direction.
- The invention according to claim 15 is a fundus oculi observation device comprising: an optical system that divides low-coherence light into signal light and reference light, superimposes the signal light passing through the fundus of the eye to be examined and the reference light passing through a reference light path to generate interference light, and detects the generated interference light; a scanning unit that two-dimensionally scans the fundus with the signal light; an image forming means that forms, based on the detection result of the interference light, a three-dimensional image corresponding to the region of the fundus scanned two-dimensionally; an imaging means that forms a moving image of the fundus when the signal light is scanned two-dimensionally; and a correction means that corrects the position of the three-dimensional image in the fundus surface direction based on the formed moving image, and corrects the position of the three-dimensional image in the fundus depth direction based on a tomographic image of the fundus formed by the image forming means from the detection result of the interference light between the signal light separately scanned by the scanning unit and the reference light.
- The invention according to claim 16 is the fundus oculi observation device according to claim 15, wherein, as the two-dimensional scanning, the scanning means scans the signal light along each of a plurality of scanning lines parallel to each other; the image forming unit forms a tomographic image corresponding to each of the plurality of scanning lines and forms the three-dimensional image based on the formed tomographic images; the imaging unit forms the moving image by forming a still image when the signal light is scanned along each of the plurality of scanning lines; and the correcting unit specifies an image region of a characteristic part of the fundus in each of the plurality of still images, calculates the positional shift amounts of the image region in the plurality of still images, and corrects the position of the three-dimensional image in the fundus surface direction by correcting the relative positions of the plurality of tomographic images based on the calculated positional shift amounts.
- The invention according to claim 17 is the fundus oculi observation device according to claim 16, wherein the correction means calculates the intervals between the plurality of tomographic images after the relative positions are corrected, and the image forming means forms a plurality of tomographic images arranged at equal intervals based on the calculated intervals and the plurality of tomographic images, and forms the three-dimensional image based on the formed equally spaced tomographic images.
- The invention according to claim 18 is the fundus oculi observation device according to claim 15, wherein, as the two-dimensional scanning, the scanning means scans the signal light along each of a plurality of parallel scanning lines; the image forming unit forms a tomographic image corresponding to each of the plurality of scanning lines and forms the three-dimensional image based on the formed tomographic images; the imaging unit forms the moving image by forming a still image when the signal light is scanned along each of the plurality of scanning lines; the correcting unit specifies an image region of a characteristic part of the fundus in each of the plurality of still images, calculates the positional shift amounts of the image region in the plurality of still images, and determines whether each calculated positional shift amount is equal to or greater than a predetermined value; the scanning means scans the signal light again along the scanning lines located in the vicinity of the scanning line of the tomographic image corresponding to a still image for which the positional shift amount is determined to be equal to or greater than the predetermined value; and the image forming unit forms new tomographic images based on the detection result of the interference light between the rescanned signal light and the reference light, and forms a three-dimensional image corresponding to the vicinity region based on the new tomographic images.
- The invention according to claim 19 is the fundus oculi observation device according to claim 18, wherein the image forming means forms the three-dimensional image based on the tomographic images corresponding to the still images for which the positional shift amount is determined to be less than the predetermined value and on the new tomographic images.
- The invention according to claim 20 is the fundus oculi observation device according to claim 15, wherein, as the two-dimensional scanning, the scanning means scans the signal light along each of a plurality of scanning lines parallel to each other; the image forming unit forms a tomographic image corresponding to each of the plurality of scanning lines and forms the three-dimensional image based on the formed tomographic images; the imaging unit forms the moving image by forming a still image when the signal light is scanned along each of the plurality of scanning lines; the correcting unit specifies an image region of a characteristic part of the fundus in each of the plurality of still images, calculates the positional shift amounts of the image region in the plurality of still images, and, for each of the plurality of scanning lines, selects from the plurality of tomographic images the tomographic image closest to the original position of that scanning line based on the calculated positional shift amounts; and the image forming unit forms the three-dimensional image based on the selected tomographic images.
- The invention according to claim 21 is the fundus oculi observation device according to claim 16, wherein, when there is a still image in which the image region of the characteristic part is not specified, the correction means specifies the scanning line of the tomographic image corresponding to that still image, the scanning unit scans the signal light again along the specified scanning line, and the image forming unit forms a new tomographic image based on the detection result of the interference light between the rescanned signal light and the reference light and forms a three-dimensional image of the region corresponding to that scanning line based on the new tomographic image.
- The invention according to claim 22 is the fundus oculi observation device according to claim 18, wherein, when there is a still image in which the image region of the characteristic part is not specified, the correction means specifies the scanning line of the tomographic image corresponding to that still image, the scanning unit scans the signal light again along the specified scanning line, and the image forming unit forms a new tomographic image based on the detection result of the interference light between the rescanned signal light and the reference light and forms a three-dimensional image of the region corresponding to that scanning line based on the new tomographic image.
- The invention according to claim 23 is the fundus oculi observation device according to claim 20, wherein, when there is a still image in which the image region of the characteristic part is not specified, the correction means specifies the scanning line of the tomographic image corresponding to that still image, the scanning unit scans the signal light again along the specified scanning line, and the image forming unit forms a new tomographic image based on the detection result of the interference light between the rescanned signal light and the reference light and forms a three-dimensional image of the region corresponding to that scanning line based on the new tomographic image.
- The invention according to claim 24 is the fundus oculi observation device according to claim 16, wherein the image forming means forms the three-dimensional image of the fundus based only on the central portion of each tomographic image, excluding the image regions corresponding to predetermined end regions of each of the plurality of scanning lines.
- The invention according to claim 25 is the fundus oculi observation device according to claim 18, wherein the image forming means forms the three-dimensional image of the fundus based only on the central portion of each tomographic image, excluding the image regions corresponding to predetermined end regions of each of the plurality of scanning lines.
- The invention according to claim 26 is the fundus oculi observation device according to claim 20, wherein the image forming means forms the three-dimensional image of the fundus based only on the central portion of each tomographic image, excluding the image regions corresponding to predetermined end regions of each of the plurality of scanning lines.
- The invention according to claim 27 is the fundus oculi observation device according to claim 15, wherein, as the separate scanning, the scanning means scans the signal light along each of a predetermined number of correction scanning lines intersecting the plurality of scanning lines; the image forming unit forms a correction tomographic image corresponding to each of the correction scanning lines; and the correction unit specifies an image region of a characteristic layer of the fundus in each of the predetermined number of formed correction tomographic images, and corrects the position of the three-dimensional image in the fundus depth direction by moving each of the plurality of tomographic images in the fundus depth direction so that the depth positions of the specified image region and of the image region of the characteristic layer in each of the plurality of tomographic images match.
- According to the present invention, the position of the fundus is detected at predetermined time intervals when the signal light is scanned, and the positional shift amount of the plurality of one-dimensional images in the fundus surface direction can be calculated based on the time change of the detected positions of the fundus. It is therefore possible to acquire a highly accurate OCT image even when the eye to be examined moves or blinks during scanning of the signal light.
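As a minimal illustration of this idea (not code from the patent), the sketch below assigns each one-dimensional image a surface-direction shift taken from the fundus position detected while its group of scanning points was being scanned, following the integer grouping described in claims 2 and 3; all names and the group-wise assignment are assumptions made for illustration.

```python
import numpy as np

def ascan_shifts(fundus_positions, n_points_per_sample, n_ascans, reference=0):
    """Assign an (x, y) shift to every one-dimensional image (A-scan).

    fundus_positions: array of shape (n_samples, 2); fundus (x, y) detected at
        fixed time intervals during the scan (e.g. from feature tracking).
    n_points_per_sample: number of scanning points irradiated between two
        consecutive position detections (the integer multiple of claim 2).
    n_ascans: total number of scanning points / one-dimensional images.
    reference: index of the detection taken as the zero-shift reference.
    """
    fundus_positions = np.asarray(fundus_positions, dtype=float)
    # Displacement of each detection relative to the reference detection.
    shifts_per_sample = fundus_positions - fundus_positions[reference]
    # Each group of n_points_per_sample consecutive A-scans shares the shift
    # of the detection made while that group was being scanned.
    group_index = np.arange(n_ascans) // n_points_per_sample
    group_index = np.clip(group_index, 0, len(fundus_positions) - 1)
    return shifts_per_sample[group_index]      # shape (n_ascans, 2)

# Example: 4 detections, 3 scanning points per detection, 12 A-scans.
positions = [(0.0, 0.0), (1.5, 0.0), (1.5, -2.0), (0.5, -2.0)]
print(ascan_shifts(positions, n_points_per_sample=3, n_ascans=12))
```

With the integer set to 1, every A-scan receives its own detected position, which corresponds to the case of claim 3.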
- Further, according to the present invention, the position of the three-dimensional image of the fundus in the fundus surface direction is corrected based on the fundus moving image, and the position of the three-dimensional image in the fundus depth direction can be corrected based on a tomographic image of the fundus formed from the detection result of the interference light between the signal light separately scanned by the scanning unit and the reference light. Therefore, even when the eye to be examined moves or blinks during scanning of the signal light, it is possible to acquire a highly accurate three-dimensional image (OCT image).
- the fundus oculi observation device forms a tomographic image of the fundus oculi using optical coherence tomography.
- Any type of optical coherence tomography that involves scanning of the signal light, such as the Fourier domain type or the swept source type, can be applied.
- an image acquired by optical coherence tomography may be referred to as an OCT image.
- a measurement operation for forming an OCT image may be referred to as OCT measurement.
- a fundus oculi observation device capable of acquiring both a tomographic image and a captured image of the fundus as in the device disclosed in Patent Document 5 is taken up.
- the fundus oculi observation device 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200.
- The fundus camera unit 2 has almost the same optical system as a conventional fundus camera.
- the OCT unit 100 is provided with an optical system for acquiring an OCT image of the fundus.
- the arithmetic control unit 200 includes a computer that executes various arithmetic processes and control processes.
- the fundus camera unit 2 shown in FIG. 1 is provided with an optical system for forming a two-dimensional image (fundus image) representing the surface form of the fundus oculi Ef of the eye E to be examined.
- the fundus image includes an observation image and a captured image.
- the observation image is, for example, a monochrome moving image formed at a predetermined frame rate using near infrared light.
- the captured image is a color image obtained by flashing visible light, for example.
- the fundus camera unit 2 may be configured to be able to acquire images other than these, for example, a fluorescein fluorescent image or an indocyanine green fluorescent image.
- The fundus camera unit 2 is provided with a chin rest and a forehead rest for supporting the subject's face so that the face does not move, as in a conventional fundus camera. Further, the fundus camera unit 2 is provided with an illumination optical system 10 and a photographing optical system 30, as in a conventional fundus camera.
- the illumination optical system 10 irradiates the fundus oculi Ef with illumination light.
- the photographing optical system 30 guides the fundus reflection light of the illumination light to the imaging device (CCD image sensors 35 and 38). Further, the imaging optical system 30 guides the signal light LS from the OCT unit 100 to the fundus oculi Ef and guides the signal light LS passing through the fundus oculi Ef to the OCT unit 100.
- the observation light source 11 of the illumination optical system 10 is composed of, for example, a halogen lamp.
- The light (observation illumination light) output from the observation light source 11 is reflected by the reflection mirror 12 having a curved reflection surface, passes through the condensing lens 13 and the visible cut filter 14, and becomes near-infrared light. Further, the observation illumination light is once converged in the vicinity of the photographing light source 15, reflected by the mirror 16, and passes through the relay lenses 17 and 18, the diaphragm 19, and the relay lens 20. Then, the observation illumination light is reflected by the peripheral part (the region around the hole) of the perforated mirror 21 and illuminates the fundus oculi Ef via the objective lens 22.
- The fundus reflection light of the observation illumination light is refracted by the objective lens 22, passes through the hole formed in the central region of the perforated mirror 21, passes through the dichroic mirror 55 and the focusing lens 31, and is then reflected by the dichroic mirror 32. Further, the fundus reflection light passes through the half mirror 40, is reflected by the dichroic mirror 33, and forms an image on the light receiving surface of the CCD image sensor 35 via the condenser lens 34.
- the CCD image sensor 35 detects fundus reflected light at a predetermined frame rate, for example.
- the display device 3 displays an image (observation image) K based on fundus reflected light detected by the CCD image sensor 35.
- the photographing light source 15 is constituted by, for example, a xenon lamp.
- the light (imaging illumination light) output from the imaging light source 15 is applied to the fundus oculi Ef through the same path as the observation illumination light.
- The fundus reflection light of the imaging illumination light is guided to the dichroic mirror 33 through the same path as that of the observation illumination light, passes through the dichroic mirror 33, is reflected by the mirror 36, and forms an image on the light receiving surface of the CCD image sensor 38 via the condenser lens 37.
- On the display device 3, an image (captured image) H based on fundus reflection light detected by the CCD image sensor 38 is displayed.
- the display device 3 that displays the observation image K and the display device 3 that displays the captured image H may be the same or different.
- the LCD 39 displays a fixation target and a visual target for visual acuity measurement.
- the fixation target is a target for fixing the eye E, and is used when photographing the fundus or forming a tomographic image.
- the visual acuity measurement target is a visual target used for measuring the visual acuity value of the eye E, such as a Landolt ring.
- the visual acuity measurement target may be simply referred to as a visual target.
- A part of the light output from the LCD 39 is reflected by the half mirror 40, reflected by the dichroic mirror 32, passes through the focusing lens 31 and the dichroic mirror 55, passes through the hole of the perforated mirror 21, is refracted by the objective lens 22, and is projected onto the fundus oculi Ef.
- The fixation position of the eye E can be changed by changing the display position of the fixation target on the screen of the LCD 39.
- Examples of the fixation position of the eye E include, as in a conventional fundus camera, a position for acquiring an image centered on the macula of the fundus oculi Ef, a position for acquiring an image centered on the optic disc, and a position for acquiring an image centered on the fundus center between the macula and the optic disc.
- the fundus camera unit 2 is provided with an alignment optical system 50 and a focus optical system 60 as in the conventional fundus camera.
- the alignment optical system 50 generates a visual target (alignment visual target) for performing alignment (alignment) of the apparatus optical system with respect to the eye E.
- the focus optical system 60 generates a visual target (split visual target) for focusing on the fundus oculi Ef.
- the light (alignment light) output from the LED (Light Emitting Diode) 51 of the alignment optical system 50 is reflected by the dichroic mirror 55 via the apertures 52 and 53 and the relay lens 54, and passes through the hole portion of the perforated mirror 21. It passes through and is projected onto the cornea of the eye E by the objective lens 22.
- the corneal reflection light of the alignment light passes through the objective lens 22 and the hole, and a part thereof passes through the dichroic mirror 55, passes through the focusing lens 31, is reflected by the dichroic mirror 32, and passes through the half mirror 40. Then, it is reflected by the dichroic mirror 33 and projected onto the light receiving surface of the CCD image sensor 35 by the condenser lens 34.
- a light reception image (alignment target) by the CCD image sensor 35 is displayed on the display device 3 together with the observation image K.
- the user performs alignment by performing the same operation as that of a conventional fundus camera. Further, the arithmetic control unit 200 may perform alignment by analyzing the position of the alignment target and moving the optical system.
- the reflecting surface of the reflecting rod 67 is obliquely provided on the optical path of the illumination optical system 10.
- the light (focus light) output from the LED 61 of the focus optical system 60 passes through the relay lens 62, is separated into two light beams by the split target plate 63, passes through the two-hole aperture 64, and is reflected by the mirror 65.
- the light is once focused on the reflecting surface of the reflecting bar 67 by the condenser lens 66 and reflected. Further, the focus light passes through the relay lens 20, is reflected by the perforated mirror 21, and forms an image on the fundus oculi Ef by the objective lens 22.
- the fundus reflection light of the focus light is detected by the CCD image sensor 35 through the same path as the corneal reflection light of the alignment light.
- a light reception image (split target) by the CCD image sensor 35 is displayed on the display device 3 together with the observation image.
- the arithmetic and control unit 200 analyzes the position of the split target and moves the focusing lens 31 and the focus optical system 60 to focus, as in the conventional case. Alternatively, focusing may be performed manually while visually checking the split target.
- An optical path including a mirror 41, a collimator lens 42, and galvanometer mirrors 43 and 44 is provided behind the dichroic mirror 32. This optical path is connected to the OCT unit 100.
- the galvanometer mirror 44 scans the signal light LS from the OCT unit 100 in the x direction.
- the galvanometer mirror 43 scans the signal light LS in the y direction.
- the OCT unit 100 shown in FIG. 2 is provided with an optical system for acquiring a tomographic image of the fundus oculi Ef.
- This optical system has the same configuration as a conventional Fourier domain type OCT apparatus. That is, this optical system divides low-coherence light into reference light and signal light, generates interference light by causing the signal light passing through the fundus and the reference light passing through the reference optical path to interfere with each other, and detects the spectral components of this interference light.
- This detection result (detection signal) is sent to the arithmetic control unit 200.
- the light source unit 101 outputs low-coherence light L0.
- the low coherence light L0 is, for example, light (invisible light) having a wavelength that cannot be detected by the human eye. Furthermore, the low-coherence light L0 is near-infrared light having a center wavelength of about 1050 to 1060 nm, for example.
- the light source unit 101 includes a light output device such as a super luminescent diode (SLD) or an SOA (Semiconductor Optical Amplifier).
- the low coherence light L0 output from the light source unit 101 is guided to the fiber coupler 103 by the optical fiber 102, and is divided into the signal light LS and the reference light LR.
- the fiber coupler 103 functions as both a means for splitting light (splitter) and a means for combining light (coupler), but here it is conventionally referred to as a “fiber coupler”.
- The signal light LS is guided by the optical fiber 104 and becomes a parallel light beam by the collimator lens unit 105. Further, the signal light LS is reflected by the galvanometer mirrors 44 and 43, converged by the collimator lens 42, reflected by the mirror 41, transmitted through the dichroic mirror 32, and irradiated onto the fundus oculi Ef through the same path as the light from the LCD 39. The signal light LS is scattered and reflected at the fundus oculi Ef. The scattered light and reflected light may be collectively referred to as the fundus reflection light of the signal light LS. The fundus reflection light of the signal light LS travels in the opposite direction along the same path and is guided to the fiber coupler 103.
- The reference light LR is guided by the optical fiber 106 and becomes a parallel light beam by the collimator lens unit 107. Further, the reference light LR is reflected by the mirrors 108, 109, and 110, attenuated by the ND (Neutral Density) filter 111, reflected by the mirror 112, and imaged on the reflection surface of the reference mirror 114 by the collimator lens 113. The reference light LR reflected by the reference mirror 114 travels in the opposite direction along the same path and is guided to the fiber coupler 103. Note that an optical element for dispersion compensation (such as a pair of prisms) and an optical element for polarization correction (such as a wave plate) may be provided in the optical path (reference optical path) of the reference light LR.
- the fiber coupler 103 combines the fundus reflection light of the signal light LS and the reference light LR reflected by the reference mirror 114.
- the interference light LC thus generated is guided by the optical fiber 115 and emitted from the emission end 116. Further, the interference light LC is converted into a parallel light beam by the collimator lens 117, dispersed (spectral decomposition) by the diffraction grating 118, condensed by the condenser lens 57, and projected onto the light receiving surface of the CCD image sensor 120.
- the CCD image sensor 120 is, for example, a line sensor, and detects each spectral component of the split interference light LC and converts it into electric charges.
- the CCD image sensor 120 accumulates this electric charge and generates a detection signal. Further, the CCD image sensor 120 sends this detection signal to the arithmetic control unit 200.
- In this embodiment a Michelson interferometer is used, but any type of interferometer, such as a Mach-Zehnder interferometer, can be used as appropriate.
- Instead of a CCD image sensor, another form of image sensor, for example a CMOS (Complementary Metal Oxide Semiconductor) image sensor, can be used.
- the configuration of the arithmetic control unit 200 will be described.
- the arithmetic control unit 200 analyzes the detection signal input from the CCD image sensor 120 and forms an OCT image of the fundus oculi Ef.
- the arithmetic processing for this is the same as that of a conventional Fourier domain type OCT apparatus.
- the arithmetic control unit 200 controls each part of the fundus camera unit 2, the display device 3, and the OCT unit 100.
- Specifically, the arithmetic control unit 200 performs operation control of the observation light source 11, the imaging light source 15, and the LEDs 51 and 61, operation control of the LCD 39, movement control of the focusing lens 31, movement control of the reflection rod 67, movement control of the focus optical system 60, operation control of the galvanometer mirrors 43 and 44, and so on.
- the arithmetic control unit 200 performs operation control of the light source unit 101, movement control of the reference mirror 114 and collimator lens 113, operation control of the CCD image sensor 120, and the like.
- the arithmetic control unit 200 includes, for example, a microprocessor, a RAM, a ROM, a hard disk drive, a communication interface, and the like, as in a conventional computer.
- a computer program for controlling the fundus oculi observation device 1 is stored in a storage device such as a hard disk drive.
- the arithmetic control unit 200 may include a dedicated circuit board that forms an OCT image based on a detection signal from the CCD image sensor 120.
- the arithmetic control unit 200 may include an operation device (input device) such as a keyboard and a mouse, and a display device such as an LCD.
- the fundus camera unit 2, the display device 3, the OCT unit 100, and the arithmetic control unit 200 may be configured integrally (that is, in a single casing) or may be configured separately.
- Control system. The configuration of the control system of the fundus oculi observation device 1 will be described with reference to FIG.
- the control system of the fundus oculi observation device 1 is configured around the control unit 210 of the arithmetic control unit 200.
- the control unit 210 includes, for example, the aforementioned microprocessor, RAM, ROM, hard disk drive, communication interface, and the like.
- the control unit 210 is provided with a main control unit 211 and a storage unit 212.
- the main control unit 211 performs the various controls described above.
- The main control unit 211 controls the scanning driving unit 70 and the focusing driving unit 80 of the fundus camera unit 2 and the reference driving unit 130 of the OCT unit 100.
- the scanning drive unit 70 includes a servo motor, for example, and independently changes the directions of the galvanometer mirrors 43 and 44.
- the focusing drive unit 80 includes, for example, a pulse motor, and moves the focusing lens 31 in the optical axis direction. Thereby, the focus position of the light toward the fundus oculi Ef is changed.
- the reference driving unit 130 includes, for example, a pulse motor, and moves the collimator lens 113 and the reference mirror 114 integrally along the traveling direction of the reference light LR.
- the main control unit 211 performs a process of writing data to the storage unit 212 and a process of reading data from the storage unit 212.
- the storage unit 212 stores various data. Examples of the data stored in the storage unit 212 include OCT image image data, fundus image data, and examined eye information.
- the eye information includes information about the subject such as patient ID and name, and information about the eye such as left / right eye identification information.
- the image forming unit 220 forms tomographic image data of the fundus oculi Ef based on the detection signal from the CCD image sensor 120.
- This process includes processes such as noise removal (noise reduction), filter processing, and FFT (Fast Fourier Transform) as in the conventional Fourier domain type optical coherence tomography.
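A simplified sketch of this kind of Fourier domain reconstruction step (background removal, apodization, FFT) is shown below; it assumes the detected spectrum is already uniformly sampled in wavenumber and is not meant to reproduce the actual processing of the image forming unit 220.

```python
import numpy as np

def reconstruct_ascan(spectrum):
    """Turn one detected spectral interferogram into a depth profile (A-scan).

    spectrum: 1-D array of detector counts, assumed uniformly sampled in
        wavenumber k (a real device would first resample from wavelength to k).
    """
    spectrum = np.asarray(spectrum, dtype=float)
    # Remove the DC / background term (a simple noise reduction step).
    spectrum = spectrum - spectrum.mean()
    # Apodize to suppress FFT side lobes (one possible "filter processing").
    spectrum = spectrum * np.hanning(spectrum.size)
    # Fourier transform: spectral fringes map to reflectivity versus depth.
    depth_profile = np.abs(np.fft.ifft(spectrum))
    # Keep the non-mirrored half and show it on a log scale.
    half = depth_profile[: spectrum.size // 2]
    return 20.0 * np.log10(half + 1e-12)

# Synthetic example: a single reflector produces a cosine fringe over k.
k = np.linspace(0, 2 * np.pi, 2048)
ascan = reconstruct_ascan(1.0 + 0.5 * np.cos(60 * k))
print(ascan.argmax())   # peak index corresponds to the reflector depth (~60)
```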
- the image forming unit 220 includes, for example, the above-described circuit board and communication interface.
- In this specification, image data and an "image" presented based on that image data may be identified with each other.
- the image processing unit 230 performs various types of image processing and analysis processing on the image formed by the image forming unit 220. For example, the image processing unit 230 executes various correction processes such as image brightness correction and dispersion correction.
- the image processing unit 230 forms image data of a three-dimensional image of the fundus oculi Ef by executing an interpolation process for interpolating pixels between tomographic images formed by the image forming unit 220.
- the image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system.
- As an example of the image data of a three-dimensional image, there is image data composed of voxels arranged three-dimensionally. This image data is called volume data or voxel data.
- When displaying an image based on the volume data, the image processing unit 230 performs rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) on the volume data to form image data of a pseudo three-dimensional image viewed from a specific line-of-sight direction.
- stack data of a plurality of tomographic images is image data of a three-dimensional image.
- The stack data is image data obtained by three-dimensionally arranging a plurality of tomographic images obtained along a plurality of scanning lines based on the positional relationship of the scanning lines. That is, the stack data is image data obtained by expressing a plurality of tomographic images, originally defined in individual two-dimensional coordinate systems, in one three-dimensional coordinate system (that is, by embedding them in one three-dimensional space).
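The sketch below illustrates, under assumed array shapes, how stack data can be built by embedding the tomograms in one array and how a projection such as MIP collapses it for display; it is an illustration, not the device's implementation.

```python
import numpy as np

def stack_to_volume(tomograms):
    """Embed m tomographic images (each depth x width) in one 3-D array.

    tomograms: list of m arrays of identical shape, ordered along y just as
        their scanning lines are ordered in the scanning region.
    Returns a volume indexed as volume[y, z, x] (one choice of axis order).
    """
    return np.stack([np.asarray(t, dtype=float) for t in tomograms], axis=0)

def mip_front_view(volume):
    """Maximum intensity projection along depth: a pseudo en-face view."""
    return volume.max(axis=1)          # collapse the z axis -> (y, x) image

# Example with random data: 5 tomograms, 64 depth samples, 128 A-scans each.
vol = stack_to_volume([np.random.rand(64, 128) for _ in range(5)])
print(vol.shape, mip_front_view(vol).shape)   # (5, 64, 128) (5, 128)
```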
- the image processing unit 230 includes, for example, the above-described microprocessor, RAM, ROM, hard disk drive, circuit board, and the like.
- the image processing unit 230 is provided with an x correction unit 231, a y correction unit 232, and a z correction unit 233.
- the x correction unit 231, the y correction unit 232, and the z correction unit 233 perform position correction of the three-dimensional image in the x direction (horizontal direction), the y direction (vertical direction), and the z direction (depth direction), respectively.
- the x direction and the y direction are directions along the surface of the fundus oculi Ef (fundus surface direction).
- the z direction is a direction along the depth direction of the fundus oculi Ef (fundus depth direction).
- These correction units 231 to 233 are examples of the “correction unit” of the present invention. Hereinafter, processing executed by the correction units 231 to 233 will be described.
- the x correction unit 231 corrects the position in the x direction of a plurality of tomographic images obtained by the following three-dimensional scan, thereby correcting the position in the x direction of the three-dimensional image based on these tomographic images.
- the signal light LS is scanned along a plurality of scanning lines arranged in the y direction.
- Each scanning line includes a plurality of scanning points arranged linearly along the x direction.
- During this three-dimensional scan, an observation image K (moving image) of the fundus oculi Ef is acquired. The frame rate is set so that a still image (frame) corresponding to the scan along each scanning line is obtained. Thereby, a still image can be associated with each scanning line (each tomographic image).
- the eye E may move during the scan (eg, fixation disparity) or may blink.
- FIG. 4 shows the arrangement of the tomographic image Gi when the fundus oculi Ef is viewed from the fundus oculi observation device 1 side.
- When such eye movement or blinking does not occur, tomographic images Gi that are not displaced in the x direction (the direction along each scanning line Ri) and that are arranged at equal intervals in the scanning region R are obtained.
- the observation image K of the fundus oculi Ef is simultaneously acquired, and a still image (frame) corresponding to each scanning line Ri (each tomographic image Gi) is obtained.
- the x correction unit 231 analyzes the pixel value (luminance value) of each still image, and specifies the image region of the characteristic part of the fundus oculi Ef in the still image.
- the characteristic part include an optic disc, a macula, a blood vessel, a bifurcation of a blood vessel, and a lesion.
- the x correction unit 231 calculates the positional deviation amount of the image area in these still images.
- For example, the displacement of the image area in the still image corresponding to each of the other tomographic images G2 to Gm is calculated with respect to the image area in the still image (reference still image) corresponding to the first tomographic image G1.
- the displacement calculated here is a displacement in the x direction and a displacement in the y direction.
- the x correction unit 231 corrects the relative positions in the x direction of the plurality of tomographic images Gi so as to cancel the calculated positional deviation amount (displacement). Thereby, the position in the x direction of the three-dimensional image based on the plurality of tomographic images Gi is corrected.
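One common way to obtain such a frame-to-frame shift is phase correlation between the reference still image and each subsequent frame; the sketch below is an illustrative stand-in (the patent text does not specify the matching method), and the conversion factor between observation-image pixels and tomogram columns is an assumed parameter.

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the (dy, dx) shift of `frame` relative to `reference`
    by locating the peak of their phase correlation (both 2-D arrays)."""
    f0 = np.fft.fft2(reference - reference.mean())
    f1 = np.fft.fft2(frame - frame.mean())
    cross = np.conj(f0) * f1
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above half the size back to negative shifts.
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shift)   # (dy, dx) in pixels of the observation image

def cancel_x_shift(tomogram, dx_pixels, scale):
    """Shift one tomogram along x (columns) to cancel a measured shift;
    `scale` converts observation-image pixels to tomogram columns
    (wrap-around is used here only for brevity)."""
    return np.roll(tomogram, -int(round(dx_pixels * scale)), axis=1)

# Example: a frame shifted by (3, -5) pixels is recovered exactly.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
print(estimate_shift(ref, np.roll(ref, (3, -5), axis=(0, 1))))   # (3, -5)
```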
- the x correction unit 231 deletes a part (end region) of each tomographic image Gi included in the end regions Ra and Rb of the scanning region R. As a result, a three-dimensional image of the central portion (image region) Rc of the scanning region R is obtained.
- the y correction unit 232 corrects the relative positions of the plurality of tomographic images Gi in the y direction so as to cancel the positional shift amount (displacement) calculated based on the still image. Accordingly, the position in the y direction of the three-dimensional image based on the plurality of tomographic images Gi is corrected. Note that the y correction unit 232 may perform the process of calculating the positional deviation amount.
- the y correction unit 232 adjusts the interval between the plurality of tomographic images Gi after the relative position is corrected as described above.
- This process includes a process of filling (complementing) a portion where the tomographic image is sparse (complementary processing) and a process of thinning out a portion where the tomographic image is dense (decimating process).
- The y correction unit 232 determines whether each calculated interval is equal to or greater than a predetermined value. This predetermined value is set based on, for example, the size of the scanning region R and the number of scanning lines Ri. When an interval is determined to be equal to or greater than the predetermined value, the control unit 210 controls the scan driving unit 70 so that the signal light LS is scanned again along a scanning line located in the region sandwiched between the two tomographic images whose interval is equal to or greater than the predetermined value.
- The image forming unit 220 forms a new tomographic image based on the detection result of the interference light between the rescanned signal light LS and the reference light LR, and the image processing unit 230 forms a three-dimensional image corresponding to that region based on the new tomographic image.
- the y correction unit 232 can also perform the following processing. First, the y correction unit 232 determines whether each positional shift amount calculated based on a plurality of still images is greater than or equal to a predetermined value.
- When a positional shift amount is determined to be equal to or greater than the predetermined value, the control unit 210 controls the scan driving unit 70 so that the signal light LS is scanned again along the scanning lines located in the vicinity of the scanning line of the tomographic image corresponding to that still image.
- the image forming unit 220 forms a new tomographic image along the rescanned scanning line based on the detection result of the interference light between the rescanned signal light LS and the reference light LR. Then, the image processing unit 230 forms a three-dimensional image corresponding to the neighboring region based on these new tomographic images.
- the image processing unit 230 can also form a three-dimensional image based on a tomographic image corresponding to a still image for which the positional deviation amount is determined to be less than a predetermined value and a new tomographic image.
- The y correction unit 232 can perform the following processing. First, for each of the plurality of scanning lines, the y correction unit 232 selects, from among the plurality of tomographic images Gi, the tomographic image closest to the original position of that scanning line, based on the positional deviation amounts calculated from the plurality of still images. The original position of the scanning line is expressed by the coordinate value of the scanning line set in the scanning region R. This coordinate value (especially the y coordinate value) can easily be obtained from the size of the scanning region R and the number of scanning lines. The y correction unit 232 selects the tomographic image located closest to this coordinate position.
- the image processing unit 230 forms a three-dimensional image based only on the selected tomographic image.
- the y correction unit 232 can also perform the following processing. After the relative positions of the plurality of tomographic images Gi are corrected, the y correction unit 232 calculates the interval between these tomographic images Gi.
- the image processing unit 230 forms a plurality of tomographic images arranged at equal intervals based on the calculated intervals and these tomographic images Gi. In this process, for example, linear interpolation processing based on pixel values (luminance values) at scanning points arranged in the y direction is performed, and pixel values at positions arranged at equal intervals in the y direction are calculated. A plurality of tomographic images arranged at equal intervals can be obtained by forming an image using the calculated pixel values. Further, the image processing unit 230 forms a three-dimensional image based on these tomographic images arranged at equal intervals.
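- A sketch of this equal-interval resampling, assuming the corrected tomographic images are stacked in a numpy array and that linear interpolation is applied pixel by pixel along y; `np.interp` stands in for the linear interpolation processing described above, and all names are illustrative.

```python
import numpy as np

def resample_equal_y(volume, measured_y, n_out):
    """Resample a stack of tomographic images onto equally spaced y positions.

    volume:     (m, depth, width) array of B-scans after position correction
    measured_y: length-m array of the corrected y position of each B-scan
    n_out:      number of equally spaced output slices
    Returns (new_volume, new_y) with new_volume of shape (n_out, depth, width).
    """
    volume = np.asarray(volume, dtype=float)
    measured_y = np.asarray(measured_y, dtype=float)
    order = np.argsort(measured_y)
    measured_y, volume = measured_y[order], volume[order]
    new_y = np.linspace(measured_y[0], measured_y[-1], n_out)
    m, depth, width = volume.shape
    flat = volume.reshape(m, -1)                 # (m, depth*width)
    out = np.empty((n_out, depth * width))
    for k in range(depth * width):               # linear interpolation along y
        out[:, k] = np.interp(new_y, measured_y, flat[:, k])
    return out.reshape(n_out, depth, width), new_y
```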
- the y correction unit 232 specifies the scanning line of the tomographic image corresponding to the still image. Since the still image and the tomographic image are associated with each other as described above, and the tomographic image and the scanning line correspond to each other one to one, the processing can be easily performed.
- control unit 210 controls the scan driving unit 70 to scan the signal light LS again along the specified scanning line. At this time, an observation image K is also acquired.
- the image forming unit 220 forms a new tomographic image along the specified scanning line based on the detection result of the interference light between the signal light LS and the reference light LR that has been scanned again.
- the x correction unit 231 and the y correction unit 232 can perform the above correction processing based on the new tomographic image and the observation image K.
- the image processing unit 230 can form a three-dimensional image of a region corresponding to the specified scanning line based on the new tomographic image.
- The z correction unit 233 corrects the position in the z direction of the three-dimensional image (the plurality of tomographic images Gi) as described above. For this purpose, scanning is performed separately from the three-dimensional scan (separate scan). This separate scan is a scan in a direction intersecting the plurality of scanning lines Ri. In this embodiment, as the separate scan, the signal light LS is scanned along each of a predetermined number of scanning lines (correction scanning lines) orthogonal to the plurality of scanning lines Ri.
- the image forming unit 220 forms a tomographic image (correction tomographic image) corresponding to each correction scanning line based on the detection result of the interference light LC obtained by this separate scanning.
- the z correction unit 233 identifies the image region of the feature layer of the fundus oculi Ef in the predetermined number of correction tomographic images formed. As this feature layer, it is desirable to select a part that can be easily specified in a tomographic image, such as a part (tissue) that is clearly depicted with high luminance.
- The z correction unit 233 moves each tomographic image Gi in the fundus depth direction (z direction) so that the depth positions (z coordinate values) of the image region of the feature layer in the correction tomographic image and the image region of the feature layer in each tomographic image Gi coincide. Thereby, the position of the three-dimensional image can be corrected in the fundus depth direction.
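- A sketch of this z correction, assuming a separate step has already estimated one representative feature-layer depth (a z index) per tomographic image and one reference depth from the correction tomographic image; the scalar per-image depth and the zero-padded shift are simplifications for illustration.

```python
import numpy as np

def correct_z(volume, layer_depth, reference_depth):
    """Shift each B-scan along the depth (z) axis so that its detected
    feature-layer depth lines up with the reference depth.

    volume:          (m, depth, width) stack of B-scans
    layer_depth:     length-m sequence, detected z index of the feature layer per B-scan
    reference_depth: z index of the same layer in the correction tomographic image
    """
    volume = np.asarray(volume, dtype=float)
    out = np.zeros_like(volume)
    for i, d in enumerate(layer_depth):
        shift = int(round(reference_depth - d))   # positive: move the image deeper
        out[i] = np.roll(volume[i], shift, axis=0)
        if shift > 0:                              # zero-fill the wrapped-around rows
            out[i, :shift, :] = 0
        elif shift < 0:
            out[i, shift:, :] = 0
    return out
```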
- the image forming unit 220 and the image processing unit 230 are examples of the “image forming unit” of the present invention.
- the display unit 240 includes the display device of the arithmetic control unit 200 described above.
- the operation unit 250 includes the operation device of the arithmetic control unit 200 described above.
- the operation unit 250 may include various buttons and keys provided on the housing of the fundus oculi observation device 1 or outside.
- the operation unit 250 may include a joystick or an operation panel provided on the housing.
- the display unit 240 may include various display devices such as a touch panel monitor provided on the housing of the fundus camera unit 2.
- the display unit 240 and the operation unit 250 need not be configured as individual devices.
- a device in which a display function and an operation function are integrated, such as a touch panel monitor, can be used.
- Examples of the scanning modes of the signal light LS by the fundus oculi observation device 1 include a horizontal scan, a vertical scan, a cross scan, a radial scan, a circle scan, a concentric scan, and a spiral (vortex) scan. These scanning modes are selectively used as appropriate in consideration of the observation site of the fundus, the analysis target (such as retinal thickness), the time required for scanning, the precision of scanning, and the like.
- the horizontal scan is to scan the signal light LS in the horizontal direction (x direction).
- the horizontal scan also includes an aspect in which the signal light LS is scanned along a plurality of horizontal scanning lines arranged in the vertical direction (y direction). In this aspect, it is possible to arbitrarily set the scanning line interval. Further, the above-described three-dimensional image can be formed by sufficiently narrowing the interval between adjacent scanning lines (three-dimensional scanning). The same applies to the vertical scan.
- the cross scan scans the signal light LS along a cross-shaped trajectory composed of two linear trajectories (straight trajectories) orthogonal to each other.
- The radial scan scans the signal light LS along a radial trajectory composed of a plurality of linear trajectories arranged at predetermined angles.
- The cross scan is an example of the radial scan.
- the circle scan scans the signal light LS along a circular locus.
- In the concentric scan, the signal light LS is scanned along a plurality of circular trajectories arranged concentrically around a predetermined center position.
- a circle scan is considered a special case of a concentric scan.
- In the spiral (vortex) scan, the signal light LS is scanned along a spiral locus while the rotation radius is gradually reduced (or increased).
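- Each of these modes reduces to a sequence of (x, y) targets handed to the two galvano mirrors. A brief illustrative sketch for the circle and spiral cases follows; the parameter names are assumptions and not taken from the patent.

```python
import numpy as np

def circle_scan(center, radius, n_points):
    """(x, y) sample positions along one circular scan trajectory."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.stack([center[0] + radius * np.cos(t),
                     center[1] + radius * np.sin(t)], axis=1)

def spiral_scan(center, r_max, n_turns, points_per_turn):
    """(x, y) samples along a spiral whose radius grows linearly from 0 to r_max."""
    n = n_turns * points_per_turn
    t = np.linspace(0.0, 2.0 * np.pi * n_turns, n)
    r = np.linspace(0.0, r_max, n)
    return np.stack([center[0] + r * np.cos(t),
                     center[1] + r * np.sin(t)], axis=1)
```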
- With the configuration described above, the scanning unit 141 can scan the signal light LS independently in the x direction and the y direction. Therefore, the scanning unit 141 can scan the signal light LS along an arbitrary locus on the xy plane, whereby the various scanning modes described above can be realized.
- By scanning the signal light LS in such a manner, a tomographic image in the depth direction (z direction) along the scanning line (scanning locus) can be formed.
- the above-described three-dimensional image can be formed.
- The region on the fundus oculi Ef scanned with the signal light LS in this way is called a scanning region, as mentioned above.
- the scanning area in the three-dimensional scan is a rectangular area in which a plurality of horizontal scans are arranged (see the scanning area R in FIG. 4).
- the scanning area in the concentric scan is a disk-shaped area surrounded by the locus of the circular scan with the maximum diameter.
- the scanning area in the radial scan is a disk-shaped (or polygonal) area connecting both end positions of each scan line.
- According to the fundus oculi observation device 1, even if tomographic images Gi as shown in FIG. 4 are obtained, the positions of the tomographic images Gi (three-dimensional image) in the x direction and the y direction can be corrected based on the observation image K.
- Further, according to the fundus oculi observation device 1, it is possible to complement the tomographic images by scanning again the region where the tomographic images Gi (scanning lines Ri) are sparse. Thereby, as shown in FIG. 5, new tomographic images Jk along the scanning lines Rk in the sparse region Rd are acquired, and a three-dimensional image of the sparse region Rd can be formed based on these tomographic images Jk.
- the tomographic image Gi can be thinned out for a portion where the tomographic image Gi is dense.
- a plurality of tomographic images arranged at suitable intervals can be acquired, and a suitable three-dimensional image can be obtained.
- In addition, according to the fundus oculi observation device, the position of the three-dimensional image in the fundus depth direction can be corrected based on the tomographic images (correction tomographic images) formed from the detection result of the interference light LC of the signal light LS and the reference light LR scanned separately from the three-dimensional scan.
- Further, the intervals between the plurality of tomographic images Gi after the relative positions are corrected can be calculated, a plurality of tomographic images arranged at equal intervals can be formed based on the calculated intervals and the plurality of tomographic images Gi, and a three-dimensional image can be formed based on these equally spaced tomographic images.
- Further, the image region of the characteristic part of the fundus oculi Ef in each still image constituting the observation image K can be specified and the positional deviation amounts of these image regions can be calculated. When a positional deviation amount is determined to be equal to or greater than a predetermined value, the signal light LS can be scanned again along the scanning lines located in the vicinity of the scanning line of the tomographic image corresponding to that still image to form new tomographic images, and a three-dimensional image corresponding to the neighboring region can be formed based on the new tomographic images.
- Further, a tomographic image closest to the original position of each scanning line Ri can be selected from the plurality of tomographic images Gi based on the positional deviation amounts calculated above, and a three-dimensional image can be formed based on the selected tomographic images.
- Further, the scanning line of the tomographic image corresponding to a still image can be specified, the signal light LS can be scanned again along the specified scanning line to form a new tomographic image, and a three-dimensional image of the region corresponding to that scanning line can be formed based on the new tomographic image.
- According to the fundus oculi observation device 1 acting in this way, it is possible to acquire a highly accurate three-dimensional image even when the eye E moves or blinks during scanning of the signal light LS.
- As described above, each scanning line is composed of a plurality of scanning points.
- In this embodiment, a technique for obtaining a positional deviation amount in units of one or more scanning points will be described.
- the obtained positional deviation amount can be used for correcting the positional deviation as in the first embodiment, or can be used for other purposes.
- an application to a technique for forming a highly accurate image by superimposing two or more images obtained by scanning the same part of the fundus will be described.
- the fundus oculi observation device of this embodiment performs the same measurement as in the first embodiment, and forms a one-dimensional image extending in the depth direction of the fundus at each scanning point.
- This one-dimensional image is called an A scan image.
- the tomographic image is formed by arranging the plurality of A-scan images according to the arrangement of the plurality of scanning points.
- The fundus oculi observation device of this embodiment detects the fundus position at predetermined time intervals while the signal light is scanned, and calculates the positional deviation amounts of the plurality of A-scan images in the fundus surface direction (xy direction) based on the temporal change of the detected fundus position.
- the signal light LS is irradiated toward each scanning point Rij.
- If the eye E moves during measurement, the actual irradiation position Tij of the signal light LS deviates from the original scanning point Rij, as shown in FIG. 7B.
- As a result, the position of the A-scan image that should depict the site of the fundus oculi Ef corresponding to the scanning point Rij is shifted (that is, an A-scan image depicting the site of the fundus oculi Ef corresponding to the actual irradiation position Tij ends up being obtained). This is the positional deviation of the A-scan image. In this embodiment, the amount of such positional deviation (positional deviation amount) of the A-scan image is obtained.
- the positional deviation amount of each A-scan image may be obtained, or the positional deviation amounts of a predetermined number of consecutive A-scan images may be obtained collectively.
- The positional deviation amounts of the n A-scan images on each scanning line Ri are obtained together; this is an example of the latter process.
- the positional deviation amount of the A scan image is a vector amount. That is, the positional deviation amount includes information (displacement direction information) indicating the displacement direction of the actual irradiation position Tij with respect to the scanning point Rij and information (deviation amount information) indicating the displacement amount.
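- The vector nature of the positional deviation amount can be illustrated with a small helper that splits the offset of the actual irradiation position Tij from the scanning point Rij into the displacement direction and the displacement amount mentioned above (a hypothetical function, for illustration only).

```python
import math

def deviation_vector(scan_point, irradiation_pos):
    """Positional deviation of the actual irradiation position Tij with respect
    to the planned scanning point Rij, split into direction and amount."""
    dx = irradiation_pos[0] - scan_point[0]
    dy = irradiation_pos[1] - scan_point[1]
    amount = math.hypot(dx, dy)
    direction = math.atan2(dy, dx)       # radians, measured from the +x axis
    return {"dx": dx, "dy": dy, "amount": amount, "direction": direction}

print(deviation_vector((0.0, 0.0), (3.0, 4.0)))  # amount 5.0, direction ≈ 0.93 rad
```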
- the fundus oculi observation device has the following configuration.
- the fundus oculi observation device has a hardware configuration similar to that of the first embodiment. That is, this fundus oculi observation device has the configuration shown in FIGS. 1 and 2. The following description will be given with reference to these drawings as appropriate.
- Control system: The configuration of the control system of the fundus oculi observation device will be described. A part of the control system of this fundus oculi observation device is the same as that of the first embodiment (see FIG. 3). An example of the configuration of the control system of this fundus oculi observation device is shown in FIG. 8. Among the components shown in FIG. 8, the same reference symbols are assigned to the same components as in the first embodiment.
- the configuration other than the image processing unit 230 is the same as that of the first embodiment.
- the image processing unit 230 is provided with a characteristic part specifying unit 261, a calculation unit 262, a scanning point specifying unit 265, and a correcting unit 266.
- This fundus oculi observation device forms an observation image K (moving image) of the fundus oculi Ef using the observation light source 11 and the CCD image sensor 35.
- the observation image K is obtained by photographing the fundus oculi Ef at a predetermined frame rate.
- the reciprocal of this frame rate corresponds to the “predetermined time interval” of the present invention.
- this fundus oculi observation device forms an observation image K by photographing the fundus oculi Ef when the signal light LS is scanned.
- the configuration for forming the observation image K (the illumination optical system 10 and the imaging optical system 30) is an example of the “imaging unit” of the present invention.
- the characteristic part specifying unit 261 analyzes each still image forming the observation image K and specifies an image region in the characteristic part of the fundus oculi Ef. This process has been described in the first embodiment.
- the characteristic part specifying unit 261 is an example of the “image region specifying unit” of the present invention.
- the characteristic part specifying unit 261 obtains the position of the image region of the characteristic part in each still image as the position of the fundus oculi Ef. That is, a two-dimensional coordinate system is defined in advance for each still image, and the characteristic part specifying unit 261 uses the coordinate value of the image area in the two-dimensional coordinate system as the position of the fundus oculi Ef.
- As the coordinate value of the image area, for example, the coordinate value of a feature point (center point, barycentric point, etc.) in the image area can be used.
- the two-dimensional coordinate system and the xy coordinate system are associated with each other so that coordinate conversion is possible.
- the xy coordinate system itself can be used as the two-dimensional coordinate system.
- Such photographing means and image area specifying means constitute an example of the “detecting means” of the present invention.
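- As an illustration of how such a detecting means could derive the fundus position from each frame, the sketch below uses a simple intensity threshold and the centroid of the resulting region as the feature-point coordinate; the thresholding is only a stand-in for whatever feature extraction the device actually applies.

```python
import numpy as np

def fundus_position(frame, threshold):
    """Return the (x, y) centroid of the characteristic image region in one
    frame of the observation image, or None if no region is found."""
    mask = np.asarray(frame) >= threshold
    if not mask.any():
        return None                      # e.g. a blink: the feature is not visible
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def frame_displacement(frame, reference_frame, threshold):
    """Displacement of the feature centroid relative to the reference frame."""
    p = fundus_position(frame, threshold)
    p0 = fundus_position(reference_frame, threshold)
    if p is None or p0 is None:
        return None
    return p[0] - p0[0], p[1] - p0[1]
```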
- Based on the temporal change in the position of the fundus oculi Ef obtained by the characteristic part specifying unit 261, the calculation unit 262 calculates the positional deviation amounts of the plurality of A-scan images in the fundus surface direction.
- the calculation unit 262 is provided with a position specifying unit 263 and a positional deviation amount calculation unit 264.
- Here, the scanning time interval means the time interval from when the signal light LS is irradiated to one scanning point Rij until the signal light LS is irradiated to the next scanning point Ri(j+1).
- The time interval for switching from one scanning line to the next (scanning line switching time interval) may be the same as or different from the scanning time interval. If they are different, the position detection interval may be controlled in accordance with the scanning line switching time interval. Alternatively, instead of controlling the position detection interval, the scanning line switching time interval may be set to a value that is an integral multiple of the scanning time interval.
- The position detection interval is set to an integer multiple (Q ≥ 1) of the scanning time interval. That is, this fundus oculi observation device detects the position of the fundus oculi Ef each time Q scanning points have been scanned while sequentially irradiating the plurality of scanning points Rij with the signal light LS.
- When Q = 1, the fundus oculi observation device detects the position of the fundus oculi Ef every time the signal light LS is irradiated to a scanning point Rij; when Q is 2 or more, it detects the position of the fundus oculi Ef once every Q scanning points. Such an operation is realized by synchronizing the control of the accumulation time of the CCD image sensor 35 with the control of the scanning drive unit 70.
- The calculation unit 262 divides the plurality of sequentially formed A-scan images into A-scan image groups of Q images each.
- This “division” may be a method of actually dividing the plurality of A-scan images into groups of Q (for example, storing each A-scan image group separately), or a method of attaching identification information to each A-scan image so that each A-scan image group can be identified.
- In short, it is sufficient that processing can be executed for each A-scan image group in the subsequent processing. Therefore, a case where the ratio (Q) between the position detection interval and the scanning time interval is stored and the plurality of A-scan images are processed in units of Q in the subsequent processing is also included in “division”.
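- A trivially small sketch of this division into groups of Q (index bookkeeping only; each group is later paired with one detection of the fundus position):

```python
def group_a_scans(n_a_scans, q):
    """Split A-scan indices 0..n_a_scans-1 into consecutive groups of q."""
    return [list(range(start, min(start + q, n_a_scans)))
            for start in range(0, n_a_scans, q)]

print(group_a_scans(10, 3))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```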
- The position specifying unit 263 specifies the position of each A-scan image group based on the detection result of the position of the fundus oculi Ef obtained when the Q scanning points corresponding to that A-scan image group were scanned.
- This process will be described in more detail. As described above, the scanning point group and the detection result of the position of the fundus oculi Ef are associated with each A-scan image group, so the position specifying unit 263 refers to this association and specifies the detection result of the position of the fundus oculi Ef corresponding to each A-scan image group as the position of that A-scan image group. This process corresponds to specifying the actual irradiation position Tij shown in FIG. 7B.
- the positional deviation amount calculation unit 264 stores position information (scanning point position information) of each scanning point Rij corresponding to a preset scanning mode.
- the scanning point position information is represented by, for example, coordinate values defined by the xy coordinate system described above.
- the scanning point position information may be expressed by a coordinate value defined by a two-dimensional coordinate system having one of a plurality of scanning points Rij (for example, the first scanning point R11) as an origin.
- Alternatively, the xy coordinate value of any one of the plurality of scanning points Rij (for example, the first scanning point R11) and the intervals between adjacent scanning points (the interval in the x direction and the interval in the y direction) may be stored.
- the length of each scanning line, the interval between adjacent scanning lines, and the number of scanning points on each scanning line may be stored.
- the form of the scanning point position information is arbitrary as long as it uniquely defines the position of each scanning point.
- The positional deviation amount calculation unit 264 first acquires, from the scanning point position information, the position of each scanning point Rij corresponding to each A-scan image group (that is, the original position of each A-scan image). Next, for each A-scan image group, the positional deviation amount calculation unit 264 compares, for each scanning point Rij, the acquired position of the scanning point Rij with the actual irradiation position Tij specified by the position specifying unit 263. Thereby, the positional deviation amount of the irradiation position Tij with respect to the position of the scanning point Rij is obtained.
- the correcting unit 266 corrects the position of the A scan image in the fundus surface direction based on the positional shift amount calculated by the positional shift amount calculating unit 264.
- the correction unit 266 is an example of the “first correction unit” in the present invention.
- the positional shift amount in the fundus surface direction obtained by the positional shift amount calculation unit 264 corresponds to the positional shift amount of the irradiation position Tij with respect to the position of the scanning point Rij.
- The correction unit 266 corrects the position of each A-scan image so that the positional deviation amount corresponding to that A-scan image is canceled, that is, so that the actual irradiation position Tij is moved to the original scanning point Rij. Thereby, each actually acquired A-scan image can be arranged at its original position (the position of the scanning point Rij). This is the end of the description of the calculation processing of the positional deviation amount in the fundus surface direction.
- The calculation unit 262 calculates the positional deviation amounts of the plurality of A-scan images in the depth direction based on a one-dimensional image group (separate A-scan image group) formed from the detection result of the interference light LC composed of the reference light LR and the signal light LS scanned separately from the above scanning (the scanning for acquiring the plurality of A-scan images, referred to as the main scan).
- the separate A-scan image group includes a predetermined number of A-scan images arranged in the separate scan direction.
- the direction of this separate scan is different from the main scan. That is, it is assumed that the scanning lines connecting a predetermined number of scanning points in separate scanning intersect with the scanning lines in the main scanning.
- This fundus oculi observation device executes the above-described separate scan to form a separate A-scan image group, and further forms a tomographic image (reference tomographic image) based on this separate A-scan image group.
- The calculation unit 262 specifies the image region of the feature layer of the fundus oculi Ef in the reference tomographic image, and also specifies the image region of the feature layer in the tomographic image obtained by the main scan.
- Next, the calculation unit 262 calculates the displacement in the depth direction between the image region specified from the reference tomographic image and the image region specified from the tomographic image of the main scan. Further, the calculation unit 262 (positional deviation amount calculation unit 264) calculates the positional deviation amount in the depth direction of each A-scan image obtained by the main scan based on the calculated displacement, as in the first embodiment.
- the correction unit 266 corrects the position in the depth direction of the A-scan image obtained by the main scan based on the position shift amount in the depth direction calculated by the calculation unit 262. This process is executed such that the position in the depth direction of the A-scan image obtained in the main scan is moved in the depth direction so as to cancel out the positional deviation amount.
- the correcting unit 266 is an example of the “second correcting unit” in the present invention. This is the end of the description of the position shift amount calculation process in the depth direction.
- the scanning point specifying unit 265 operates when there is a still image (a frame of the observation image K) in which the target image region is not specified by the feature part specifying unit 261.
- The scanning point specifying unit 265 specifies, for each still image for which the image region was not specified, the scanning point of the A-scan image corresponding to that still image. This processing can easily be executed based on the facts that the aforementioned A-scan image group, scanning point group, and detection result of the position of the fundus oculi Ef are associated with one another, and that the detection result of the position of the fundus oculi Ef is associated with the still image (since the position detection was executed based on the still image).
- the scanning point specifying unit 265 is an example of the “scanning point specifying unit” of the present invention.
- the main control unit 211 controls the scan driving unit 70 to place the galvanometer mirrors 43 and 44 at positions corresponding to the specified scanning point. Further, the main control unit 211 turns on the observation light source 11 to acquire the observation image K, and controls the light source unit 101 to output the low coherence light L0. Thereby, the signal light LS is irradiated to the specified scanning point. If there are a plurality of specified scanning points, these scanning points are sequentially scanned with the signal light LS.
- the image forming unit 220 receives the detection result of the interference light LC composed of the signal light LS and the reference light LR from the CCD image sensor 120, and forms a new A scan image corresponding to the scanning point.
- The image processing unit 230 performs the above-described processing on the new A-scan image and the corresponding still image (the frame of the observation image K). Furthermore, the image forming unit 220 can form a tomographic image of the fundus oculi Ef based on the new A-scan image and the A-scan images already acquired at the other scanning points.
- According to the fundus oculi observation device described above, the position of the fundus oculi Ef is detected at predetermined time intervals while the signal light LS is scanned over the plurality of scanning points Rij, and the positional deviation amounts of the plurality of A-scan images in the fundus surface direction can be calculated based on the temporal variation of the detected position of the fundus oculi Ef. Furthermore, according to this fundus oculi observation device, it is possible to correct the positions of the plurality of A-scan images based on the calculated positional deviation amounts.
- Further, according to this fundus oculi observation device, when an image region of the characteristic part for acquiring the positional deviation amount of a certain A-scan image is not specified, the scanning point corresponding to that A-scan image is specified, and measurement at this scanning point can be executed again to form a new A-scan image.
- Thereby, the acquisition can be automatically performed again, so that a highly accurate OCT image can be acquired.
- the examination time can be shortened and the burden on the patient can be reduced.
- Further, the positional deviation amounts of the plurality of A-scan images in the depth direction of the fundus oculi Ef can be calculated based on the separate A-scan image group formed from the detection result of the interference light LC of the separately scanned signal light LS and the reference light LR.
- Modification 1 In this modification, the positional deviation amount calculation unit 264 acquires the detection result of the position of the fundus oculi Ef obtained when the Q scanning points corresponding to a first A-scan image group were scanned, and the detection result of the position of the fundus oculi Ef obtained when the Q scanning points corresponding to the next, second A-scan image group were scanned. Then, the positional deviation amount calculation unit 264 estimates the positional deviation amount of each A-scan image included in the first A-scan image group and/or the second A-scan image group based on these two detection results.
- For example, assume that the first scanning point group U1 corresponding to the first A-scan image group includes three scanning points Ri1 to Ri3, and that the second scanning point group U2 corresponding to the second A-scan image group includes three scanning points Ri4 to Ri6.
- the position of the fundus oculi Ef is detected when the first scanning points Ri1 and Ri4 are being scanned. That is, the image processing unit 230 sets each scanning point group on the basis of a still image (a frame of the observation image K) captured when the first scanning points Ri1 and Ri4 in the scanning point groups U1 and U2 are scanned. A positional shift amount corresponding to U1 and U2 is obtained. This process has been described above.
- Suppose that the positional deviation amount corresponding to the first scanning point group U1 is (Δx1, Δy1) and that the positional deviation amount corresponding to the second scanning point group U2 is (Δx2, Δy2).
- The positional deviation amount calculation unit 264 sets the positional deviation amounts of the first scanning points Ri1 and Ri4 as (Δx1, Δy1) and (Δx2, Δy2), respectively.
- Next, based on these positional deviation amounts (Δx1, Δy1) and (Δx2, Δy2), the positional deviation amount calculation unit 264 estimates the positional deviation amounts of the scanning points Ri2 and Ri3 sandwiched between the two scanning points Ri1 and Ri4 as follows.
- The moving speed of the fundus oculi Ef from when the scanning point Ri1 is scanned to when the scanning point Ri4 is scanned can be assumed to be constant. Further, the scanning points Ri2 and Ri3 whose positional deviation amounts are to be estimated are located at points that internally divide the line segment connecting the scanning points Ri1 and Ri4 at ratios of 1:2 and 2:1, respectively.
- In view of this, the positional deviation amount calculation unit 264 calculates ((Δx2 − Δx1)/3, (Δy2 − Δy1)/3) and sets this as the positional deviation amount corresponding to the scanning point Ri2. Similarly, the positional deviation amount calculation unit 264 calculates (2 × (Δx2 − Δx1)/3, 2 × (Δy2 − Δy1)/3) and sets this as the positional deviation amount corresponding to the scanning point Ri3. As a result, a positional deviation amount corresponding to each of the four scanning points Ri1 to Ri4 is obtained.
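- A sketch of this constant-velocity estimation with the internal-division arithmetic written out; the function and the sample numbers are hypothetical, and the returned values correspond to the (Δx2 − Δx1)/3 and 2 × (Δx2 − Δx1)/3 terms above.

```python
import numpy as np

def interpolate_deviations(dev_a, dev_b, n_between):
    """Deviation increments assigned to the scanning points lying between two
    points whose deviations (dev_a, dev_b) were measured directly, assuming
    the fundus moves at constant speed between them.
    n_between: number of scanning points strictly between the reference points.
    """
    dev_a = np.asarray(dev_a, dtype=float)
    dev_b = np.asarray(dev_b, dtype=float)
    step = (dev_b - dev_a) / (n_between + 1)
    return [(float(k * step[0]), float(k * step[1])) for k in range(1, n_between + 1)]

# Ri1 measured (Δx1, Δy1) = (0, 0); Ri4 measured (Δx2, Δy2) = (3, -6); Ri2, Ri3 lie between.
print(interpolate_deviations((0.0, 0.0), (3.0, -6.0), n_between=2))
# [(1.0, -2.0), (2.0, -4.0)] -> one third and two thirds of the measured change
```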
- the positional deviation amount calculation unit 264 sequentially obtains the positional deviation amounts corresponding to the respective scanning points Rij.
- In the above example, the positional deviation amounts corresponding to the scanning points Ri2 and Ri3 sandwiched between the first scanning points Ri1 and Ri4 of the respective scanning point groups U1 and U2 are estimated, but other scanning points may be used as references. Even in this case, it is possible to similarly estimate the positional deviation amount corresponding to each scanning point sandwiched between the two reference scanning points. For example, when the intermediate scanning points Ri2 and Ri5 are used as references, the positional deviation amounts corresponding to the scanning point Ri3 in the first scanning point group U1 and the scanning point Ri4 in the second scanning point group U2 are estimated. In addition, when the last scanning points Ri3 and Ri6 are used as references, the positional deviation amounts corresponding to the scanning points Ri4 and Ri5 in the second scanning point group U2 are estimated.
- the correction unit 266 can correct each position of the plurality of A-scan images based on the obtained positional deviation amount.
- According to this modification, the positional deviation amount can be obtained for each A-scan image while the position of the fundus oculi Ef is detected only once every Q scanning points. Therefore, even when the detection interval of the position of the fundus oculi Ef is limited, it is possible to obtain the positional deviation amount of each A-scan image. There is also an advantage that the scanning time interval can be set short.
- Modification 2 In the above embodiment, the position of the already formed A scan image is corrected based on the amount of positional deviation of the A scan image.
- In this modification, an invention will be described in which the scanning of the signal light LS is controlled in real time based on the positional deviation amount of the A-scan image.
- the image processing unit 230 sequentially calculates the amount of positional deviation based on the position of the fundus oculi Ef that is sequentially detected at predetermined time intervals. Detection of the position of the fundus oculi Ef can be performed in the same manner as in the above embodiment. Further, the processing for calculating each positional deviation amount is also performed in the same manner as in the above embodiment.
- the main control unit 211 controls the scan driving unit 70 based on the sequentially calculated positional deviation amounts, and corrects the irradiation position of the signal light LS on the fundus oculi Ef.
- the main control unit 211 is an example of the “control unit” in the present invention.
- the correction process of the irradiation position of the signal light LS will be described in more detail.
- the positions (mirror positions) of the galvanometer mirrors 43 and 44 with respect to each scanning point Rij are set in advance based on the scanning mode to be executed.
- the main control unit 211 controls the scan driving unit 70 to sequentially move the galvanometer mirrors 43 and 44 to the respective mirror positions in accordance with the scanning order of the scanning points Rij.
- However, if the eye E moves during measurement, the signal light LS is irradiated to a position deviated from the original scanning point Rij, that is, to a place deviated from the original measurement position.
- In the above embodiment, the positional deviation generated in this way is corrected by correcting the position of the already acquired A-scan image.
- In this modification, on the other hand, the irradiation position of the signal light LS is corrected according to the sequentially calculated positional deviation amount. That is, the main control unit 211 controls the scanning drive unit 70 so that the signal light LS is irradiated to a position displaced from the original position of the next scanning point Rij by the positional deviation amount.
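- A minimal sketch of this real-time offsetting of the next galvano target; the names are illustrative, and the sign convention assumes the deviation is measured as the current fundus position minus its reference position.

```python
def corrected_target(next_scan_point, latest_deviation):
    """Target handed to the galvano mirrors for the next scanning point,
    offset by the most recently calculated positional deviation so that the
    beam lands on the intended fundus site."""
    return (next_scan_point[0] + latest_deviation[0],
            next_scan_point[1] + latest_deviation[1])

print(corrected_target((1.5, 0.0), (0.02, -0.05)))  # (1.52, -0.05)
```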
- According to this modification, the irradiation position of the signal light LS can follow the movement of the eye E (fundus Ef) in real time, so that a highly accurate OCT image can be acquired even when the eye E moves during the scanning of the signal light LS.
- Modification 3 In this modification, the calculation unit 262 is provided with an image specifying unit 267.
- the image specifying unit 267 compares each positional shift amount calculated by the positional shift amount calculation unit 264 with a predetermined value. Then, the image specifying unit 267 specifies an A scan image whose positional deviation amount is equal to or greater than a predetermined value.
- the image specifying unit 267 is an example of the “image specifying unit” of the present invention.
- the main control unit 211 controls the light source unit 101 and the scan driving unit 70 to irradiate the signal light LS again toward the scanning point corresponding to the identified A scan image.
- When two or more A-scan images are specified, the main control unit 211 sequentially irradiates the signal light LS again toward the two or more scanning points corresponding to them.
- the image forming unit 220 forms a new A-scan image at this scanning point based on the detection result of the interference light LC between the re-irradiated signal light LS and the reference light LR.
- the image forming unit 220 can form a tomographic image based on the new A scan image and the A scan image corresponding to another scan point.
- According to this modification, the measurement of the scanning point corresponding to an A-scan image having a large positional deviation can be automatically performed again. Therefore, even if the eye E moves greatly during measurement, the measurement is automatically redone and a highly accurate OCT image can be acquired. In addition, even when the eye E blinks during measurement and it becomes impossible to calculate the positional deviation amount (in this case, the positional deviation amount is determined to be equal to or greater than the predetermined value), the scan is executed again and an OCT image is obtained.
- Modification 4 In this modification, an invention in which A-scan images are selectively arranged according to the positions of the scanning points will be described with reference to FIG.
- the calculation unit 262 is provided with an image selection unit 268.
- Based on the positional deviation amounts calculated by the positional deviation amount calculation unit 264, the image selection unit 268 selects, for each scanning point Rij, the A-scan image closest to the original position of that scanning point Rij from among the plurality of acquired A-scan images.
- the original position of each scanning point Rij is set in advance.
- the position of each A-scan image is obtained based on the position of the corresponding scanning point Rij and the calculated positional deviation amount. That is, the image selection unit 268 sets the position where the position of the scanning point Rij is displaced by the amount of positional deviation as the position of the A scan image.
- The image selection unit 268 selects, for each scanning point Rij, the A-scan image at the position closest to the original position. Note that when the positional deviation amount is sufficiently small, the A-scan image corresponding to the scanning point Rij itself is selected, but when the positional deviation amount is large, another A-scan image may be selected.
- the image selection unit 268 is an example of the “image selection unit” of the present invention.
- the image forming unit 220 forms a tomographic image by arranging the A scan images selected for each scanning point Rij according to the arrangement of a plurality of scanning points.
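- A sketch of this selection step, assuming the actual position of each acquired A-scan has already been computed as its planned position displaced by its deviation; the names are illustrative.

```python
import numpy as np

def select_nearest_a_scans(nominal_points, actual_positions):
    """For each planned scanning point, pick the index of the acquired A-scan
    whose actual acquisition position is closest to it.

    nominal_points:   (n, 2) planned (x, y) positions of the scanning points Rij
    actual_positions: (m, 2) actual (x, y) position of each acquired A-scan
    Returns a length-n list of indices into the acquired A-scans.
    """
    nominal_points = np.asarray(nominal_points, dtype=float)
    actual_positions = np.asarray(actual_positions, dtype=float)
    d2 = ((nominal_points[:, None, :] - actual_positions[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).tolist()

print(select_nearest_a_scans([[0, 0], [0, 1], [0, 2]],
                             [[0, 0.1], [0, 0.9], [0, 2.6]]))  # [0, 1, 2]
```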
- According to this modification, a tomographic image can be formed by selecting the A-scan image closest to each scanning point Rij, so that a highly accurate OCT image can be acquired without performing the scan again.
- Modification 5 There is a method of using the amount of positional deviation of the A scan image other than the position correction of the A scan image. In this modification, an example of a usage method other than position correction will be described.
- a positional deviation amount is used in a process of superimposing a plurality of tomographic images based on a plurality of scans performed along the same scanning line. This superimposing process is for improving the image quality.
- the signal light LS is scanned along a predetermined scanning line as described above.
- a scanning mode at this time for example, a radiation scan or a circular scan is applied.
- This fundus oculi observation device repeatedly scans the signal light LS along a predetermined scanning line.
- the image forming unit 220 repeatedly forms a plurality of A-scan images corresponding to a plurality of scanning points on the scanning line. Thereby, a tomographic image for each scan is obtained.
- the positional deviation amount calculation unit 264 repeatedly calculates the positional deviation amount of the A-scan image formed repeatedly.
- the calculation unit 262 is provided with a positional deviation amount determination unit 269 and an image superimposing unit 270.
- the positional deviation amount determination unit 269 determines whether or not each positional deviation amount that is repeatedly calculated by the positional deviation amount calculation unit 264 is included in a predetermined allowable range. As this permissible range, a range in which the positional deviation amount is smaller than a predetermined value is set in advance.
- the positional deviation amount determination unit 269 is an example of the “determination unit” of the present invention.
- the image superimposing unit 270 superimposes a set of A-scan images determined that the positional deviation amount is included in a predetermined allowable range. At this time, the image superimposing unit 270 forms a set of A scan images corresponding to each scanning point Rij, and superimposes the A scan images of each set.
- the image superimposing unit 270 is an example of the “image superimposing unit” of the present invention.
- the image forming unit 220 arranges a plurality of new A-scan images formed by the superimposition process according to the arrangement of the plurality of scanning points Rij. Thereby, a tomographic image along a predetermined scanning line is formed.
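- A sketch of this superimposition step for repeated scans of one scanning line; the data layout is assumed, the tolerance plays the role of the allowable range, and simple averaging stands in for the superimposing process.

```python
import numpy as np

def superimpose_a_scans(repeated_a_scans, deviations, tolerance):
    """Average, per scanning point, the A-scans from repeated passes whose
    positional deviation stays within the allowable range.

    repeated_a_scans: (n_passes, n_points, depth) A-scan stacks, one per pass
    deviations:       (n_passes, n_points) deviation magnitude per A-scan
    tolerance:        maximum deviation accepted for superimposition
    Returns (n_points, depth) averaged A-scans; if no pass is acceptable at a
    point, the A-scan with the smallest deviation is used as a fallback.
    """
    repeated_a_scans = np.asarray(repeated_a_scans, dtype=float)
    deviations = np.asarray(deviations, dtype=float)
    n_passes, n_points, depth = repeated_a_scans.shape
    out = np.empty((n_points, depth))
    for j in range(n_points):
        ok = deviations[:, j] <= tolerance
        if ok.any():
            out[j] = repeated_a_scans[ok, j, :].mean(axis=0)
        else:
            out[j] = repeated_a_scans[deviations[:, j].argmin(), j, :]
    return out
```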
- In the above embodiments, the position of the fundus oculi Ef is detected based on the observation image K, but the detection means of the present invention is not limited to this. Any configuration can be applied as long as the detection means can detect the position of the fundus oculi Ef at predetermined time intervals while the signal light LS is scanned.
- the configuration described in this document includes a confocal tracking reflectometer, a dither scanner, and tracking galvanometers.
- the tracking beam follows the feature points of the fundus.
- the confocal tracking reflectometer is used so that the movement of the eye to be examined can be determined by the reflected light of the beam irradiated on the fundus.
- The dither scanner is driven at a predetermined resonance frequency (8 kHz) with a phase difference of 90 degrees between the x and y scanners, so that the beam draws a circle.
- the detection signal includes a signal having the resonance frequency, and the phase is proportional to the distance between the beam and the target.
- Phase-sensitive detection using a lock-in amplifier generates an error signal, which is applied to a DSP feedback control loop. This control loop issues instructions to the tracking galvanometers in response to the processed error signal so that the image is locked against the movement of the eye to be examined.
- In the above embodiments, the position of the reference mirror 114 is changed to change the optical path length difference between the optical path of the signal light LS and the optical path of the reference light LR, but the method of changing the optical path length difference is not limited to this.
- the optical path length difference can be changed by moving the fundus camera unit 2 or the OCT unit 100 with respect to the eye E to change the optical path length of the signal light LS. It is also effective to change the optical path length difference by moving the measurement object in the depth direction (z direction), particularly when the measurement object is not a living body part.
- the computer program in the above embodiment can be stored in any recording medium that can be read by the drive device of the computer.
- As this recording medium, for example, an optical disk, a magneto-optical disk (CD-ROM, DVD-RAM, DVD-ROM, MO, etc.), or a magnetic storage medium (hard disk, floppy (registered trademark) disk, ZIP, etc.) can be used. The program can also be stored in a storage device such as a hard disk drive or memory. Furthermore, this program can be transmitted and received through a network such as the Internet or a LAN.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Veterinary Medicine (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Eye Examination Apparatus (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP10764219.1A EP2420181B1 (en) | 2009-04-15 | 2010-04-02 | Eyeground observation device |
| EP14001193.3A EP2752151B1 (en) | 2009-04-15 | 2010-04-02 | Fundus observation apparatus |
| US13/264,117 US8573776B2 (en) | 2009-04-15 | 2010-04-02 | Fundus observation apparatus |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2009099447 | 2009-04-15 | ||
| JP2009-099447 | 2009-04-15 | ||
| JP2009223312A JP5437755B2 (ja) | 2009-04-15 | 2009-09-28 | 眼底観察装置 |
| JP2009-223312 | 2009-09-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2010119632A1 true WO2010119632A1 (ja) | 2010-10-21 |
Family
ID=42982307
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2010/002424 Ceased WO2010119632A1 (ja) | 2009-04-15 | 2010-04-02 | 眼底観察装置 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US8573776B2 (enExample) |
| EP (2) | EP2752151B1 (enExample) |
| JP (1) | JP5437755B2 (enExample) |
| WO (1) | WO2010119632A1 (enExample) |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2012161382A (ja) * | 2011-02-03 | 2012-08-30 | Nidek Co Ltd | 眼科装置 |
| JP2012187228A (ja) * | 2011-03-10 | 2012-10-04 | Canon Inc | 撮像装置及び画像処理方法 |
| JP2012213449A (ja) * | 2011-03-31 | 2012-11-08 | Canon Inc | 医療システム |
| JP2012213448A (ja) * | 2011-03-31 | 2012-11-08 | Canon Inc | 眼科装置 |
| JP2014509544A (ja) * | 2011-03-30 | 2014-04-21 | カール ツアイス メディテック アクチエンゲゼルシャフト | 追跡を利用してヒト眼球の測定値を効率的に取得するためのシステムおよび方法 |
| JP2014140491A (ja) * | 2013-01-23 | 2014-08-07 | Nidek Co Ltd | 眼科撮影装置 |
| US20150173612A1 (en) * | 2011-03-10 | 2015-06-25 | Canon Kabushiki Kaisha | Photographing apparatus and photographing method |
| EP2638848A4 (en) * | 2010-11-09 | 2016-12-21 | Kk Topcon | BACKGROUND IMAGE PROCESSING APPARATUS AND BACKGROUND OBSERVATION DEVICE |
Families Citing this family (24)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7365856B2 (en) | 2005-01-21 | 2008-04-29 | Carl Zeiss Meditec, Inc. | Method of motion correction in optical coherence tomography imaging |
| US7805009B2 (en) | 2005-04-06 | 2010-09-28 | Carl Zeiss Meditec, Inc. | Method and apparatus for measuring motion of a subject using a series of partial images from an imaging system |
| JP5317830B2 (ja) * | 2009-05-22 | 2013-10-16 | キヤノン株式会社 | 眼底観察装置 |
| JP2012223428A (ja) * | 2011-04-21 | 2012-11-15 | Topcon Corp | 眼科装置 |
| WO2013004801A1 (en) | 2011-07-07 | 2013-01-10 | Carl Zeiss Meditec Ag | Improved data acquisition methods for reduced motion artifacts and applications in oct angiography |
| US9101294B2 (en) | 2012-01-19 | 2015-08-11 | Carl Zeiss Meditec, Inc. | Systems and methods for enhanced accuracy in OCT imaging of the cornea |
| EP2633802B1 (en) * | 2012-02-29 | 2021-08-11 | Nidek Co., Ltd. | Method for taking a tomographic image of an eye |
| JP6460618B2 (ja) * | 2013-01-31 | 2019-01-30 | キヤノン株式会社 | 光干渉断層撮像装置およびその制御方法 |
| JP5793156B2 (ja) * | 2013-03-01 | 2015-10-14 | キヤノン株式会社 | 眼科装置及びその制御方法 |
| JP6224908B2 (ja) * | 2013-04-17 | 2017-11-01 | キヤノン株式会社 | 撮像装置 |
| JP6402879B2 (ja) * | 2013-08-06 | 2018-10-10 | 株式会社ニデック | 眼科撮影装置 |
| EP2865323B1 (en) | 2013-10-23 | 2022-02-16 | Canon Kabushiki Kaisha | Retinal movement tracking in optical coherence tomography |
| JP6480104B2 (ja) * | 2014-03-11 | 2019-03-06 | 国立大学法人 筑波大学 | 光コヒーレンストモグラフィー装置及び光コヒーレンストモグラフィーによる変位測定方法 |
| JP6528932B2 (ja) * | 2014-12-26 | 2019-06-12 | 株式会社ニデック | 走査型レーザー検眼鏡 |
| CN104614834A (zh) * | 2015-02-04 | 2015-05-13 | 深圳市华星光电技术有限公司 | 曝光机自动更换滤波片装置及曝光机 |
| JP2016202453A (ja) * | 2015-04-20 | 2016-12-08 | 株式会社トプコン | 眼科手術用顕微鏡 |
| JP2017153543A (ja) | 2016-02-29 | 2017-09-07 | 株式会社トプコン | 眼科撮影装置 |
| JP2017158836A (ja) * | 2016-03-10 | 2017-09-14 | キヤノン株式会社 | 眼科装置および撮像方法 |
| US11452442B2 (en) * | 2016-06-15 | 2022-09-27 | Oregon Health & Science University | Systems and methods for automated widefield optical coherence tomography angiography |
| JP6776076B2 (ja) * | 2016-09-23 | 2020-10-28 | 株式会社トプコン | Oct装置 |
| JP6900651B2 (ja) * | 2016-10-27 | 2021-07-07 | 株式会社ニデック | Oct装置、およびoct制御プログラム |
| JP7013134B2 (ja) * | 2017-03-09 | 2022-01-31 | キヤノン株式会社 | 情報処理装置、情報処理方法及びプログラム |
| JP2019041841A (ja) | 2017-08-30 | 2019-03-22 | 株式会社トプコン | 眼科装置、及びその制御方法 |
| US20250017462A1 (en) * | 2021-12-15 | 2025-01-16 | Carl Zeiss Meditec, Inc. | System and method for assisting a subject with alignment to an ophthalmologic device |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7113818B2 (en) * | 2002-04-08 | 2006-09-26 | Oti Ophthalmic Technologies Inc. | Apparatus for high resolution imaging of moving organs |
| JP4884777B2 (ja) * | 2006-01-11 | 2012-02-29 | 株式会社トプコン | 眼底観察装置 |
| JP4869756B2 (ja) * | 2006-03-24 | 2012-02-08 | 株式会社トプコン | 眼底観察装置 |
| JP4869757B2 (ja) * | 2006-03-24 | 2012-02-08 | 株式会社トプコン | 眼底観察装置 |
| JP4864516B2 (ja) * | 2006-04-07 | 2012-02-01 | 株式会社トプコン | 眼科装置 |
- 2009-09-28 JP JP2009223312A patent/JP5437755B2/ja active Active
- 2010-04-02 EP EP14001193.3A patent/EP2752151B1/en active Active
- 2010-04-02 US US13/264,117 patent/US8573776B2/en active Active
- 2010-04-02 EP EP10764219.1A patent/EP2420181B1/en active Active
- 2010-04-02 WO PCT/JP2010/002424 patent/WO2010119632A1/ja not_active Ceased
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09276232A (ja) | 1996-04-12 | 1997-10-28 | Nikon Corp | 眼底カメラ |
| JPH11325849A (ja) | 1998-03-30 | 1999-11-26 | Carl Zeiss Jena Gmbh | スペクトル干渉に基づく光学・トモグラフィ―および光学表面プロファイル測定装置 |
| JP2002139421A (ja) | 2000-11-01 | 2002-05-17 | Fuji Photo Film Co Ltd | 光断層画像取得装置 |
| JP2006153838A (ja) | 2004-11-08 | 2006-06-15 | Topcon Corp | 光画像計測装置及び光画像計測方法 |
| JP2006212153A (ja) * | 2005-02-02 | 2006-08-17 | Nidek Co Ltd | 眼科撮影装置 |
| JP2007024677A (ja) | 2005-07-15 | 2007-02-01 | Sun Tec Kk | 光断層画像表示システム |
| JP2007130403A (ja) * | 2005-10-12 | 2007-05-31 | Topcon Corp | 光画像計測装置、光画像計測プログラム、眼底観察装置及び眼底観察プログラム |
| JP2008039651A (ja) * | 2006-08-09 | 2008-02-21 | Univ Of Tsukuba | 光断層画像の処理方法 |
| JP2008073099A (ja) | 2006-09-19 | 2008-04-03 | Topcon Corp | 眼底観察装置、眼底画像表示装置及び眼底観察プログラム |
| JP2008154939A (ja) * | 2006-12-26 | 2008-07-10 | Topcon Corp | 光画像計測装置及び光画像計測装置を制御するプログラム |
| JP2008267892A (ja) * | 2007-04-18 | 2008-11-06 | Topcon Corp | 光画像計測装置及びそれを制御するプログラム |
Non-Patent Citations (1)
| Title |
|---|
| DANIEL X. HAMMER: "Image stabilization for scanning laser ophthalmoscopy", OPTICS EXPRESS, vol. 10, no. 26, 30 December 2002 (2002-12-30), pages 1542 |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2638848A4 (en) * | 2010-11-09 | 2016-12-21 | Kk Topcon | BACKGROUND IMAGE PROCESSING APPARATUS AND BACKGROUND OBSERVATION DEVICE |
| JP2012161382A (ja) * | 2011-02-03 | 2012-08-30 | Nidek Co Ltd | 眼科装置 |
| JP2012187228A (ja) * | 2011-03-10 | 2012-10-04 | Canon Inc | 撮像装置及び画像処理方法 |
| US20150173612A1 (en) * | 2011-03-10 | 2015-06-25 | Canon Kabushiki Kaisha | Photographing apparatus and photographing method |
| US9687148B2 (en) * | 2011-03-10 | 2017-06-27 | Canon Kabushiki Kaisha | Photographing apparatus and photographing method |
| JP2014509544A (ja) * | 2011-03-30 | 2014-04-21 | カール ツアイス メディテック アクチエンゲゼルシャフト | 追跡を利用してヒト眼球の測定値を効率的に取得するためのシステムおよび方法 |
| US10092178B2 (en) | 2011-03-30 | 2018-10-09 | Carl Zeiss Meditec, Inc. | Systems and methods for efficiently obtaining measurements of the human eye using tracking |
| JP2012213449A (ja) * | 2011-03-31 | 2012-11-08 | Canon Inc | 医療システム |
| JP2012213448A (ja) * | 2011-03-31 | 2012-11-08 | Canon Inc | 眼科装置 |
| JP2014140491A (ja) * | 2013-01-23 | 2014-08-07 | Nidek Co Ltd | 眼科撮影装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20120033181A1 (en) | 2012-02-09 |
| EP2752151B1 (en) | 2019-12-18 |
| EP2420181A4 (en) | 2013-08-07 |
| JP2010264225A (ja) | 2010-11-25 |
| EP2420181B1 (en) | 2018-03-07 |
| EP2420181A1 (en) | 2012-02-22 |
| US8573776B2 (en) | 2013-11-05 |
| JP5437755B2 (ja) | 2014-03-12 |
| EP2752151A1 (en) | 2014-07-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5437755B2 (ja) | 眼底観察装置 | |
| JP5867719B2 (ja) | 光画像計測装置 | |
| JP5912358B2 (ja) | 眼底観察装置 | |
| JP5432625B2 (ja) | 眼科観察装置 | |
| JP5543171B2 (ja) | 光画像計測装置 | |
| JP5628636B2 (ja) | 眼底画像処理装置及び眼底観察装置 | |
| JP5916110B2 (ja) | 画像表示装置、画像表示方法、及びプログラム | |
| JP5706506B2 (ja) | 眼科装置 | |
| JP5941761B2 (ja) | 眼科撮影装置及び眼科画像処理装置 | |
| JP2011092290A (ja) | 眼科観察装置 | |
| JP2013176497A (ja) | 眼底観察装置及び眼底画像解析装置 | |
| JP5378157B2 (ja) | 眼科観察装置 | |
| JP2022176282A (ja) | 眼科装置、及びその制御方法 | |
| JP5514026B2 (ja) | 眼底画像処理装置及び眼底観察装置 | |
| JP5837143B2 (ja) | 眼科観察装置、その制御方法、及びプログラム | |
| JP6099782B2 (ja) | 眼科撮影装置 | |
| JP6021289B2 (ja) | 血流情報生成装置、血流情報生成方法、及びプログラム | |
| JP2020103405A (ja) | 眼科装置、及びその制御方法 | |
| JP6254729B2 (ja) | 眼科撮影装置 | |
| JP6106300B2 (ja) | 眼科撮影装置 | |
| JP6106299B2 (ja) | 眼科撮影装置及び眼科画像処理装置 | |
| JP6527970B2 (ja) | 眼科装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10764219 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2010764219 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13264117 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |