JP6503665B2 - Optical coherence tomography apparatus and program - Google Patents

Info

Publication number
JP6503665B2
Authority
JP
Japan
Prior art keywords
oct data
oct
segmentation
processing
example
Prior art date
Legal status
Active
Application number
JP2014186398A
Other languages
Japanese (ja)
Other versions
JP2016055122A (en)
JP2016055122A5 (en)
Inventor
幸弘 樋口
倫全 佐竹
Original Assignee
株式会社ニデック
Priority date
Filing date
Publication date
Family has litigation
Application filed by 株式会社ニデック
Priority to JP2014186398A
Publication of JP2016055122A
Publication of JP2016055122A5
Application granted
Publication of JP6503665B2
First worldwide family litigation filed: https://patents.darts-ip.com/?family=55756787&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=JP6503665(B2) ("Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.)
Legal status: Active
Anticipated expiration

Description

  The present disclosure relates to an optical coherence tomography apparatus for obtaining OCT data of an object.

  An optical coherence tomography (OCT) apparatus is known as an apparatus for capturing a tomographic image of a subject.

  Among such apparatuses, Fourier domain OCT is known, which obtains a tomographic image of an object by Fourier analysis of spectral information acquired by a light receiving element (see Patent Document 1). As Fourier domain OCT, SD-OCT, which has a spectroscopic optical system in the light receiving system, and SS-OCT, which has a wavelength variable light source in the light projecting system, are known.

  Incidentally, a tomographic image obtained by an interference optical system based on Fourier domain OCT has the highest sensitivity (interference sensitivity) at the depth position (zero delay position) where the optical path lengths of the measurement light and the reference light coincide, and the sensitivity decreases with distance from the zero delay position. As a result, an image with high sensitivity and high resolution can be obtained for a portion close to the zero delay position, but the sensitivity and resolution of the image decrease for a portion distant from the zero delay position.

  Patent Documents 1 to 3 describe modes (a retinal mode and a choroidal mode) for outputting to the monitor either a tomographic image (positive image) acquired in a state where the fundus is disposed behind the depth position at which the optical path lengths of the measurement light and the reference light coincide, or a tomographic image (reverse image) acquired in a state where the fundus is disposed in front of that depth position.

JP 2010-29648 A; JP 2007-215733 A; JP 2013-156229 A

  However, depending on the acquired tomographic image, a good analysis result may not be obtained, or sufficient analysis may not be performed on the acquired tomographic image.

  A technical object of the present disclosure is to solve at least one of the problems of the prior art.

  In order to solve the above-mentioned problem, the present disclosure is characterized by having the following configurations.

(1)
A Fourier domain OCT optical system for detecting an A scan signal due to interference between measurement light scanned by a scanning means on a subject and reference light corresponding to the measurement light;
Analysis processing means for acquiring OCT data based on the A-scan signal at each scanning position, performing segmentation processing on the acquired OCT data, and analyzing an object based on the result of the segmentation processing;
An optical coherence tomography apparatus comprising
wherein the analysis processing means changes the segmentation processing between the segmentation processing on first OCT data, which is OCT data acquired in a state where the surface of the object is disposed behind the zero delay position, and the segmentation processing on second OCT data, which is OCT data acquired in a state where the back surface of the object is disposed in front of the zero delay position.
(2)
An OCT data processing program executed in an OCT data processing apparatus for processing OCT data obtained by an optical coherence tomography apparatus comprising a Fourier domain OCT optical system, wherein,
By being executed by the processor of the OCT data processing apparatus,
An analysis processing step of performing segmentation processing on the OCT data and analyzing an object based on the result of the segmentation processing;
An analysis processing step of changing the segmentation processing between the segmentation processing on the first OCT data, which is OCT data acquired in a state where the surface of the object is disposed behind the zero delay position, and the segmentation processing on the second OCT data, which is OCT data acquired in a state where the back surface of the object is disposed in front of the zero delay position;
Are executed by the OCT data processing apparatus.

FIG. 1 is a block diagram showing an example of an optical coherence tomography device (hereinafter sometimes referred to as the present device) according to the present embodiment. FIG. 2 is a diagram showing an example of the OCT optical system according to the present embodiment. FIG. 3 is a diagram showing an example of the OCT data acquired (formed) by the OCT optical system. FIG. 4 is a diagram showing an example of the OCT data stored in the memory 72 as data for analysis.

  Hereinafter, one of the typical embodiments will be described with reference to the drawings. FIG. 1 is a block diagram showing an example of an optical coherence tomography apparatus (hereinafter sometimes referred to as the present apparatus) according to the present embodiment. As an example, the present apparatus 10 is applied to a fundus imaging apparatus that acquires a tomogram of the fundus of an eye to be examined.

First Embodiment
The OCT device 1 shown in FIG. 1 processes, for example, a detection signal acquired by the OCT optical system 100. The OCT optical system 100 obtains, for example, OCT data (for example, a fundus tomographic image) of the fundus oculi Ef of the eye to be examined E. The OCT optical system 100 is connected to, for example, the control unit 70.

  Next, an example of the OCT optical system 100 will be described based on FIG.

<OCT optical system>
The OCT optical system 100 has, for example, the device configuration of an optical coherence tomography (OCT) apparatus, and may be provided to obtain OCT data of the eye to be examined. As a more detailed example of the configuration, the OCT optical system 100 splits the light emitted from the light source 102 into measurement light and reference light by a light splitter (for example, a coupler or a circulator) 104. The OCT optical system 100 guides the measurement light to the fundus oculi Ef of the eye E by the measurement optical system 106, and guides the reference light to the reference optical system 110. The OCT optical system 100 causes the detector (light receiving element) 120 to receive interference light resulting from combination of the measurement light reflected by the fundus oculi Ef and the reference light. The detector 120 detects an interference signal between the measurement light and the reference light.

  In the case of Fourier domain OCT, the spectral intensity of the interfering light (spectral interference signal) is detected by the detector 120, and a complex OCT signal is obtained by Fourier transformation on the spectral intensity data. For example, an A-scan signal (eg, depth profile) is obtained by calculating the absolute value of the amplitude in the complex OCT signal. The measurement light is scanned by the light scanner 108 on the fundus. OCT data (for example, tomographic image data) is acquired by arranging A scan signals at each scanning position.
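
  As a rough illustration of this signal flow only (not the disclosed processing itself), the following Python/NumPy sketch turns spectral interference data into A-scan signals and arranges them into OCT data; the array shapes, the simple DC subtraction, and the assumption of data sampled evenly in wavenumber are illustrative assumptions.

```python
import numpy as np

def a_scan_from_spectrum(spectral_data: np.ndarray) -> np.ndarray:
    """One spectral interference signal (1-D, assumed evenly sampled in
    wavenumber) -> depth profile (A-scan signal)."""
    spectrum = spectral_data - spectral_data.mean()   # crude DC removal (assumption)
    complex_oct = np.fft.fft(spectrum)                # complex OCT signal
    return np.abs(complex_oct)                        # absolute value of the amplitude

def oct_data_from_spectra(spectra: np.ndarray) -> np.ndarray:
    """Arrange the A-scan signal at each scanning position into
    OCT data (tomographic image data), one column per scanning position."""
    return np.stack([a_scan_from_spectrum(s) for s in spectra], axis=1)

# Example: 512 scanning positions, 1024 spectral samples each.
spectra = np.random.rand(512, 1024)
oct_data = oct_data_from_spectra(spectra)   # shape: (1024 depth samples, 512 positions)
```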

  As the OCT optical system 100, a spectral domain type (SD-OCT) optical system may be used, or a swept source type (SS-OCT) optical system, which detects the spectrum of the interference light using a wavelength variable light source, may be used.

  In the case of SD-OCT, a low-coherence light source (broadband light source) is used as the light source 102, and the detector 120 is provided with a spectroscopic optical system (spectrometer) that disperses the interference light into its frequency components (wavelength components). The spectrometer comprises, for example, a diffraction grating and a line sensor.

  In the case of SS-OCT, a wavelength scanning light source (wavelength variable light source) that temporally changes the emission wavelength at high speed is used as the light source 102, and, for example, a single light receiving element is provided as the detector 120. The light source 102 includes, for example, a light source, a fiber ring resonator, and a wavelength selection filter. Examples of the wavelength selection filter include a combination of a diffraction grating and a polygon mirror, and a filter using a Fabry-Perot etalon.

  The light emitted from the light source 102 is divided by the light splitter 104 into measurement light and reference light. The measurement light may be emitted into the air after passing through the optical fiber. The measurement light may be focused on the fundus oculi Ef via the optical scanner 108 and other optical members of the measurement optical system 106. The light reflected by the fundus oculi Ef may be returned to the optical fiber through a similar optical path.

  The optical scanner 108 may scan the measurement light in the transverse direction on the fundus, or may two-dimensionally scan the measurement light on the fundus. The light scanner 108 may be disposed at a position substantially conjugate to the pupil. The light scanner 108 is, for example, two galvanometer mirrors, and the reflection angle thereof may be arbitrarily adjusted by the drive mechanism 50. As the optical scanner 108, for example, in addition to reflective mirrors (a galvano mirror, a polygon mirror, or a resonant scanner), an acousto-optic device (AOM) or the like that changes the traveling (deflection) direction of light may be used.

  The reference optical system 110 generates reference light to be combined with the reflected light obtained by reflection of the measurement light at the fundus oculi Ef. The reference optical system 110 may be of a Michelson type or a Mach-Zehnder type. The reference optical system 110 may be formed by, for example, a reflection optical system (for example, a reference mirror), in which case the light from the light splitter 104 is reflected by the reflection optical system so as to return to the light splitter 104 and be guided to the detector 120. As another example, the reference optical system 110 may be formed by a transmission optical system (for example, an optical fiber), in which case the light from the light splitter 104 is guided to the detector 120 by being transmitted without being returned.

  The reference optical system 110 may include, for example, a drive unit 112 for changing an optical path length difference between the measurement light and the reference light by moving an optical member in the reference light path. For example, the reference mirror is moved in the optical axis direction. A configuration for changing the optical path length difference may be disposed in the measurement optical path of the measurement optical system 106. That is, the apparatus of the present embodiment may be provided with a structure for changing the optical path length of at least one of the measurement light and the reference light.

<Front observation optical system>
The front observation optical system 200 may be provided to obtain a front image of the fundus oculi Ef. The observation optical system 200 may have, for example, an apparatus configuration of a so-called ophthalmic scanning laser ophthalmoscope (SLO). The configuration of the observation optical system 200 may be a so-called fundus camera type configuration. Further, the OCT optical system 100 may double as the observation optical system 200.

<Fixation target projection unit>
The fixation target projection unit 300 may have an optical system for guiding the viewing direction of the eye E. The fixation target projection unit 300 has a fixation target presented to the eye E, and can guide the eye E in a plurality of directions.

<Control system>
The control unit 70 may include a CPU (processor), a RAM, a ROM, and the like. More specifically, the CPU of the control unit 70 controls each member of each configuration, that is, the entire apparatus (the OCT device 1 and the OCT optical system 100). The RAM temporarily stores various information. The ROM of the control unit 70 stores various programs for controlling the operation of the entire apparatus, initial values, and the like. The control unit 70 may be configured by a plurality of control units (that is, a plurality of processors).

  A nonvolatile memory (hereinafter, memory) 72 as an example of a storage unit, an operation unit (control unit) 76, a display unit (monitor) 75, and the like may be electrically connected to the control unit 70. The memory 72 may be a non-transitory storage medium capable of retaining stored contents even when the supply of power is shut off. For example, a hard disk drive, a flash ROM, or a USB memory detachably attached to the OCT device 1 may be used as the memory 72.

  The memory 72 may store an analysis processing program for analyzing the eye to be examined based on the OCT data obtained by the OCT device 1. Further, the memory 72 may store an imaging control program for controlling imaging of a front image and a tomographic image by the OCT optical system 100. The memory 72 may also store various information related to imaging, such as OCT data, a fundus front image, and information on the imaging position of a tomographic image. Various operation instructions from the examiner may be input to the operation unit 76.

  The operation unit 76 may output a signal corresponding to the input operation instruction to the control unit 70. For the operation unit 76, for example, at least one of a mouse, a joystick, a keyboard, and a touch panel may be used.

  The display unit 75 may be a display mounted on the apparatus main body, or may be a display connected to the main body. A display of a personal computer (hereinafter referred to as "PC") may be used. Multiple displays may be used together. In addition, the display unit 75 may be a touch panel. When the display unit 75 is a touch panel, the display unit 75 functions as an operation unit. The display unit 75 displays OCT data acquired by the OCT optical system 100, front image data, and the like.

  FIG. 3 is a diagram showing an example of OCT data acquired (formed) by the OCT optical system. The zero delay position S is a position corresponding to the optical path length of the reference light in the OCT data, and corresponds to the position where the optical path lengths of the measurement light and the reference light coincide. The OCT data is formed of a first image area G1 corresponding to the rear side of the zero delay position S and a second image area G2 corresponding to the front side of the zero delay position S. The first image area G1 and the second image area G2 have a symmetrical relationship with respect to the zero delay position S.

  FIG. 3 shows an example of OCT data in the case where dispersion correction processing is performed by software. That is, the control unit 70 may perform dispersion correction processing by software on the spectrum data output from the detector 120, and may obtain an A-scan signal based on the spectrum data after dispersion correction. This results in differences in image quality between the real image and the virtual image (for details, see, for example, JP 2012-223264 A). The dispersion correction is not limited to software-based correction; optical dispersion correction may be used. In this case, there is no difference in image quality between the real image and the virtual image.

  FIG. 3A is an example of OCT data when a normal image with high sensitivity on the retina side is acquired. In the above-described configuration, when the optical path length difference is adjusted such that the retina surface of the eye E is disposed behind the zero delay position S, the first OCT data is acquired. The first OCT data includes, for example, a fundus tomographic image (normal image) in which the sensitivity on the retinal surface Rt side is higher than that on the choroid Ch side.

  In this case, the tomographic images formed in the first image area G1 and the second image area G2 face each other. A real image R is formed in the first image area G1, and a virtual image M (mirror image) is formed in the second image area G2.

  FIG. 3 (b) is an example of OCT data when a reverse image with high sensitivity on the choroid side is obtained. The second OCT data is acquired when the optical path length difference is adjusted such that the back surface of the choroid is located forward of the zero delay position. The second OCT data includes, for example, a fundus tomographic image (reverse image) in which the sensitivity on the choroidal Ch side is higher than that on the retinal surface Rt side.

  In this case, the tomographic images formed in the first image area G1 and the second image area G2 are in the state in which they face in opposite directions. In this case, a real image R is formed in the second image area G2, and a virtual image M (mirror image) is included in the first image area G1.

  In the example of FIG. 3, the first OCT data and the second OCT data differ in whether the acquired fundus image is a normal image or a reverse image, depending on whether the fundus is located on the far side or the near side with respect to the zero delay position S.

  The control unit 70, for example, extracts image information of either one of the first image region G1 or the second image region G2 in the OCT data, and displays the image information on the screen of the display unit 75. For example, the control unit 70 may cut out the image area from the OCT data, or may create an image again from the luminance information corresponding to the image area.
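
  The extraction of one image area could, for illustration only, look like the following sketch; the assumption that the zero delay position S lies at the center row of the OCT data array, and that the first/second image areas are simply its lower/upper halves, is made here for the example and is not taken from the disclosure.

```python
import numpy as np

def extract_image_area(oct_data: np.ndarray, area: str) -> np.ndarray:
    """Cut out the first image area G1 (rear side of the zero delay position S)
    or the second image area G2 (front side of S) from the OCT data."""
    zero_delay_row = oct_data.shape[0] // 2   # assumed position of S in the array
    if area == "G1":
        return oct_data[zero_delay_row:, :].copy()   # rear side of S
    if area == "G2":
        return oct_data[:zero_delay_row, :].copy()   # front side of S
    raise ValueError("area must be 'G1' or 'G2'")
```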

<Description of operation>
Next, an example of the operation of the apparatus according to the present embodiment will be described. The examiner instructs the subject to gaze at the fixation target, and then performs alignment on the fundus. When the fundus front image is displayed on the display unit 75, OCT data is acquired by the OCT optical system 100 based on a preset scanning pattern, and an OCT image is displayed on the display unit 75.

  The control unit 70 controls the driving of the drive unit 112 based on the detection signal output from the detector 120, and adjusts the optical path length difference between the measurement light and the reference light so that a fundus tomographic image is obtained. The retina mode is set as an initial setting, and the control unit 70 adjusts the optical path length such that the retina Rt is disposed, for example, behind the zero delay position S. In this case, the first OCT data is acquired. On the other hand, when observing the fundus tomogram in the choroid mode, the control unit 70 adjusts the optical path length difference between the measurement light and the reference light such that the choroid Ch is disposed on the front side of the zero delay position S. In this case, the second OCT data is acquired.

<Storage of Tomographic Image>
  In a state where the first OCT data or the second OCT data is displayed as a moving image, a scanning position and pattern desired by the examiner are set. Thereafter, a predetermined trigger signal is output automatically or manually. Using this as a trigger, the control unit 70 controls the optical scanner 108 based on the set imaging conditions (for example, the scanning position and pattern). The control unit 70 acquires OCT data as a still image based on the output signal from the detector 120 at each scanning position, and stores the acquired OCT data in the memory 72. In this case, the control unit 70 may store, in the memory 72, discrimination information as to whether the OCT data is the first OCT data or the second OCT data in association with the stored OCT data. The discrimination information can be obtained from the mode switching signal between the retinal mode and the choroidal mode, or from the acquisition position of the real image.
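
  For illustration only, the association between stored OCT data and its discrimination information might be expressed as in the following sketch; the record fields and the use of the mode switching signal as a boolean flag are hypothetical, not the stored format of the device.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StoredOctData:
    data: np.ndarray      # OCT data acquired as a still image
    is_first_oct: bool    # True: first OCT data (retinal mode), False: second OCT data (choroidal mode)
    scan_pattern: str     # imaging condition set when the trigger signal was issued

def store_oct_data(memory: list, data: np.ndarray, mode: str, scan_pattern: str) -> None:
    """Store OCT data together with discrimination information derived, for example,
    from the retinal/choroidal mode switching signal."""
    memory.append(StoredOctData(data=data,
                                is_first_oct=(mode == "retinal"),
                                scan_pattern=scan_pattern))
```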

  When storing the OCT data in the memory 72, for example, the control unit 70 may extract image information of either one of the first image area G1 or the second image area G2 in the OCT data, and store only that image information in the memory 72. For example, for the first OCT data, the OCT data corresponding to the first image area G1 may be stored in the memory 72 (see FIG. 4A), and for the second OCT data, the OCT data corresponding to the second image area G2 may be stored in the memory 72 (see FIG. 4B).

<Segmentation>
In the present embodiment, when analysis is performed, the control unit 70 may perform segmentation processing on the OCT data stored in the memory 72 and analyze the fundus oculi Ef based on the result of the segmentation processing. In this case, the control unit 70 is used as an example of an analysis processing unit. The segmentation processing is used, for example, as image processing for performing feature extraction in OCT data as preparation for obtaining an analysis result of the fundus.

  For example, the control unit 70 may detect (extract) at least one boundary among a plurality of layer boundaries as the segmentation processing. More specifically, the control unit 70 may detect (extract), by the segmentation processing, a layer boundary corresponding to a specific layer (for example, the nerve fiber layer (NFL), the ganglion cell layer (GCL), the retinal pigment epithelium (RPE), or the choroid (Choroid)). When a layer boundary corresponding to a specific layer is detected, the detection method is set based on the anatomical position of the specific layer, the order of the layers, the luminance level in the A-scan signal, and the like. For example, edge detection is used for the segmentation.
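
  As a simple illustration of such edge-based detection along each A-scan (the smoothing width and the rule of taking the strongest positive gradient are assumptions made for the example; the actual detection method is set from the anatomical position, the order of the layers, and the luminance level as described above):

```python
import numpy as np

def detect_layer_boundary(oct_data: np.ndarray, smooth: int = 5) -> np.ndarray:
    """Detect one layer boundary per A-scan (column) of the OCT data by simple
    edge detection: the depth with the strongest positive intensity gradient
    is taken as the boundary position."""
    kernel = np.ones(smooth) / smooth
    boundary = np.empty(oct_data.shape[1], dtype=int)
    for x in range(oct_data.shape[1]):
        a_scan = np.convolve(oct_data[:, x], kernel, mode="same")  # light denoising
        gradient = np.diff(a_scan)                                 # edge strength
        boundary[x] = int(np.argmax(gradient))                     # strongest edge
    return boundary   # one boundary depth index per scanning position
```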

<Change of segmentation process>
The control unit 70 may change at least a part of the segmentation process on the OCT data between the first OCT data and the second OCT data in which the positional relationship of the fundus with respect to the zero delay position S is different from each other. Here, the first OCT data is, for example, OCT data acquired in a state in which the surface of the fundus oculi is placed behind the zero delay position S. Further, the second OCT data is, for example, OCT data acquired in a state in which the back surface of the fundus is located on the front side of the zero delay position S.

  In the present embodiment, any of the first OCT data and the second OCT data includes OCT data related to the same region (for example, the fundus) of the eye to be examined. In the following description, the fundus surface (retina surface layer) side is defined as the upper direction (front side) and the fundus back surface (choroidal layer) side is defined as the downward direction (rear side) in the vertical direction in the OCT data.

  Here, for example, the control unit 70 may change the segmentation area. As one example, the control unit 70 may change the segmentation area in consideration of the difference in the noise position with respect to the fundus tomographic image between the first OCT data and the second OCT data. In this case, an area to be excluded from the segmentation process in OCT data may be changed.

  FIG. 4 is a view showing an example of OCT data stored in the memory as analysis data. In the acquired OCT data, in addition to the OCT data of the subject's eye such as a fundus tomographic image, a noise component NZ caused by the device may occur. Such a noise component is noise that is not removed by noise removal processing such as DC subtraction and remains in the OCT data. Possible causes of such noise include, for example, fluctuation of the light from the light source 102, vibration of the device, reflection from an internal lens, and light from external illumination.

  The noise component NZ often has a fixed periodicity, and is formed, for example, at a predetermined position relative to the zero delay position. As a result, the noise component NZ generated on the first OCT data is formed at a position separated by a predetermined distance D from the upper end (that is, the surface layer side end of the retina) of the OCT data. That is, the noise component NZ is generated above the OCT data of the fundus of the eye to be examined. The noise component NZ generated on the second OCT data is formed at a position separated by a predetermined distance D from the lower end (that is, the choroidal end) of the OCT data. That is, the noise component NZ is generated below the OCT data of the fundus of the eye to be examined.

  In the present embodiment, the region where the segmentation process is performed is changed in consideration of the difference in the generation position of the noise component NZ with respect to the OCT data of the fundus of the eye to be examined. In FIG. 4, a segmentation execution area (hereinafter, execution area) SG is an area where segmentation is performed on the OCT data. When the execution area SG is set, for example, the coordinate positions in the depth direction from the start point SG1 at which the segmentation starts to the end point SG2 at which the segmentation ends may be stored in the memory 72 in advance. As a result, segmentation is performed from the start point SG1 to the end point SG2 in each A-scan signal, and no segmentation is performed on other areas.

  When segmentation is performed on the first OCT data, for example, the start point SG1 is set below the position separated by the predetermined distance D from the upper end of the OCT data (that is, the retinal surface side end), and the lower end of the OCT data (that is, the choroid side end) is set as the end point SG2 (see FIG. 3A). As a result, segmentation is performed so as to exclude the noise component NZ included in the first OCT data.

  On the other hand, when segmentation is performed on the second OCT data, for example, the upper end of the OCT data (that is, the retinal surface side end) is set as the start point SG1, and the end point SG2 is set above the position separated by the predetermined distance D from the lower end of the OCT data (that is, the choroid side end). As a result, segmentation is performed so as to exclude the noise component NZ included in the second OCT data.
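
  A minimal sketch of how the execution area SG could be switched between the first and second OCT data is shown below; the pixel distance D, the array orientation (upper end at row 0), and the simple gradient-based segmentation inside the area are assumptions used only for illustration.

```python
import numpy as np

def segmentation_area(depth: int, is_first_oct: bool, d_pixels: int):
    """Return the start point SG1 and end point SG2 (depth indices) so that the
    noise component NZ, lying at a fixed distance D from the upper end (first
    OCT data) or from the lower end (second OCT data), is excluded."""
    if is_first_oct:
        return d_pixels, depth          # skip the noise band near the upper end
    return 0, depth - d_pixels          # skip the noise band near the lower end

def segment_within_area(oct_data: np.ndarray, is_first_oct: bool, d_pixels: int) -> np.ndarray:
    """Run a simple edge-based segmentation only between SG1 and SG2."""
    sg1, sg2 = segmentation_area(oct_data.shape[0], is_first_oct, d_pixels)
    restricted = oct_data[sg1:sg2, :]
    gradient = np.diff(restricted, axis=0)
    return sg1 + np.argmax(gradient, axis=0)   # boundary depth per A-scan; other areas ignored
```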

  As described above, by performing segmentation so as to exclude the noise component NZ in accordance with the position at which the noise component NZ occurs relative to the OCT data of the fundus of the eye to be examined, the influence of the noise can be appropriately mitigated regardless of the characteristics of the OCT data, and the subsequent analysis can be performed well.

<Analysis based on segmentation results>
In the case of the first OCT data, the surface side of the fundus (for example, the retina) is clearly formed. As a result of the segmentation, the control unit 70 may acquire layer boundary information for each layer of the retina (for example, from the retinal surface layer to the retinal pigment epithelium layer) and calculate thickness information of each layer based on the acquired layer boundary information. When the choroid can be segmented, the thickness of the choroid may be determined in addition to the measurement of the thickness of the retinal layer.

  In the case of the second OCT data, the back surface side of the fundus (for example, the choroidal portion) is clearly formed. As a result of the segmentation, the control unit 70 may acquire layer boundary information in the choroidal layer, and may calculate thickness information of the choroidal layer based on the acquired layer boundary information. If the layers of the retina can be segmented, the thickness of the layers of the retina may be determined in addition to the measurement of the thickness of the choroid layer.
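
  For illustration, thickness information could be computed from two detected boundaries as in the following sketch; the boundary arrays, the axial pixel pitch value, and the example numbers are assumptions, not values from the disclosure.

```python
import numpy as np

def layer_thickness_um(upper_boundary: np.ndarray,
                       lower_boundary: np.ndarray,
                       axial_pitch_um: float) -> np.ndarray:
    """Thickness profile of one layer along the scan, in micrometres, from the
    upper and lower boundary depth indices obtained by the segmentation."""
    return (lower_boundary - upper_boundary) * axial_pitch_um

# Example: retinal thickness from the retinal surface and RPE boundaries,
# choroidal thickness from the RPE and choroid/sclera boundaries.
surface = np.array([100, 102, 101])
rpe     = np.array([240, 242, 243])
sclera  = np.array([310, 318, 322])
retinal_thickness   = layer_thickness_um(surface, rpe, axial_pitch_um=3.5)
choroidal_thickness = layer_thickness_um(rpe, sclera, axial_pitch_um=3.5)
```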

  The fundus analysis based on the result of the segmentation process is not limited to the above. For example, size measurement, lesion detection, and the like may be performed on a characteristic site of the fundus (for example, the macula or the optic papilla). In addition, the control unit 70 may obtain En-face images (OCT front images) of each layer using the segmentation result.

  Of course, the OCT data may be three-dimensional OCT data. Three-dimensional OCT data is acquired, for example, by two-dimensional scanning (for example, raster scan) of measurement light. Also, an analysis map may be obtained as a result of segmentation on the three-dimensional OCT data. As the analysis map, for example, a retinal thickness map, a choroidal thickness map, or the like can be considered.

  The control unit 70 may display the analysis result acquired as described above on the display unit 75.

<Modification example>
In the above embodiment, the retinal surface layer side is set as the start point SG1. However, the present invention is not limited to this, and the choroid side may be set as the start point SG1. Note that the execution area SG may be set at the manufacturing stage of the device. In addition, the position of the execution area SG may be set for each device in consideration of individual differences among devices.

  In the above example, the region where segmentation is performed is changed according to the generation position of the noise component NZ with respect to the OCT data of the eye to be examined, but the method of changing the segmentation region is not limited to this. For example, the present embodiment can also be applied to the case of extracting OCT data on which segmentation processing is to be performed and performing segmentation on the extracted OCT data. For example, the control unit 70 may change the extraction area for performing the segmentation process according to the generation position of the noise component NZ. In addition, the control unit 70 may exclude the noise component from the segmentation processing in advance by storing the OCT data in the memory 72 such that the noise component NZ is excluded. In this case, the storage area on the OCT data may be set in advance such that the noise component NZ is excluded.

  In the above description, although the image area stored in the memory 72 is changed between the first OCT data and the second OCT data, the present invention is not limited to this. The image areas stored in the memory 72 may be identical. Such a storage method may be used when obtaining OCT data by optical dispersion correction, or when switching dispersion correction data between the first OCT data and the second OCT data. In this case, the OCT data corrected by the software dispersion correction process may be switched between a real image and a virtual image (for details, refer to JP 2012-223264 A). Even in such a case, since the upper and lower positions of the noise with respect to the eye image to be examined are different between the first OCT data and the second OCT data, the application of the above embodiment is possible.

  When displaying a part of the OCT data based on the segmentation result, the control unit 70 may change the display area of the OCT data between the first OCT data and the second OCT data.

  For example, when displaying a three-dimensional graphic image on the display unit 75 based on three-dimensional OCT data, the control unit 70 may change the area displayed as the three-dimensional graphic image based on the layer boundary detected by the segmentation. More specifically, when displaying the first three-dimensional OCT data, the control unit 70 may remove the area above the retinal surface layer and display the graphic image for the area below the retinal surface layer. When displaying the second three-dimensional OCT data, the control unit 70 may remove the area below the choroid layer and display the graphic image for the area above the choroid layer. As a result, unnecessary noise information in the three-dimensional graphic image is suitably reduced.
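
  Such region removal before rendering could, for illustration only, be sketched as follows; the (depth, x, y) array layout, the zero-masking approach, and the boundary inputs are assumptions made for the example.

```python
import numpy as np

def mask_for_display(volume: np.ndarray, boundary: np.ndarray, is_first_oct: bool) -> np.ndarray:
    """Zero out the region that should not appear in the three-dimensional graphic image.

    volume:   three-dimensional OCT data, shape (depth, x, y)
    boundary: segmented boundary depth for each (x, y) position; assumed to be the
              retinal surface for the first OCT data and the lower choroid boundary
              for the second OCT data
    """
    z = np.arange(volume.shape[0])[:, None, None]     # depth index grid
    masked = volume.copy()
    if is_first_oct:
        masked[z < boundary[None, :, :]] = 0          # remove the area above the retinal surface
    else:
        masked[z > boundary[None, :, :]] = 0          # remove the area below the choroid layer
    return masked
```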

  Note that the processing of the present embodiment may be performed on full-range OCT data in which either a real image or a virtual image is removed by image processing.

  In the above description, at least a part of the segmentation processing for the OCT data (for example, the segmentation region) was changed between the first OCT data and the second OCT data, in which the positional relationship of the fundus with respect to the zero delay position differs, but the present embodiment is not limited to this. In the present embodiment, at least a part of the segmentation processing may be changed in accordance with a difference in the conditions under which the OCT data is acquired by the optical coherence tomography device.

  As a first example, the control unit 70 may change the segmentation processing according to a difference in the imaging method of the optical coherence tomography apparatus. For example, when the first OCT data is OCT data acquired by spectral domain OCT (SD-OCT) and the second OCT data is OCT data acquired by swept source OCT (SS-OCT), the brightness of the acquired images may differ. Possible causes include differences in the imaging principle, differences in the light source wavelength, and the like. Therefore, the control unit 70 may change, for example, the threshold used for segmentation between the first OCT data and the second OCT data. Thus, proper segmentation is performed. Such processing is also applicable to the first embodiment.
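
  Such a change of the segmentation threshold according to the acquisition condition could, purely as an illustration, be expressed as in the following sketch; the threshold values and the lookup-table approach are placeholders and are not taken from the disclosure.

```python
# Hypothetical thresholds used when binarizing or edge-detecting OCT data
# before segmentation; the values are placeholders only.
SEGMENTATION_THRESHOLD = {
    "SD-OCT": 0.35,   # e.g. first OCT data in this example
    "SS-OCT": 0.25,   # e.g. second OCT data in this example
}

def threshold_for(imaging_method: str) -> float:
    """Select the segmentation threshold according to the imaging method (or,
    analogously, the exposure time) under which the OCT data was acquired."""
    return SEGMENTATION_THRESHOLD[imaging_method]
```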

  In addition, the appearance image may be different due to the difference in sensitivity in the depth direction. The appearance image means an image appearing as an eye image in the OCT data, and the difference in appearance image means a difference in eye data appearing in the OCT data according to a difference in acquisition condition. Here, the appearance image may be defined, for example, as an image imaged to the extent that segmentation can be performed.

  Therefore, the control unit 70 may change the segmentation region between the first OCT data and the second OCT data. That is, in the present embodiment, the segmentation area may be changed according to the difference in the appearance image in the OCT data. This enables appropriate analysis processing. Such processing is also applicable to the first embodiment.

  As a second example, the control unit 70 may change the segmentation processing in accordance with a difference in the exposure time of the light receiving element when acquiring the OCT data. For example, when the first OCT data is OCT data acquired at a first exposure time and the second OCT data is OCT data acquired at a second exposure time, the brightness of the images differs. Therefore, the control unit 70 may change the threshold used for segmentation between the first OCT data and the second OCT data.

  In the above description, the case of acquiring a morphological tomographic image of the eye to be examined has been described. However, the control unit 70 may acquire a blood flow measurement image (Doppler OCT image) by measuring changes (for example, phase changes or intensity changes) between signals in a plurality of A-scan signals. In addition, the control unit 70 may acquire an image indicating the polarization characteristics of the eye to be examined (polarization OCT image) by measuring polarization components (S polarization and P polarization) in a plurality of A-scan signals. That is, the present embodiment is also applicable to OCT modalities such as Doppler OCT and polarization sensitive OCT.

  Further, in the above description, an apparatus for obtaining OCT data of the fundus has been described as an example, but the present invention is not limited to this. For example, the present embodiment may also be applied to an apparatus for obtaining OCT data of the anterior segment of the eye to be examined.

  Further, the application of the present embodiment is not limited to an ophthalmologic imaging apparatus; it is also applicable to an OCT apparatus for acquiring a tomogram of a part of a living body other than the eye (for example, skin or a blood vessel) or of a sample other than a living body.

1 OCT Device 70 Control Unit 100 OCT Optical System S Zero Delay Position NZ Noise SG Segmentation Execution Area

Claims (3)

  1. A Fourier domain OCT optical system for detecting an A scan signal due to interference between measurement light scanned by a scanning means on a subject and reference light corresponding to the measurement light;
    Analysis processing means for acquiring OCT data based on the A-scan signal at each scanning position, performing segmentation processing on the acquired OCT data, and analyzing an object based on the result of the segmentation processing;
    An optical coherence tomography apparatus comprising
    wherein the analysis processing means changes the segmentation processing between the segmentation processing on the first OCT data, which is OCT data acquired in a state where the surface of the object is disposed behind the zero delay position, and the segmentation processing on the second OCT data, which is OCT data acquired in a state where the back surface of the object is disposed in front of the zero delay position.
  2. The optical coherence tomography apparatus according to claim 1, wherein the analysis processing means changes the region excluded from the segmentation processing in consideration of the difference in noise position with respect to the image of the object between the first OCT data and the second OCT data.
  3. An OCT data processing program executed in an OCT data processing apparatus for processing OCT data obtained by an optical coherence tomography apparatus comprising a Fourier domain OCT optical system, wherein,
    By being executed by the processor of the OCT data processing apparatus,
    An analysis processing step of performing segmentation processing on the OCT data and analyzing an object based on the result of the segmentation processing;
    An analysis processing step of changing the segmentation processing between the segmentation processing on the first OCT data, which is OCT data acquired in a state where the surface of the object is disposed behind the zero delay position, and the segmentation processing on the second OCT data, which is OCT data acquired in a state where the back surface of the object is disposed in front of the zero delay position;
    Are executed by the OCT data processing apparatus.
JP2014186398A 2014-09-12 2014-09-12 Optical coherence tomography apparatus and program Active JP6503665B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014186398A JP6503665B2 (en) 2014-09-12 2014-09-12 Optical coherence tomography apparatus and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2014186398A JP6503665B2 (en) 2014-09-12 2014-09-12 Optical coherence tomography apparatus and program

Publications (3)

Publication Number Publication Date
JP2016055122A JP2016055122A (en) 2016-04-21
JP2016055122A5 JP2016055122A5 (en) 2017-10-19
JP6503665B2 true JP6503665B2 (en) 2019-04-24

Family

ID=55756787

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014186398A Active JP6503665B2 (en) 2014-09-12 2014-09-12 Optical coherence tomography apparatus and program

Country Status (1)

Country Link
JP (1) JP6503665B2 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5331395B2 (en) * 2008-07-04 2013-10-30 株式会社ニデック Optical tomography system
JP5690193B2 (en) * 2011-04-18 2015-03-25 株式会社ニデック Optical tomography system

Also Published As

Publication number Publication date
JP2016055122A (en) 2016-04-21


Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20170908

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20170908

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20180613

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20180731

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20180928

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20190226

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20190311

R150 Certificate of patent or registration of utility model

Ref document number: 6503665

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150