JP5631361B2 - Image processing apparatus, image processing method, and program


Info

Publication number
JP5631361B2
JP5631361B2 (application JP2012137263A)
Authority
JP
Japan
Prior art keywords
tomographic image
image
eye
movement
amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2012137263A
Other languages
Japanese (ja)
Other versions
JP2012176291A (en)
JP2012176291A5 (en)
Inventor
好彦 岩瀬
昭宏 片山
Original Assignee
キヤノン株式会社 (Canon Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by キヤノン株式会社 (Canon Inc.)
Priority to JP2012137263A
Publication of JP2012176291A
Publication of JP2012176291A5
Application granted
Publication of JP5631361B2
Legal status: Active

Description

  The present invention relates to a tomographic imaging apparatus, an image processing apparatus that processes a tomographic image captured by the tomographic imaging apparatus, and an image processing method.

  In an ophthalmic tomography apparatus such as an optical coherence tomography (OCT) apparatus, interference light is obtained by causing the reflected light of near-infrared light irradiated onto the retina to interfere with reference light, and a tomographic image is generated from that interference light. In general, the image quality of a tomographic image generated in this way depends on the intensity of the near-infrared light incident on the retina. Accordingly, improving the image quality of tomographic images requires increasing the intensity of the near-infrared light irradiating the retina; for safety reasons, however, there is a limit to how far that intensity can be raised.

It is therefore desirable to generate high-quality tomographic images while keeping the near-infrared irradiation within a safe intensity range. To meet this demand, two approaches have mainly been pursued:
(i) a method using oversampling;
(ii) a method using superposition (averaging of repeated scans).
Each of these two approaches is briefly described below.

First, the oversampling method will be described with reference to FIG. 9. FIG. 9A shows an example of a tomographic image of the retina captured by the tomographic imaging apparatus. In FIG. 9A, T i represents a two-dimensional tomographic image (B-scan image), and A ij represents a scanning line (A-scan). As shown in FIG. 9A, the two-dimensional tomographic image T i is composed of a plurality of scanning lines A ij located on the same plane.

FIG. 9C shows an example of the irradiation distribution of the near-infrared light irradiated onto the retina during capture of the two-dimensional tomographic image T i shown in FIG. 9A, viewed from the fundus surface toward the depth direction of the retina. In FIG. 9C, the ellipses labeled A i1 to A im represent the beam diameter of the near-infrared light.

On the other hand, FIG. 9B shows a two-dimensional tomographic image T i ' of the retina obtained when the same imaging range as in FIG. 9A is captured with twice the number of scanning lines. FIG. 9D shows the corresponding irradiation distribution of the near-infrared light during capture of T i ', again viewed from the fundus surface toward the retinal depth direction. In FIG. 9D, the ellipses labeled A i1 to A i2m represent the beam diameter of the near-infrared light.

  As can be seen from FIGS. 9A and 9B, for the same imaging range, the resolution of the two-dimensional tomographic image increases with the number of scanning lines. Further, as FIGS. 9C and 9D show, increasing the number of scanning lines to raise the resolution requires irradiating the retina so that adjacent near-infrared beams overlap one another.

  A method of generating a high-resolution two-dimensional tomographic image by irradiating adjacent beams so that they overlap in this way is generally called oversampling.
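As a concrete illustration of the averaging that oversampling enables, the sketch below (a hypothetical Python example; the function name, data, and the 2x sampling factor are illustrative and not taken from the patent) merges overlapping adjacent A-scans into single output scan lines with reduced noise:

```python
# Hypothetical sketch: adjacent, overlapping A-scans captured by
# oversampling are averaged to form one output scan line each.
# Each A-scan is a list of intensity values along the depth (z) axis.

def average_adjacent_ascans(ascans, factor=2):
    """Average each group of `factor` adjacent A-scans into one line."""
    assert len(ascans) % factor == 0
    merged = []
    for i in range(0, len(ascans), factor):
        group = ascans[i:i + factor]
        depth = len(group[0])
        merged.append([sum(a[z] for a in group) / factor for z in range(depth)])
    return merged

# Four A-scans at 2x sampling, two noisy measurements per output column:
scans = [[10.0, 20.0], [12.0, 18.0], [30.0, 40.0], [34.0, 44.0]]
print(average_adjacent_ascans(scans))  # [[11.0, 19.0], [32.0, 42.0]]
```

The noise of each output line drops roughly with the square root of the number of averaged scans, which is why the patent trades beam overlap for image quality.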

  The superposition method, on the other hand, generates a tomographic image with less noise by scanning the same imaging range multiple times with the same number of scanning lines and superimposing (averaging) the resulting tomographic images (see, for example, Patent Document 1 below).

JP 2008-237238 A

C. Tomasi and T. Kanade, "Detection and Tracking of Point Features", Technical Report CMU-CS-91-132, Carnegie Mellon University, 1991

  However, the above two methods for generating a high-quality tomographic image have the following problems. In the superposition method disclosed in Patent Document 1, the tomographic images to be combined are captured at different times. Because the pixel values of corresponding pixels across these tomographic images are averaged, the method is effective at reducing the noise contained in each image. However, since each tomographic image has the same resolution, combining them cannot produce a higher-resolution tomographic image.

  In the case of the oversampling method, on the other hand, a higher-resolution tomographic image can be generated by increasing the number of scanning lines and widening the overlap between beams. However, as the number of scanning lines increases, the time required to capture a single tomographic image also increases, making the capture more susceptible to involuntary eye movements, head movements, and the like. As a result, the captured tomographic image becomes distorted.

  Therefore, to obtain a high-quality tomographic image, it is desirable to capture a high-resolution, low-noise image under imaging conditions that are as insensitive as possible to eye movements and head movements. The magnitude of such movements, however, differs from person to person, so the imaging conditions least affected by them are not constant.

  The present invention has been made in view of the above problems, and an object of the present invention is to generate a low-noise, high-resolution tomographic image in which the influence of involuntary eye movements and head movements is suppressed as much as possible.

In order to achieve the above object, an image processing apparatus according to the present invention comprises the following arrangement. That is:
detection means for detecting the amount of movement of the eye to be examined;
determining means for determining a scanning speed such that the scanning speed used to acquire each of a plurality of two-dimensional tomographic images of the eye to be examined increases as the detected amount of movement increases;
obtaining means for obtaining the plurality of two-dimensional tomographic images of the eye to be examined at the determined scanning speed; and
selection means for selecting, from the plurality of acquired two-dimensional tomographic images, a two-dimensional tomographic image whose in-plane distortion caused by movement of the eye to be examined is smaller than that of the other two-dimensional tomographic images.

  According to the present invention, it is possible to generate a low-noise, high-resolution tomographic image in which the influence of involuntary eye movements and head movements is suppressed as much as possible.

FIG. 1 shows the configuration of an image processing system. FIG. 2 is a flowchart showing the flow of tomographic image processing in an image processing apparatus. FIG. 3 schematically shows an example of a tomographic image and a fundus image. FIG. 4 schematically shows an example of the relationship between the number of scanning lines and the number of captured images. FIG. 5 illustrates tomographic image generation processing. FIG. 6 shows the configuration of an image processing system. FIGS. 7 and 8 are flowcharts showing the flow of tomographic image processing in an image processing apparatus. FIG. 9 illustrates the oversampling method.

[First Embodiment]
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. The image processing apparatus according to this embodiment is characterized in that it detects the amount of movement of the subject's eye before imaging by the oversampling method or the superposition method, and then performs imaging under imaging conditions corresponding to the detected amount of movement.

  According to this embodiment, since imaging is performed by the oversampling or superposition method under imaging conditions matched to each individual's amount of eye movement, a low-noise, high-resolution tomographic image can be generated in which the influence of involuntary eye movements and head movements is minimized.

  Hereinafter, details of the image processing system including the image processing apparatus according to the present embodiment will be described.

<Configuration of image processing system>
FIG. 1 is a diagram illustrating a configuration of an image processing system 100 including an image processing apparatus 110 according to the present embodiment. As shown in FIG. 1, the image processing system 100 is configured by connecting an image processing apparatus 110 to a tomographic image capturing apparatus 120 and a fundus image capturing apparatus 130 via an interface.

  The tomographic image capturing apparatus 120 is an apparatus that captures a tomographic image of the eye, such as a time-domain OCT or Fourier-domain OCT apparatus. Since the tomographic image capturing apparatus 120 is a known apparatus, a detailed description is omitted; only its function of changing its operation according to the number of scanning lines, the number of captured images, and other settings instructed by the image processing apparatus 110 is described here.

  In FIG. 1, a galvanometer mirror 121 controls the irradiation position of the near-infrared light, and the galvanometer mirror driving unit 122 determines the number of scanning lines in the planar direction (that is, the scanning speed in the planar direction).

  The parameter setting unit 123 sets, in the galvanometer mirror driving unit 122, the various parameters used to control the operation of the galvanometer mirror 121. The parameters set by the parameter setting unit 123 determine the imaging conditions for tomographic image capture by the tomographic image capturing apparatus 120. Specifically, the scanning speeds in the main scanning direction and the sub-scanning direction within the planar direction are determined by the number of scanning lines and the number of captured images set according to an instruction from the image processing apparatus 110.

  The fundus image capturing apparatus 130 is an apparatus that captures a fundus image of the eye; examples include a fundus camera and a scanning laser ophthalmoscope (SLO).

  The image processing apparatus 110 processes the tomographic image captured by the tomographic image capturing apparatus 120 and generates the tomographic image displayed on the display unit 117. The image processing apparatus 110 includes an image acquisition unit 111, a storage unit 112, a first motion detection unit 113, a second motion detection unit 114, a determination unit 115, an image creation unit 116, and a display unit 117.

  The image acquisition unit 111 acquires a tomographic image and a fundus image captured by the tomographic image capturing device 120 and the fundus image capturing device 130 and stores them in the storage unit 112. The first motion detection unit 113 detects the amount of motion in the depth direction of the eye to be examined based on the reflected light intensity (signal intensity) measured at the time of imaging in the tomographic imaging apparatus 120. The second motion detection unit 114 detects the amount of motion in the planar direction of the subject's eye based on the fundus image captured by the fundus image capturing device 130.

  The determination unit 115 determines the parameters (number of scanning lines, number of captured images, and so on) for imaging by the oversampling method or the superposition method, based on the amount of movement of the eye to be examined detected by the first motion detection unit 113 and the second motion detection unit 114.

  The image creation unit 116 processes the tomographic image captured by the oversampling method or the superposition method under the parameters determined by the determination unit 115 and generates a tomographic image displayed on the display unit 117. The display unit 117 displays the tomographic image generated by the image creation unit 116.

<Flow of tomographic image processing in image processing apparatus>
Next, the flow of tomographic image processing in the image processing apparatus 110 of this embodiment will be described with reference to FIGS. 2 and 3.

  In step S201, based on an instruction from the image processing apparatus 110, the tomographic image capturing apparatus 120 and the fundus image capturing apparatus 130 image the eye to be examined in order to detect its amount of movement. The image acquisition unit 111 acquires the tomographic image captured by the tomographic image capturing apparatus 120 and the fundus image captured by the fundus image capturing apparatus 130 (collectively referred to as motion detection images).

  FIG. 3 is a diagram illustrating an example of a motion detection image acquired by the image acquisition unit 111. FIG. 3A illustrates an example of a tomographic image captured by the tomographic image capturing apparatus 120, and FIG. 3B illustrates an example of a fundus image captured by the fundus image capturing apparatus 130. In FIG. 3B, the reference symbol F indicates the fundus.

  In step S202, the first motion detection unit 113 and the second motion detection unit 114 detect the amount of motion of the eye to be examined. The first motion detection unit 113 detects the amount of motion in the depth direction of the eye (the z-axis direction in FIG. 3A) based on the reflected light intensity (the intensity of the reflected signal) of the near-infrared light irradiated onto the eye when the tomographic image capturing apparatus 120 captures a tomographic image.

  Further, the second motion detection unit 114 detects the amount of motion in the planar direction of the eye (the x-y axis directions in FIG. 3B) by tracking feature points, such as blood vessel bifurcations, in the fundus image acquired from the fundus image capturing apparatus 130. Feature point detection and tracking can be performed using, for example, the KLT method (see Non-Patent Document 1), although it is not limited to this method.
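The patent specifies KLT feature tracking for this step. As a loose, hypothetical illustration of planar motion estimation only, the sketch below instead estimates a one-dimensional shift between two fundus-image rows by exhaustive sum-of-squared-differences (SSD) search; this simple substitute conveys the idea of measuring in-plane eye displacement, not the actual KLT algorithm:

```python
# Hypothetical sketch of planar motion detection: find the integer shift of
# the current fundus row relative to the reference row that minimizes the
# mean SSD over the overlapping pixels. (A stand-in for KLT tracking.)

def estimate_shift(ref, cur, max_shift=3):
    """Return the integer shift of `cur` relative to `ref` minimizing SSD."""
    best_shift, best_cost = 0, float("inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        overlap = [(ref[i], cur[i + s]) for i in range(n) if 0 <= i + s < n]
        cost = sum((a - b) ** 2 for a, b in overlap) / len(overlap)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

row = [0, 0, 5, 9, 5, 0, 0, 0]
moved = [0, 0, 0, 5, 9, 5, 0, 0]   # same pattern shifted right by one pixel
print(estimate_shift(row, moved))  # 1
```

A real implementation would track many 2-D feature points (e.g. vessel bifurcations) and report a motion amount per frame.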

  In step S203, the determination unit 115 determines the parameters constituting the imaging conditions under which the tomographic image capturing apparatus 120 performs imaging, according to the amount of eye movement detected by the first motion detection unit 113 and the second motion detection unit 114, and sets the determined parameters in the parameter setting unit 123 of the tomographic image capturing apparatus 120. Details of the parameter determination process in the determination unit 115 are described later.

  In step S204, based on an instruction from the image processing apparatus 110, the image acquisition unit 111 acquires a tomographic image captured by the tomographic image capturing apparatus 120 using the parameters determined by the determination unit 115.

  In step S205, the image creation unit 116 processes the tomographic image acquired in step S204 (for example, calculating each pixel value by averaging a plurality of pixels) and generates a tomographic image to be displayed on the display unit 117. Details of the tomographic image generation processing in the image creation unit 116 are described later. In step S206, the display unit 117 displays the tomographic image generated by the image creation unit 116.

<Details of processing in each part>
Next, details of processing of each unit constituting the image processing apparatus 110 will be described.

<Details of Parameter Determination Process in Determination Unit 115>
First, the parameter determination process in the determination unit 115 is described in detail. To generate a high-quality tomographic image, the image processing apparatus 110 according to this embodiment averages n pixels (n > 1) per output pixel.

  The determination unit 115 determines each parameter (the number of captured images and the number of scanning lines) so as to realize the averaging over n pixels while preventing distortion within a single tomographic image. Details are described below.

Let rx be the lateral resolution of the generated tomographic image, k the number of shots of the same cross section, and A m the number of scanning lines in one tomographic image. The relationship among the lateral resolution rx, the number of scanning lines A m, and the number of shots k is then given by equation (1).

  On the other hand, when the frequency of the light source used in the tomographic image capturing apparatus 120 is f [Hz], the time t [s] required to capture one tomographic image can be obtained from the equation (2).

  Here, the lateral resolution ORx and the depth resolution ORz of the tomographic image can be obtained based on the wavelength of the light source used in the tomographic imaging apparatus 120.

  Then, to prevent distortion from occurring within one tomographic image, the determination unit 115 determines the parameters so that the average or median of the amount of eye movement detected during the time required to capture one tomographic image does not exceed the resolution.

That is, let t ORx [s] be the time required for the eye to move by the lateral resolution ORx, and t ORz [s] the time required for it to move by the depth resolution ORz. The parameters are determined so that t exceeds neither t ORx nor t ORz. Specifically, the number of scanning lines A m is obtained from equation (3) using equation (2).

  Further, the number of shots k is obtained from equation (4) using equations (1) and (3).
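Equations (1) to (4) themselves are not reproduced in this text, so the sketch below is a plausible reconstruction from the surrounding description, not the patent's exact formulas: the total number of A-scans satisfies k x A m = n x rx, one B-scan takes t = A m / f seconds, and A m is capped so that t stays within the time the eye needs to move one resolution element. All function and variable names are illustrative:

```python
# Hedged reconstruction of the parameter determination (eqs. (1)-(4)):
#   eq. (1)/(4): k * A_m = n * rx   (n averaged samples per output pixel)
#   eq. (2):     t = A_m / f        (capture time of one B-scan)
#   eq. (3):     t <= min(t_ORx, t_ORz)  (no distortion within one B-scan)

def determine_parameters(rx, n, f, t_orx, t_orz):
    """Return (A_m, k): scan lines per B-scan and shots per cross section."""
    t_limit = min(t_orx, t_orz)          # B-scan must finish within t_limit
    a_m = min(n * rx, int(f * t_limit))  # cap A_m so that A_m / f <= t_limit
    a_m = max(a_m - a_m % rx, rx)        # keep A_m an integer multiple of rx
    k = (n * rx) // a_m                  # remaining averaging via repeat shots
    return a_m, k

# rx = 512 output columns, n = 4 samples per pixel, a 70 kHz light source,
# and illustrative eye-motion times (all values assumed, not from the patent):
print(determine_parameters(rx=512, n=4, f=70000, t_orx=0.02, t_orz=0.03))
```

With these assumed numbers the result is A m = 1024 and k = 2: faster eye motion shrinks t_limit, lowering A m and raising k, which matches the trend the patent describes for FIG. 4.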

Next, the relationship between the number of scanning lines A m in one tomographic image and the number of shots k of the same cross section will be described with reference to FIG. 4. The vertical scales in FIG. 4 correspond to the case where the lateral resolution rx at the time of tomographic imaging is 512 and the number of pixels n used for the averaging process is 4.

In FIG. 4, the solid-line graph with the left vertical axis represents the number of scanning lines A m, and the broken-line graph with the right vertical axis represents the number of shots k; the horizontal axis represents the amount of eye movement. As FIG. 4 shows, when the parameters are determined so that no distortion occurs within one tomographic image, the number of scanning lines A m decreases and the number of shots k increases as the amount of eye movement increases.

In FIG. 4, the number of scanning lines A m takes three levels (512, 1024, and 2048), so the graph has the form of a step function; however, the number of scanning lines A m is not limited to this. For example, it may be a linear or non-linear decreasing function over an arbitrary range of scanning-line counts.

<Details of Tomographic Image Generation Processing in Image Creation Unit 116>
Next, the tomographic image generation processing in the image creation unit 116 is described in detail. FIG. 5A illustrates the tomographic image generation process applied to a tomographic image captured by the oversampling method (averaging of scanning lines located on the same tomographic image).

  FIG. 5B illustrates the tomographic image generation process applied to a plurality of tomographic images captured by the superposition method (averaging of scanning lines captured at different times and located on different tomographic images).

  FIG. 5C illustrates the tomographic image generation process applied to tomographic images captured by a combination of the oversampling and superposition methods (averaging of scanning lines located on both the same and different tomographic images). FIG. 5D shows a tomographic image generated by the tomographic image generation process. Each process is described in detail below.

(1) Tomographic image generation process for a tomographic image captured by the oversampling method
First, the tomographic image generation process based on a tomographic image captured by the oversampling method will be described with reference to FIG. 5A. Here, an example is described in which imaging is performed at twice the lateral resolution rx.

In FIG. 5A, A i2j ′ and A i2j+1 ′ represent individual scanning lines; A i2j+1 ′ is a scanning line captured 1/f [s] after A i2j ′. FIG. 5D shows the tomographic image generated by averaging n pixels per output pixel.

That is, in FIG. 5D, A ij is a new scanning line calculated by averaging the corresponding scanning lines. In the case of FIG. 5A, A ij is calculated by averaging the scanning lines A i2j ′ and A i2j+1 ′. Note that the generation process for a tomographic image captured by the oversampling method is not limited to simple averaging; median filtering, weighted averaging, or the like may also be used.

(2) Tomographic image generation process for tomographic images captured by the superposition method
Next, the tomographic image generation process based on tomographic images captured by the superposition method will be described with reference to FIG. 5B. Here, the case where the number of shots k of the same cross section is 2 is described.

When performing superposition processing on a plurality of tomographic images, the tomographic images (T i ″ and T i+1 ″) must first be aligned by an alignment unit (not shown). For the alignment between tomographic images, for example, an evaluation function representing the similarity of two tomographic images is defined in advance, and one tomographic image is deformed so that the value of this evaluation function becomes best. The evaluation function may, for example, compare pixel values (for instance, using mutual information). The deformation may, for example, translate or rotate the tomographic image using an affine transformation, or change its magnification. In the following description, it is assumed that the alignment between tomographic images has already been completed.
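As a hedged sketch of this alignment step (translation only; the affine rotation and scaling the patent also allows are omitted, and all names and data are illustrative), the code below searches integer horizontal shifts of one small tomogram against another and keeps the shift whose sum-of-squared-differences evaluation value is best:

```python
# Hypothetical alignment sketch: exhaustive search over integer horizontal
# translations, scoring each candidate with a sum-of-squared-differences
# (SSD) evaluation function. Images are lists of rows of pixel values.

def shift_rows(img, dx):
    """Shift each row of img right by dx (negative = left), zero-padded."""
    w = len(img[0])
    out = []
    for row in img:
        if dx >= 0:
            out.append([0] * dx + row[:w - dx])
        else:
            out.append(row[-dx:] + [0] * (-dx))
    return out

def align_translation(ref, mov, max_dx=2):
    """Integer horizontal shift of `mov` that best matches `ref` (min SSD)."""
    def ssd(a, b):
        return sum((p - q) ** 2 for ra, rb in zip(a, b) for p, q in zip(ra, rb))
    return min(range(-max_dx, max_dx + 1),
               key=lambda dx: ssd(ref, shift_rows(mov, dx)))

ref = [[0, 7, 0, 0], [0, 7, 0, 0]]
mov = [[0, 0, 7, 0], [0, 0, 7, 0]]   # ref shifted right by one column
print(align_translation(ref, mov))   # -1  (shift mov left by one to match)
```

A mutual-information evaluation function, as the text mentions, would replace `ssd` but leave the search structure unchanged.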

In FIG. 5B, T i ″ and T i+1 ″ are tomographic images of the same cross section captured at different times. A ij ″ and A (i+1)j ″ represent scanning lines in the tomographic images T i ″ and T i+1 ″, respectively. A (i+1)j ″ is a scanning line captured A m/f + β [s] after A ij ″. Here, β is the time required to return the scanning position from the last scanning line of one tomographic image (A im in FIG. 5B) to the first scanning line (A i1 in FIG. 5B).

When the tomographic image of FIG. 5D is generated from the tomographic images of FIG. 5B, A ij in FIG. 5D is calculated by averaging the scanning lines A ij ″ and A (i+1)j ″.

(3) Tomographic image generation process for tomographic images captured by a combination of the oversampling and superposition methods
Next, the tomographic image generation process based on tomographic images captured by a combination of the oversampling and superposition methods will be described with reference to FIG. 5C. Here, the case is described in which the number of averaged pixels n per output pixel is 4, the lateral sampling is twice the resolution rx, and the number of shots k of the same cross section is 2.

In FIG. 5C, A i2j ‴ and A i2j+1 ‴ represent scanning lines in the tomographic image T i ‴, and A (i+1)2j ‴ and A (i+1)2j+1 ‴ represent scanning lines in the tomographic image T i+1 ‴.

When the tomographic image of FIG. 5D is generated from the tomographic images of FIG. 5C, A ij in FIG. 5D is calculated by averaging the four scanning lines A i2j ‴, A i2j+1 ‴, A (i+1)2j ‴, and A (i+1)2j+1 ‴.
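A minimal sketch of this combined case (2x oversampling and k = 2 shots, so n = 4 scan lines per output pixel), with illustrative data and names not taken from the patent:

```python
# Hypothetical sketch of case (3): each output scan line averages two
# adjacent A-scans from each of two aligned tomograms of the same
# cross section (2 x 2 = 4 contributing scan lines per output line).

def combine(tomo1, tomo2):
    """tomo1/tomo2: lists of A-scans at 2x sampling; returns averaged lines."""
    out = []
    for j in range(0, len(tomo1), 2):
        lines = tomo1[j:j + 2] + tomo2[j:j + 2]   # 4 corresponding scan lines
        depth = len(lines[0])
        out.append([sum(l[z] for l in lines) / len(lines) for z in range(depth)])
    return out

t1 = [[8.0], [12.0], [1.0], [3.0]]
t2 = [[10.0], [10.0], [2.0], [2.0]]
print(combine(t1, t2))  # [[10.0], [2.0]]
```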

  As is clear from the above description, in this embodiment, the amount of eye movement is detected for each individual, the imaging conditions are set accordingly, and the tomographic images captured under those conditions by the oversampling method or the superposition method are then processed.

  As a result, a low-noise, high-resolution tomographic image can be generated in which the influence of eye movement and head movement is suppressed as much as possible.

  In this embodiment, the method for generating one high-quality two-dimensional tomographic image has been described, but the present invention is not limited to this. For example, a similar method may be used to generate a three-dimensional tomographic image. Furthermore, high-quality tomographic images can likewise be generated from tomograms scanned in a radial or circular pattern.

[Second Embodiment]
In the first embodiment, imaging for detecting the amount of motion is performed first, the imaging conditions are set, and a tomographic image is then captured. However, the present invention is not limited to this configuration. For example, imaging may be performed under predetermined imaging conditions, and the acquired tomographic images may then be processed according to the amount of movement of the eye to be examined. Details of this embodiment are described below.

<Configuration of image processing system>
FIG. 6 is a diagram illustrating the configuration of an image processing system 600 including the image processing apparatus 610 according to this embodiment. As shown in FIG. 6, the image processing apparatus 610 differs in functional configuration from the image processing system 100 described in the first embodiment; the following description therefore focuses on the differences.

  As shown in FIG. 6, the image processing apparatus 610 includes an image acquisition unit 111, a storage unit 112, a first motion detection unit 613, a second motion detection unit 614, a determination unit 615, an image creation unit 616, and a display unit 117. Of these, the image acquisition unit 111, the storage unit 112, and the display unit 117 have the same functions as in the first embodiment, so their description is omitted here.

  The first motion detection unit 613 detects the amount of motion in the depth direction of the eye to be examined based on the reflected light intensity (signal intensity) measured at the time of imaging by the tomographic image capturing apparatus 120. Further, when a motion amount exceeding the depth-direction resolution ORz (obtained from the wavelength of the light source used in the tomographic image capturing apparatus 120) is detected during capture of one tomographic image, the time of detection is recorded in the storage unit 112.

  The second motion detection unit 614 detects the amount of motion in the planar direction of the eye to be examined based on the fundus image captured by the fundus image capturing apparatus 130. When, during capture of one tomographic image by the tomographic image capturing apparatus 120, a motion amount exceeding the lateral resolution ORx (obtained from the wavelength of the light source) is detected from the fundus image, the time of detection is recorded in the storage unit 112.
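The recording behaviour of the two detectors might be sketched as follows; the sample format, field names, and threshold values are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: during one B-scan, each motion sample is a tuple
# (time, planar motion, depth motion). Times where the motion exceeds the
# lateral resolution ORx or depth resolution ORz are recorded for later
# use by the determination unit.

def record_violations(samples, orx, orz):
    """samples: list of (t, dx, dz); returns times where motion > resolution."""
    return [t for t, dx, dz in samples if abs(dx) > orx or abs(dz) > orz]

samples = [(0.001, 2.0, 1.0), (0.002, 12.0, 1.0), (0.003, 3.0, 9.0)]
print(record_violations(samples, orx=10.0, orz=8.0))  # [0.002, 0.003]
```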

  The determination unit 615 selects a tomographic image with a small amount of motion as the reference tomographic image, and determines, based on the records in the storage unit 112, whether any pixel's motion amount exceeds a predetermined threshold. For any pixel whose motion amount exceeds the threshold, it selects the scanning lines to be used for the averaging process.

  For pixels whose motion amount exceeds the predetermined threshold, the image creation unit 616 performs the averaging process using the scanning lines of the tomographic image selected as the reference by the determination unit 615, from among the tomographic images recorded in the storage unit 112.

<Flow of tomographic image processing in image processing apparatus>
Next, the flow of tomographic image processing in the image processing apparatus 610 of this embodiment will be described with reference to FIG.

In step S701, based on an instruction from the image processing apparatus 610, the tomographic image capturing apparatus 120 and the fundus image capturing apparatus 130 each image the eye to be examined. The tomographic image capturing apparatus 120 images the eye using parameters set in advance in the parameter setting unit 123 (for example, with the number of scanning lines A m set to 2048 and the number of shots k of the same cross section set to 4).

  In step S702, the first motion detection unit 613 and the second motion detection unit 614 detect the amount of motion of the eye to be examined. Since the motion detection method was described in the first embodiment, it is not repeated here. If, during capture of one tomographic image, the first motion detection unit 613 or the second motion detection unit 614 detects a motion amount exceeding the lateral resolution ORx or the depth-direction resolution ORz obtained from the wavelength of the light source, the time of detection is recorded in the storage unit 112.

  In step S703, based on the amount of motion detected in step S702 and the tomographic images captured in step S701, the determination unit 615 performs a composite pixel selection process that selects the pixels on which the averaging process is performed. The details of the composite pixel selection process (step S703) will now be described with reference to the flowchart of FIG.

  In the following description of the composite pixel selection process (step S703), it is assumed that the number of pixels n used in the averaging process for each selected pixel is 4, the lateral resolution rx is 512, the number of scanning lines Am is 2048, and the number of shots k in the same cross section is 4.

  In step S710, the determination unit 615 selects a reference tomographic image from the plurality of tomographic images. The reference tomographic image is selected as a tomographic image for which no eye movement exceeding the lateral resolution ORx or the depth resolution ORz was detected in step S702 during the time required to capture one tomographic image. If no tomographic image satisfies this condition, the tomographic image with the smallest maximum motion amount, or the one with the smallest average motion amount, within that time is selected. Alternatively, each tomographic image is in turn set as a candidate reference image and aligned against the other tomographic images, and the tomographic image with the highest average alignment evaluation value relative to the others is finally selected.
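The selection logic of step S710 can be sketched as follows (the dictionary keys and the per-image bookkeeping are assumptions for illustration; only the first two criteria from the text are shown, and the alignment-score alternative is noted in a comment):

```python
# Sketch of the reference-image selection of step S710 (hypothetical data
# layout: each image carries the excess-motion events detected while it
# was captured, plus its maximum motion amount).

def select_reference(images):
    """images: list of dicts with keys 'excess_events' and 'max_motion'.
    Returns the index of the reference tomographic image."""
    # First choice: an image with no motion exceeding ORx / ORz.
    for i, img in enumerate(images):
        if not img['excess_events']:
            return i
    # Fallback: the image whose largest motion amount is smallest.
    # (The average-motion or alignment-evaluation criteria described in
    # the text could be substituted here.)
    return min(range(len(images)), key=lambda i: images[i]['max_motion'])
```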

  In step S720, it is determined whether the reference tomographic image satisfies a predetermined condition. Specifically, it is determined whether the amount of movement of the eye to be examined in the reference tomographic image selected in step S710 exceeds the lateral resolution or the depth resolution. If it is determined in step S720 that neither resolution is exceeded, the composite pixel selection process ends.

  On the other hand, if it is determined that the amount of movement of the eye in the reference tomographic image exceeds the lateral resolution or the depth resolution, the process proceeds to step S730 in order to perform the averaging process using the scanning lines of the tomographic images captured by the combination of the oversampling method and the overlay method.

  In step S730, the determination unit 615 associates the time at which each scanning line of the reference tomographic image was captured with the times at which motion amounts exceeding the lateral resolution ORx or the depth-direction resolution ORz were detected. It then selects the scanning lines of the reference tomographic image that were captured at the times when such motion was detected, together with the scanning lines of the tomographic images of the same cross section other than the reference tomographic image with which the averaging process is to be performed.

  Returning to the description of FIG. In step S704, the image creation unit 616 processes the tomographic images captured by the tomographic image capturing apparatus 120. Specifically, the averaging process is performed on the reference tomographic image selected in step S710 using the scanning lines selected in step S730, and the tomographic image displayed on the display unit 117 is generated. Note that the averaging process is performed for each scanning line based on the method shown in step S205.
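The per-scanning-line averaging of step S704 can be sketched as follows (the array layout and function name are assumptions for illustration; inter-image alignment, described elsewhere in the text, is assumed to have been performed already):

```python
# Sketch of the per-scanning-line averaging of step S704 (hypothetical
# layout: each tomographic image is a list of A-scan lines, each line a
# list of pixel values; `flagged` holds the indices of reference scan
# lines captured while excess motion was detected in step S730).

def average_flagged_lines(reference, others, flagged):
    """Replace each flagged scan line of `reference` with the average of
    that line over the reference image and the other images of the same
    cross section. Unflagged lines are left unchanged."""
    result = [line[:] for line in reference]  # copy the reference image
    for idx in flagged:
        stack = [reference[idx]] + [img[idx] for img in others]
        n = len(stack)
        result[idx] = [sum(vals) / n for vals in zip(*stack)]
    return result
```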

  As is clear from the above description, in the present embodiment, the amount of movement of the eye to be examined is detected while the eye is being photographed, and processing corresponding to the detected amount of movement is performed on the acquired tomographic images.

  As a result, it is possible to generate a low-noise, high-resolution tomographic image in which the influence of eye movement and head movement is suppressed as much as possible.

[Third Embodiment]
In the first embodiment, parameters are determined based on the tomographic image and the fundus image acquired in the imaging performed to detect the amount of movement of the eye to be examined, and the tomographic images acquired by imaging with the determined parameters are then processed. However, the present invention is not limited to this. For example, the amount of movement may be detected during imaging with the determined parameters, and when a movement amount equal to or greater than a predetermined threshold is detected, the parameters may be determined again and the imaging automatically repeated.

  With such a configuration, even if a large change such as a blink or a microsaccade occurs during imaging, the parameters are reset based on the amount of motion detected during imaging, so that a high-quality tomographic image can still be obtained.

  Details of the present embodiment will be described below with reference to FIG. Note that the functional configuration of the image processing apparatus of the present embodiment is the same as that of the first embodiment. In the tomographic image processing (FIG. 8) of the image processing apparatus of the present embodiment, the processing from step S801 to step S804 is the same as the processing from step S201 to step S204 of the tomographic image processing (FIG. 2) in the image processing apparatus of the first embodiment. Furthermore, the processes of steps S805, S807, and S808 are the same as those of steps S202, S205, and S206, respectively. For this reason, only the process of step S806 is described below.

  In step S806, the determination unit 115 determines whether to perform imaging again when the amount of movement of the eye to be examined exceeds a certain threshold. Specifically, when the position of the eye to be examined shifts greatly because the subject blinks or a microsaccade occurs while one tomographic image is being captured, the determination unit 115 decides to return to step S803 and reset the parameters. When such a positional shift occurs, the position is also shifted relative to the tomographic images of the same cross section captured so far. For this reason, the parameters are set again and imaging is performed again (step S804).
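The decision of step S806 reduces to a simple threshold check, sketched below (the function name and the scalar motion-amount list are assumptions for illustration; the text's criterion is a large positional shift caused by a blink or microsaccade during one tomographic image):

```python
# Sketch of the re-imaging decision of step S806 (hypothetical names;
# `threshold` stands for the fixed threshold mentioned in the text).

def needs_reimaging(motion_amounts, threshold):
    """Return True when any motion amount detected during the capture of
    one tomographic image exceeds the threshold, i.e. the parameters
    should be reset (step S803) and imaging repeated (step S804)."""
    return any(m > threshold for m in motion_amounts)
```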

  As is clear from the above description, in the present embodiment, the amount of motion is detected even during tomographic image capture, and when the detected amount of motion exceeds a predetermined threshold, the image is captured again. As a result, even if a major change such as a blink or a microsaccade occurs during imaging, re-imaging is performed automatically, so high-quality tomographic images can continue to be generated.

[Other Embodiments]
The present invention can also be realized by executing the following processing: software (a program) that realizes the functions of the above-described embodiments is supplied to a system or apparatus via a network or various storage media, and a computer (or CPU, MPU, etc.) of the system or apparatus reads and executes the program.

Claims (21)

  1. An image processing apparatus comprising:
    detection means for detecting the amount of movement of an eye to be examined;
    determining means for determining a scanning speed such that the larger the detected amount of movement, the higher the scanning speed at which each of a plurality of two-dimensional tomographic images of the eye to be examined is acquired;
    obtaining means for obtaining the plurality of two-dimensional tomographic images of the eye to be examined at the determined scanning speed; and
    selection means for selecting, from the plurality of acquired two-dimensional tomographic images, a two-dimensional tomographic image in which in-plane distortion generated based on the movement of the eye to be examined is smaller than in the other two-dimensional tomographic images.
  2. The image processing apparatus according to claim 1, wherein the detection means detects, as the amount of movement, the length of movement of the eye to be examined within a predetermined time.
  3. The image processing apparatus according to claim 2, wherein the predetermined time is the time required to acquire one of the plurality of two-dimensional tomographic images of the eye to be examined.
  4. The image processing apparatus according to claim 1, wherein the detection means detects the amount of movement of the eye to be examined again while the plurality of two-dimensional tomographic images of the eye to be examined are being acquired at the determined scanning speed.
  5. The image processing apparatus according to claim 4, wherein the determining means determines the scanning speed again when the amount of movement detected again is equal to or greater than a threshold, and the obtaining means acquires a plurality of two-dimensional tomographic images of the eye to be examined at the scanning speed determined again.
  6. The image processing apparatus according to claim 4, wherein the determining means determines the scanning speed again when the re-detected amount of movement corresponds to a microsaccade of the eye to be examined.
  7. The image processing apparatus according to claim 4, wherein the selection means selects the two-dimensional tomographic image, among the plurality of acquired two-dimensional tomographic images, for which the re-detected amount of movement is smallest, as a two-dimensional tomographic image in which the influence of the movement of the eye to be examined is smaller than in the other two-dimensional tomographic images.
  8. The image processing apparatus according to claim 1, wherein the selection means aligns the plurality of acquired two-dimensional tomographic images and selects a two-dimensional tomographic image whose alignment evaluation value is higher than those of the other two-dimensional tomographic images, as a two-dimensional tomographic image in which the influence of the movement of the eye to be examined is smaller than in the other two-dimensional tomographic images.
  9. The image processing apparatus according to any one of claims 1 to 8, further comprising processing means for performing an averaging process by aligning the selected two-dimensional tomographic image with at least one of the other two-dimensional tomographic images.
  10. The image processing apparatus according to claim 9, wherein, for a pixel of the selected two-dimensional tomographic image that was acquired when an amount of movement exceeding a predetermined threshold was detected, the processing means performs the averaging process using pixel values of pixels included in the other two-dimensional tomographic images or pixel values of pixels corresponding to a plurality of scanning lines overlapping that pixel.
  11. The image processing apparatus according to any one of claims 1 to 10, wherein the determining means determines the scanning speed based on a resolution obtained by dividing the range scanned to acquire each two-dimensional tomographic image by the number of A-scan images constituting each two-dimensional tomographic image, and on a value obtained by dividing the time required to acquire each two-dimensional tomographic image by the number of A-scan images constituting it.
  12. The image processing apparatus according to any one of claims 1 to 11, wherein the detection means detects the amount of movement of the eye to be examined in the planar direction based on a signal obtained when acquiring a two-dimensional tomographic image and a fundus image of the eye to be examined.
  13. The image processing apparatus according to claim 1, wherein the determining means determines the scanning speed such that the amount of movement of the eye to be examined in at least one of the depth direction and the planar direction does not exceed the resolution at the time of acquiring the two-dimensional tomographic image.
  14. The image processing apparatus according to claim 1, wherein the detection means detects the amount of movement of the eye to be examined in the depth direction based on the intensity of reflected light from the eye to be examined irradiated with light via scanning means, and detects the amount of movement of the eye to be examined in the planar direction based on a feature region of a fundus image of the eye to be examined.
  15. The image processing apparatus according to any one of claims 1 to 14, wherein the detection means detects, as the amount of movement of the eye to be examined, an amount of movement exceeding the resolution in the depth direction corresponding to the wavelength of a light source.
  16. The image processing apparatus according to claim 15, further comprising recording means for recording the time of detection when the detection means detects, as the amount of movement of the eye to be examined, an amount of movement exceeding the resolution in the depth direction corresponding to the wavelength of the light source.
  17. An image processing apparatus for processing OCT tomographic images of an eyeball, comprising:
    detecting means for detecting the amount of movement of the eyeball in the depth direction during the capture of one OCT tomographic image, and for detecting, using a captured fundus image, the amount of movement of the fundus in the planar direction during the capture of that OCT tomographic image;
    selecting means for selecting, from among a plurality of OCT tomographic images of the eyeball captured for the same cross section, the OCT tomographic image for which the amount of movement of the eyeball in the depth direction and the amount of movement of the fundus in the planar direction detected by the detecting means during capture are smallest, as a reference OCT tomographic image in which distortion is smaller than in the other OCT tomographic images;
    alignment means for aligning the OCT tomographic images; and
    generating means for selecting a scanning line of the reference OCT tomographic image captured when an amount of movement exceeding the lateral resolution or the depth resolution was detected and a scanning line of an OCT tomographic image other than the reference OCT tomographic image in the same cross section, and for generating an OCT tomographic image by performing an averaging process between the selected scanning lines.
  18. The image processing apparatus according to claim 17, wherein the image processing apparatus is connected to a tomographic image capturing apparatus that captures the OCT tomographic images and to a fundus image capturing apparatus that captures the fundus image, and processes the OCT tomographic images captured by the tomographic image capturing apparatus.
  19. A program for causing a computer to function as each means of the image processing apparatus according to any one of claims 1 to 18.
  20. An image processing method comprising:
    a step of detecting the amount of movement of an eye to be examined;
    a step of determining a scanning speed such that the larger the detected amount of movement, the higher the scanning speed at which each of a plurality of two-dimensional tomographic images of the eye to be examined is acquired;
    a step of obtaining the plurality of two-dimensional tomographic images of the eye to be examined at the determined scanning speed; and
    a step of selecting, from the plurality of acquired two-dimensional tomographic images, a two-dimensional tomographic image in which in-plane distortion generated based on the movement of the eye to be examined is smaller than in the other two-dimensional tomographic images.
  21. An image processing method for processing OCT tomographic images of an eyeball, comprising:
    a detecting step of detecting the amount of movement of the eyeball in the depth direction during the capture of one OCT tomographic image, and of detecting, using a captured fundus image, the amount of movement of the fundus in the planar direction during the capture of that OCT tomographic image;
    a selecting step of selecting, from among a plurality of OCT tomographic images of the eyeball captured for the same cross section, the OCT tomographic image for which the amount of movement of the eyeball in the depth direction and the amount of movement of the fundus in the planar direction detected in the detecting step during capture are smallest, as a reference OCT tomographic image in which distortion is smaller than in the other OCT tomographic images;
    a step of aligning the OCT tomographic images; and
    a generating step of selecting a scanning line of the reference OCT tomographic image captured when an amount of movement exceeding the lateral resolution or the depth resolution was detected and a scanning line of an OCT tomographic image other than the reference OCT tomographic image in the same cross section, and of generating an OCT tomographic image by performing an averaging process between the selected scanning lines.
JP2012137263A 2012-06-18 2012-06-18 Image processing apparatus, image processing method, and program Active JP5631361B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012137263A JP5631361B2 (en) 2012-06-18 2012-06-18 Image processing apparatus, image processing method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2012137263A JP5631361B2 (en) 2012-06-18 2012-06-18 Image processing apparatus, image processing method, and program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2009278945 Division 2009-12-08

Publications (3)

Publication Number Publication Date
JP2012176291A JP2012176291A (en) 2012-09-13
JP2012176291A5 JP2012176291A5 (en) 2013-02-21
JP5631361B2 true JP5631361B2 (en) 2014-11-26

Family

ID=46978554

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012137263A Active JP5631361B2 (en) 2012-06-18 2012-06-18 Image processing apparatus, image processing method, and program

Country Status (1)

Country Link
JP (1) JP5631361B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180158182A1 (en) * 2015-04-29 2018-06-07 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Image enhancement using virtual averaging
JP6486427B2 (en) * 2017-08-25 2019-03-20 キヤノン株式会社 Optical coherence tomography apparatus and control method thereof

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001289781A (en) * 2000-04-07 2001-10-19 Japan Science & Technology Corp Light wave vertical cross section tomography observation apparatus
JP4822969B2 (en) * 2006-07-27 2011-11-24 株式会社ニデック Ophthalmic imaging equipment
JP5089940B2 (en) * 2006-08-29 2012-12-05 株式会社トプコン Eye movement measuring device, eye movement measuring method, and eye movement measuring program
JP4921201B2 (en) * 2007-02-23 2012-04-25 株式会社トプコン Optical image measurement device and program for controlling optical image measurement device
JP5523658B2 (en) * 2007-03-23 2014-06-18 株式会社トプコン Optical image measuring device
JP5448353B2 (en) * 2007-05-02 2014-03-19 キヤノン株式会社 Image forming method using optical coherence tomography and optical coherence tomography apparatus
JP2010012109A (en) * 2008-07-04 2010-01-21 Nidek Co Ltd Ocular fundus photographic apparatus
JP5340693B2 (en) * 2008-11-05 2013-11-13 株式会社ニデック Ophthalmic imaging equipment
JP5543171B2 (en) * 2009-10-27 2014-07-09 株式会社トプコン Optical image measuring device



Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20121210

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20121228

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20140407

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140605

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20140908

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20141007

R151 Written notification of patent or utility model registration

Ref document number: 5631361

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151