US20130195340A1 - Image processing system, processing method, and storage medium - Google Patents
- Publication number
- US20130195340A1 (application US 13/748,766)
- Authority
- US
- United States
- Prior art keywords
- image
- analysis
- tomographic image
- feature amount
- retinal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06K9/00617—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/64—Analysis of geometric attributes of convexity or concavity
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Definitions
- the present invention relates to an image processing system, processing method, and storage medium.
- a tomography apparatus using OCT (Optical Coherence Tomography), which utilizes interference caused by low-coherence light, is known.
- Imaging by the tomography apparatus is receiving a lot of attention since it is a technique helpful to give more adequate diagnoses of diseases.
- a TD-OCT (Time Domain OCT) measures light interfering with backscattered light of a signal arm by scanning the delay of a reference arm, thus obtaining depth resolution information.
- an SD-OCT (Spectral Domain OCT) and an SS-OCT (Swept Source OCT, which uses a single-channel photodetector with a fast wavelength-swept light source) are also known.
- Japanese Patent Laid-Open No. 2008-073099 discloses a technique for detecting boundaries of respective layers of a retina from a tomographic image and measuring thicknesses of the layers based on the detection result using a computer, so as to quantitatively measure the shape change of the retina.
- the present invention has been made in consideration of the aforementioned problems, and provides a technique for displaying a shape feature amount of an eye to be examined as a diagnosis target together with an index used as a criterion as to whether or not the eye suffers a disease.
- an image processing system comprising: an analysis unit configured to obtain information indicating a degree of curvature of a retina from a tomographic image of an eye to be examined; and an obtaining unit configured to obtain a category of the eye to be examined based on an analysis result.
- a shape feature amount of an eye to be examined can be displayed together with an index used as a criterion as to whether or not the eye to be examined suffers a disease.
- FIG. 1 is a block diagram showing an example of the arrangement of an image processing system 10 according to one embodiment of the present invention
- FIG. 2 is a view showing an example of a tomographic image capturing screen 60 ;
- FIGS. 3A and 3B are flowcharts showing an example of the sequence of processing of an image processing apparatus 30 shown in FIG. 1 ;
- FIG. 4 is a view for explaining an overview of three-dimensional shape analysis processing
- FIGS. 5A to 5C are views for explaining an overview of three-dimensional shape analysis processing
- FIG. 6 is a view showing an example of a tomographic image observation screen 80 ;
- FIGS. 7A to 7C are views showing an example of respective components of the tomographic image observation screen 80 ;
- FIGS. 9A and 9B are views showing an example of respective components of the tomographic image observation screen 80 ;
- FIG. 10 is a view showing an example of the tomographic image observation screen 80 ;
- FIG. 11 is a view showing an example of respective components of the tomographic image observation screen 80 ;
- FIG. 12 is a view showing an example of the tomographic image observation screen 80 ;
- FIG. 13 is a view showing an example of a tomographic image observation screen 400 ;
- FIG. 14 is a view showing an example of respective components of the tomographic image observation screen 400 ;
- FIG. 15 is a view showing an example of the tomographic image observation screen 400 ;
- FIG. 16 is a view showing an example of respective components of the tomographic image observation screen 400 ;
- FIG. 17 is a block diagram showing an example of the arrangement of an image processing system 10 according to the third embodiment.
- FIG. 18 is a flowchart showing an example of the sequence of processing of an image processing apparatus 30 according to the third embodiment.
- FIGS. 19A and 19B are views showing an overview of a method of obtaining feature amounts which represent shape features of retinal layers
- FIGS. 20A and 20B are views showing an overview of a method of obtaining feature amounts which represent shape features of retinal layers
- FIG. 21 is a view showing an example of the tomographic image observation screen 80 ;
- FIG. 22 is a block diagram showing an example of the arrangement of an image processing system 10 according to the fourth embodiment.
- FIG. 23 is a view showing an example of a tomographic image observation screen 900 .
- FIG. 1 is a block diagram showing an example of the arrangement of an image processing system 10 according to an embodiment of the present invention.
- the image processing system 10 includes an image processing apparatus 30 , tomography apparatus 20 , fundus image capturing device 51 , external storage device 52 , display device 53 , and input device 54 .
- the tomography apparatus 20 is implemented by, for example, an SD-OCT or SS-OCT, and captures a tomography image indicating a three-dimensional shape of a fundus using an OCT using interference caused by low coherent light.
- the tomography apparatus 20 includes a galvanometer mirror 21 , driving control unit 22 , parameter setting unit 23 , vision fixation lamp 24 , and coherence gate stage 25 .
- the galvanometer mirror 21 has a function of two-dimensionally scanning measurement light (irradiation light) on a fundus, and defines an imaging range of a fundus by the tomography apparatus 20 .
- the galvanometer mirror 21 includes, for example, two mirrors, that is, X- and Y-scan mirrors, and scans measurement light on a plane orthogonal to an optical axis with respect to a fundus of an eye to be examined.
- the driving control unit 22 controls a driving (scanning) range and speed of the galvanometer mirror 21 .
- the parameter setting unit 23 sets various parameters used in driving control of the galvanometer mirror 21 by the driving control unit 22 . These parameters decide imaging conditions of a tomographic image by the tomography apparatus 20 . For example, scan positions of scan lines, the number of scan lines, the number of images to be captured, and the like are decided. In addition, a position of the vision fixation lamp, scanning range, and scanning pattern, coherence gate position, and the like are also set. Note that the parameters are set based on an instruction from the image processing apparatus 30 .
- the vision fixation lamp 24 suppresses movement of a viewpoint by placing a bright spot in a visual field so as to prevent an eyeball motion during imaging of a tomographic image.
- the vision fixation lamp 24 includes an indication unit 24 a and lens 24 b .
- the indication unit 24 a is realized by disposing a plurality of light-emitting diodes (LEDs) in a matrix. The lighting positions of the LEDs are changed in correspondence with the portion to be imaged under the control of the driving control unit 22 .
- Light from the indication unit 24 a is guided to the eye to be examined through the lens 24 b .
- Light emerging from the indication unit 24 a has a wavelength of, for example, 520 nm, and a desired pattern is indicated (lighted) under the control of the driving control unit 22 .
- the coherence gate stage 25 is arranged to cope with, for example, a different ophthalmic axis length of an eye to be examined. More specifically, an optical path length of reference light (to be interfered with measurement light) is controlled to adjust an imaging position along a depth direction (optical axis direction) of a fundus. Thus, optical path lengths of reference light and measurement light can be matched even for an eye to be examined having a different ophthalmic axis length. Note that the coherence gate stage 25 is controlled by the driving control unit 22 .
- a coherence gate indicates a position where optical distances of measurement light and reference light are equal to each other in the tomography apparatus 20 .
- depending on the coherence gate position, imaging is switched between the retinal-layer side and, in the EDI (Enhanced Depth Imaging) method, the side deeper than the retinal layers.
- the coherence gate position is set on the side deeper than the retinal layers.
- the fundus image capturing device 51 is implemented by, for example, a fundus camera, SLO (Scanning Laser Ophthalmoscope), or the like, and captures a (two-dimensional) fundus image of a fundus.
- the external storage device 52 is implemented by, for example, an HDD (Hard Disk Drive) or the like, and stores various data.
- the external storage device 52 holds captured image data, imaging parameters, image analysis parameters, and parameters set by an operator in association with information (a patient name, age, gender, etc.) related to an eye to be examined.
- the input device 54 is implemented by, for example, a mouse, keyboard, touch operation screen, and the like, and allows an operator to input various instructions. For example, the operator inputs various instructions, settings, and the like for the image processing apparatus 30 , tomography apparatus 20 , and fundus image capturing device 51 via the input device 54 .
- the display device 53 is implemented by, for example, a liquid crystal display or the like, and displays (presents) various kinds of information for the operator.
- the image processing apparatus 30 is implemented by, for example, a personal computer or the like, and processes various images. That is, the image processing apparatus 30 incorporates a computer.
- the computer includes a main control unit such as a CPU (Central Processing Unit), storage units such as a ROM (Read Only Memory) and RAM (Random Access Memory), and the like.
- the image processing apparatus 30 includes, as its functional units, an image obtaining unit 31 , storage unit 32 , image processing unit 33 , instruction unit 34 , and display control unit 35 .
- the units other than the storage unit 32 are implemented, for example, when the CPU reads out and executes a program stored in the ROM or the like.
- the image processing unit 33 includes a detection unit 41 , determination unit 43 , retinal layer analysis unit 44 , and alignment unit 47 .
- the image obtaining unit 31 obtains a tomographic image captured by the tomography apparatus 20 and a fundus image captured by the fundus image capturing device 51 , and stores these images in the storage unit 32 .
- the storage unit 32 is implemented by, for example, the ROM, RAM, and the like.
- the detection unit 41 detects retinal layers from the tomographic image stored in the storage unit 32 .
- the retinal layer analysis unit 44 analyzes the retinal layers targeted for analysis.
- the retinal layer analysis unit 44 includes an analysis unit 42 , analysis result generation unit 45 , and shape data generation unit 46 .
- the determination unit 43 determines whether or not three-dimensional shape analysis processing of retinal layers is to be executed according to an imaging mode (a myopia analysis imaging mode and non-myopia analysis imaging mode). Note that the three-dimensional shape analysis indicates processing for generating three-dimensional shape data, and executing shape analysis of retinal layers using the shape data.
- the analysis unit 42 applies analysis processing to retinal layers to be analyzed based on the determination result of the determination unit 43 .
- the analysis result generation unit 45 generates various data required to present the analysis result (information indicating states of retinal layers).
- the shape data generation unit 46 aligns a plurality of tomographic images obtained by imaging, thereby generating three-dimensional shape data. That is, the three-dimensional shape data is generated based on layer information of retinal layers.
- the alignment unit 47 performs alignment between the analysis result and fundus image, that between fundus images, and the like.
- the instruction unit 34 issues instructions, such as imaging parameters, according to the imaging mode set in the tomography apparatus 20 .
- the functional units arranged in the aforementioned apparatuses need not always be implemented, as shown in FIG. 1 , and all or some of these units need only be implemented in any apparatus in the system.
- the external storage device 52 , display device 53 , and input device 54 are arranged outside the image processing apparatus 30 .
- these devices may be arranged inside the image processing apparatus 30 .
- the image processing apparatus and tomography apparatus 20 may be integrated.
- an example of the tomographic image capturing screen 60 displayed on the display device 53 shown in FIG. 1 will be described below with reference to FIG. 2 . Note that this screen is displayed when a tomographic image is to be captured.
- the tomographic image capturing screen 60 includes a tomographic image display field 61 , fundus image display field 62 , combo box 63 used to set an imaging mode, and capture button 64 used to instruct to capture an image.
- reference numeral 65 in the fundus image display field 62 denotes a mark which indicates an imaging region, and is superimposed on a fundus image.
- Reference symbol M denotes a macular region; D, an optic papilla; and V, a blood vessel.
- the combo box 63 allows the user to set, for example, an imaging mode for myopia analysis (of a macular region) or one for non-myopia analysis (of the macular region). That is, the combo box 63 has an imaging mode selection function. In this case, the imaging mode for myopia analysis is set.
- the tomographic image display field 61 displays a tomographic image of a fundus.
- Reference numeral L 1 denotes an inner limiting membrane (ILM); L 2 , a boundary between a nerve fiber layer (NFL) and ganglion cell layer (GCL); and L 3 , an inner segment outer segment junction (ISOS) of a photoreceptor cell.
- reference numeral L 4 denotes a pigmented retinal layer (RPE); and L 5 , a Bruch's membrane (BM).
- the aforementioned detection unit 41 detects any of boundaries of L 1 to L 5 .
- an example of the sequence of processing of the image processing apparatus 30 shown in FIG. 1 will be described below with reference to FIGS. 3A and 3B .
- the sequence of the overall processing at the time of capturing of a tomographic image will be described first with reference to FIG. 3A .
- the image processing apparatus 30 externally obtains a patient identification number as information required to identify an eye to be examined. Then, the image processing apparatus 30 obtains information related to the eye to be examined, which information is held by the external storage device 52 , based on the patient identification number, and stores the obtained information in the storage unit 32 .
- the image processing apparatus 30 instructs the image obtaining unit 31 to obtain a fundus image from the fundus image capturing device 51 and a tomographic image from the tomography apparatus 20 as pre-scan images used to confirm an imaging position at the imaging timing.
- the image processing apparatus 30 sets an imaging mode.
- the imaging mode is set based on a choice of the operator from the combo box 63 used to set the imaging mode, as described in FIG. 2 . A case will be described below wherein imaging is to be done in the imaging mode for myopia analysis.
- the image processing apparatus 30 instructs the instruction unit 34 to issue an imaging parameter instruction according to the imaging mode set from the combo box 63 to the tomography apparatus 20 .
- the tomography apparatus 20 controls the parameter setting unit 23 to set the imaging parameters according to the instruction. More specifically, the image processing apparatus 30 instructs to set at least one of the position of the vision fixation lamp, the scanning range, the scanning pattern, and the coherence gate position.
- the position of the vision fixation lamp 24 is set to be able to capture an image of the center of a macular region.
- the image processing apparatus 30 instructs the driving control unit 22 to control the light-emitting diodes of the indication unit 24 a according to the imaging parameters.
- the position of the vision fixation lamp 24 may be controlled to set the center between a macular region and the optic papilla as the imaging center. Such control is executed so as to capture an image of a region including the macular region and execute shape analysis of retinal layers in a myopic eye.
- a range of 9 to 15 mm is set as the limit values of the imaging range of the apparatus. These values are merely an example, and may be changed as needed according to the specifications of the apparatus. Note that the imaging range is set to a broad region so as to detect shape change locations without omission.
- a raster scan or radial scan is set so as to be able to capture a three-dimensional shape of retinal layers.
- the gate position is set so as to allow imaging based on the EDI method.
- in a highly myopic eye, the curvature of the retinal layers becomes strong, and the retinal layers may extend beyond the upper edge of a tomographic image. Retinal layers beyond the upper edge are folded back and appear in the tomographic image, and this parameter setting is required to prevent that.
- when an SS-OCT with a large penetration depth is used as the tomography apparatus 20 , a satisfactory tomographic image can be obtained even if the position of the retinal layers is distant from the gate position.
- imaging based on the EDI method is not always done.
- the image processing apparatus 30 instructs the instruction unit 34 to issue an imaging instruction of the eye to be examined to the tomography apparatus 20 .
- This instruction is issued, for example, when the operator presses the capture button 64 of the tomographic image capturing screen 60 via the input device 54 .
- the tomography apparatus 20 controls the driving control unit 22 based on the imaging parameters set by the parameter setting unit 23 .
- the galvanometer mirror 21 is activated to capture a tomographic image.
- the galvanometer mirror 21 includes an X-scanner for a horizontal direction, and a Y scanner for a vertical direction. For this reason, by changing the directions of these scanners, respectively, a tomographic image can be captured along the horizontal direction (X) and vertical direction (Y) on an apparatus coordinate system. By simultaneously changing the directions of these scanners, a scan can be made in a synthesized direction of the horizontal and vertical directions. Hence, imaging along an arbitrary direction on a fundus plane can be done. At this time, the image processing apparatus 30 instructs the display control unit 35 to display the captured tomographic image on the display device 53 . Thus, the operator can confirm the imaging result.
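- the synthesized X/Y scanner motion described above can be sketched as follows; this is a minimal illustration in Python, where the function name, the units (mm from the scan center), and the sample count are assumptions for illustration, not part of the apparatus:

```python
import numpy as np

def scan_line_commands(angle_deg, length_mm, n_samples=512):
    """Position commands for the X- and Y-scan mirrors to sweep a line
    of the given length along an arbitrary direction on the fundus
    plane (hypothetical units: mm from the scan center)."""
    theta = np.deg2rad(angle_deg)
    t = np.linspace(-length_mm / 2, length_mm / 2, n_samples)
    # angle 0: only the X-scanner moves (horizontal scan);
    # angle 90: only the Y-scanner moves (vertical scan);
    # any other angle is a synthesized motion of both scanners.
    return t * np.cos(theta), t * np.sin(theta)
```

- for example, `scan_line_commands(45.0, 10.0)` drives both mirrors equally, scanning a 10 mm line at 45° on the fundus plane.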
- the image processing apparatus 30 instructs the image processing unit 33 to detect/analyze retinal layers from the tomographic image stored in the storage unit 32 . That is, the image processing apparatus 30 applies detection/analysis processing of retinal layers to the tomographic image captured in the process of step S 105 .
- the image processing apparatus 30 determines whether or not to end imaging of tomographic images. This determination is made based on an instruction from the operator via the input device 54 . That is, the image processing apparatus 30 determines whether or not to end imaging of tomographic images based on whether or not the operator inputs an end instruction.
- the image processing apparatus 30 ends this processing. On the other hand, when imaging is to be continued without ending processing, the image processing apparatus 30 executes processes in step S 102 and subsequent steps.
- the image processing apparatus 30 saves the imaging parameters changed according to such modifications in the external storage device 52 upon ending imaging.
- a confirmation dialog as to whether or not to save the changed parameters may be displayed to issue an inquiry about whether or not to change the imaging parameters to the operator.
- the detection/analysis processing of retinal layers in step S 106 of FIG. 3A will be described below with reference to FIG. 3B .
- the image processing apparatus 30 instructs the detection unit 41 to detect retinal layers from a tomographic image.
- This processing will be practically described below using a tomographic image (display field 61 ) shown in FIG. 2 .
- the detection unit 41 applies a median filter and Sobel filter to the tomographic image to generate images (to be respectively referred to as a median image and Sobel image hereinafter).
- the detection unit 41 generates profiles for each A-scan from the generated median image and Sobel image.
- a luminance value profile is generated from the median image, and a gradient profile is generated from the Sobel image.
- the detection unit 41 detects peaks in the profile generated from the Sobel image.
- the detection unit 41 refers to the profile of the median image corresponding to portions before and after the detected peaks and those between adjacent peaks, thus detecting boundaries of respective regions of the retinal layers. That is, L 1 (ILM), L 2 (boundary between the NFL and GCL), L 3 (ISOS), L 4 (RPE), L 5 (BM), and the like are detected. Note that the following description of this embodiment will be given under the assumption that an analysis target layer is the RPE.
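- the median/Sobel profile-based detection above can be sketched as follows; this is a minimal sketch assuming NumPy/SciPy, with hypothetical parameter choices (filter size, peak-height threshold) and a simplified peak-to-boundary assignment rather than the patent's full boundary logic:

```python
import numpy as np
from scipy.ndimage import median_filter, sobel
from scipy.signal import find_peaks

def detect_layer_boundaries(bscan, n_boundaries=5, min_peak_height=0.2):
    """Per-A-scan boundary detection sketch.

    bscan: 2-D float array (depth x width), one tomographic B-scan.
    Returns an (n_boundaries x width) array of depth indices
    (NaN where fewer peaks were found).
    """
    # Median image suppresses speckle noise; Sobel image emphasizes
    # luminance gradients along the depth (A-scan) direction.
    median_img = median_filter(bscan, size=3)
    sobel_img = sobel(median_img, axis=0)

    depth, width = bscan.shape
    boundaries = np.full((n_boundaries, width), np.nan)
    for x in range(width):
        grad_profile = np.abs(sobel_img[:, x])
        if grad_profile.max() == 0:
            continue  # flat A-scan: no boundaries
        # Peaks in the gradient profile are candidate layer boundaries.
        peaks, _ = find_peaks(
            grad_profile, height=min_peak_height * grad_profile.max())
        # Keep the strongest candidates, ordered by depth (ILM first).
        strongest = sorted(sorted(peaks,
                                  key=lambda p: grad_profile[p],
                                  reverse=True)[:n_boundaries])
        boundaries[:len(strongest), x] = strongest
    return boundaries
```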
- the image processing apparatus 30 controls the determination unit 43 to determine whether or not to execute three-dimensional shape analysis of the retinal layer. More specifically, if imaging is done in the imaging mode for myopia analysis, it is determined that the three-dimensional shape analysis of the retinal layer is to be executed. If imaging is done without using the myopia analysis mode (in the imaging mode for non-myopia analysis), it is determined that the three-dimensional shape analysis is not to be executed. In this case, analysis based on the detection result of the detection unit 41 is performed (analysis without using the three-dimensional shape data). Note that even in the imaging mode for myopia analysis, it is determined based on a tomographic image whether or not a macular region is included in the tomographic image. If no macular region is included in the tomographic image (for example, only an optic papilla is included), it may be determined that (three-dimensional) shape analysis of the retinal layer is not to be executed.
- the present invention is not limited to this. That is, the image processing apparatus 30 need only execute shape analysis of a retinal layer upon reception of a tomographic image of a macular region using a scanning pattern required to obtain a three-dimensional shape, and such integrated system need not always be adopted. For this reason, the image processing apparatus 30 can execute shape analysis of a retinal layer for a tomographic image captured by an apparatus other than the tomography apparatus 20 based on information at the time of imaging. However, when shape analysis is not required, the shape analysis processing may be skipped.
- the image processing apparatus 30 instructs the shape data generation unit 46 to generate three-dimensional shape data.
- the three-dimensional shape data is generated to execute shape analysis based on the detection result of the retinal layer in the process of step S 201 .
- the scanning pattern at the time of imaging is, for example, a raster scan
- a plurality of adjacent tomographic images are aligned.
- an evaluation function which represents a similarity between two tomographic images is defined in advance, and tomographic images are deformed to maximize this evaluation function value.
- as the evaluation function, for example, a method of evaluating pixel values (for example, a method of making evaluation using correlation coefficients) may be used.
- as the deformation processing, translation and rotation using an affine transformation may be used.
- after completion of the alignment processing of the plurality of tomographic images, the shape data generation unit 46 generates three-dimensional shape data of the layer targeted for shape analysis.
- the three-dimensional shape data can be generated by preparing, for example, 512 × 512 × 500 voxel data, and assigning labels to positions corresponding to the coordinate values of the layer data of the detected retinal layer.
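- the two steps just described can be sketched as follows: choosing the depth shift that maximizes a correlation-coefficient evaluation function between adjacent B-scans, then labeling voxels at the detected layer coordinates. The function names, the integer-shift search (in place of a full affine translation/rotation search), and the default volume size are simplifying assumptions:

```python
import numpy as np

def align_adjacent_bscans(ref, mov, max_shift=20):
    """Find the integer depth shift of `mov` that maximizes the
    correlation coefficient with `ref` (the evaluation function in
    the text; a fuller version would also search translation and
    rotation via an affine transformation)."""
    best_shift, best_corr = 0, -1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(mov, s, axis=0)
        corr = np.corrcoef(ref.ravel(), shifted.ravel())[0, 1]
        if corr > best_corr:
            best_corr, best_shift = corr, s
    return best_shift

def label_layer_voxels(layer_depths, shape=(500, 512, 512)):
    """Build three-dimensional shape data by assigning labels at the
    detected layer coordinates (the text's 512 x 512 x 500 voxel data,
    stored here depth-first as (Z, B-scan, width)).

    layer_depths: (n_bscans x width) array of depth indices of the
    analysis-target layer (e.g. the RPE); NaN where undetected.
    """
    volume = np.zeros(shape, dtype=np.uint8)
    for y, row in enumerate(layer_depths):
        for x, z in enumerate(row):
            if not np.isnan(z):
                volume[int(z), y, x] = 1  # label the layer position
    return volume
```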
- in the case of the radial scan, the shape data generation unit 46 aligns tomographic images, and then generates three-dimensional shape data in the same manner as described above.
- alignment in the depth direction (the Z direction of the tomographic image (display field 61 ) shown in FIG. 2 ) is made using only region information near the centers of adjacent tomographic images. This is because, in the case of the radial scan, adjacent tomographic images include coarse information at their two ends compared to the vicinity of their centers, the shape changes there are large, and such information is therefore not used as alignment information.
- as the alignment method, the aforementioned method can be used.
- after completion of the alignment processing, the shape data generation unit 46 generates three-dimensional shape data of the layer targeted for shape analysis.
- 512 × 512 × 500 voxel data are prepared, and the layer data targeted for shape analysis in the respective tomographic images are evenly circularly rotated and expanded.
- interpolation processing is executed between adjacent shape data in the circumferential direction.
- shape data at non-captured positions are generated.
- processing such as linear interpolation or nonlinear interpolation may be applied.
- the three-dimensional shape data can be generated by assigning labels to positions corresponding to coordinate values obtained by interpolating between layer data of the detected retinal layer.
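- for the radial-scan case, the circumferential interpolation between adjacent scan lines can be sketched as follows (linear interpolation only; the data layout of one depth profile per acquired angle is an assumption for illustration):

```python
import numpy as np

def interpolate_radial_depths(radial_depths, n_out_angles):
    """Fill in layer depths at non-captured angles by linear
    interpolation between adjacent radial scan lines.

    radial_depths: (n_scans x n_radius) array; scan i is assumed to be
    acquired at angle i * (180 / n_scans) degrees.
    Returns an (n_out_angles x n_radius) array covering 0-180 degrees.
    """
    n_scans, n_radius = radial_depths.shape
    in_angles = np.arange(n_scans) * (180.0 / n_scans)
    out_angles = np.arange(n_out_angles) * (180.0 / n_out_angles)
    out = np.empty((n_out_angles, n_radius))
    for r in range(n_radius):
        # period=180 makes the interpolation wrap circularly, so the
        # gap between the last scan and the first is also filled.
        out[:, r] = np.interp(out_angles, in_angles,
                              radial_depths[:, r], period=180.0)
    return out
```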
- voxel data described above are merely an example, and can be changed as needed depending on the number of A-scans at the time of imaging and the memory size of the apparatus which executes the processing. Since large voxel data have a high resolution, they can accurately express shape data. However, such voxel data suffers a low execution speed, and have a large memory consumption amount. On the other hand, although small voxel data have a low resolution, they can assure a high execution speed and have a small memory consumption amount.
- the image processing apparatus 30 instructs the analysis unit 42 to execute the three-dimensional shape analysis of the retinal layer.
- as the shape analysis method, a method of measuring the area and volume of the retinal layer will be exemplified below.
- FIG. 4 illustrates three-dimensional shape data (RPE), measurement surface (MS), area (Area), and volume (Volume).
- a flat (planar) measurement surface (MS) is prepared at a place of layer data located at the deepest portion in the Z direction (optical axis direction). Then, the measurement surface (MS) is moved at given intervals in a shallow direction (an origin direction of the Z-axis) from there. When the measurement surface (MS) is moved from the deep portion of the layer in the shallow direction, it traverses a boundary line of the RPE.
- An area (Area) is obtained by measuring an internal planar region bounded by the measurement surface (MS) and the boundary line with the RPE. More specifically, the area (Area) is obtained by measuring an area of an intersection region between the measurement surface (MS) and boundary line with the RPE. In this manner, the area (Area) is a cross-sectional area of the three-dimensional retinal layer shape data. Upon measuring a cross-sectional area at a position of a reference portion, when a curvature of the retinal layer is strong, a small area is obtained; when the curvature of the retinal layer is moderate, a large area is obtained.
- a volume (Volume) can be obtained by measuring a whole internal region bounded by the measurement surface (MS) and the boundary line with the RPE using the measurement surface (MS) used in the measurement of the area (Area).
- the reference position can be set at a Bruch's membrane opening position with reference to a portion.
- a given height such as 100 µm or 500 µm from the deepest position of the RPE may be used as a reference. Note that when the number of voxels included in the region to be measured is counted upon measuring the area or volume, the area or volume is calculated by multiplying the number of voxels by the physical size per voxel.
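- the measurement with the flat measurement surface (MS) can be sketched as follows, assuming the analysis-target layer is represented as a depth map and using a hypothetical physical voxel size; the conversion from voxel counts to physical area/volume follows the note above:

```python
import numpy as np

def area_and_volume_below_surface(layer_depth_map, surface_z,
                                  voxel_size_mm=(0.01, 0.01, 0.002)):
    """Area and volume of the region bounded by a flat measurement
    surface (MS) and the layer boundary (e.g. the RPE).

    layer_depth_map: (ny x nx) array of layer depth indices; larger
    values are deeper along the Z (optical axis) direction.
    surface_z: depth index at which the MS currently sits.
    voxel_size_mm: hypothetical physical size per voxel (x, y, z).
    """
    dx, dy, dz = voxel_size_mm
    # Positions where the layer lies deeper than the MS form the
    # intersection region bounded by the MS and the layer boundary.
    inside = layer_depth_map > surface_z
    area_mm2 = inside.sum() * dx * dy
    # Volume: sum the column heights between the MS and the layer,
    # then multiply by the physical size per voxel.
    volume_mm3 = (layer_depth_map - surface_z)[inside].sum() * dx * dy * dz
    return area_mm2, volume_mm3
```

- moving `surface_z` from the deepest layer position toward the origin of the Z-axis reproduces the sweep of the MS described above: a strongly curved layer yields a small area near the bottom, a moderately curved one a large area.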
- the image processing apparatus 30 instructs the analysis result generation unit 45 to generate an analysis result (for example, a map, graph, or numerical value information) based on the three-dimensional shape analysis result.
- a contour line map is generated as the three-dimensional shape analysis result.
- the contour line map is used when the measurement results of the area and volume are to be displayed.
- FIG. 5A shows an example of a contour line map 71 .
- the contour line map 71 is an overall contour line map.
- Reference numeral 72 denotes contour lines drawn at given intervals; and 73 , a portion located at the deepest portion in the Z direction of the three-dimensional retinal layer shape data.
- a lookup table for a contour line map is prepared, and the map is color-coded according to the volumes with reference to the table.
- the lookup table for the contour line map may be prepared according to the volumes.
- The contour lines 72 are drawn at given intervals with the portion 73 in FIG. 5A as a bottom, and the colors of the map are set according to the volumes. For this reason, when the height (depth) of the measurement surface is changed from the portion located at the deepest position in the Z direction, the operator can recognize how the volume increases. More specifically, when 1 mm³ is color-coded as blue and 2 mm³ as yellow, the operator can recognize the relationship between the shape and volume by checking whether blue corresponds to a height (depth) of 100 µm or 300 µm of the measurement surface from the portion located at the deepest position in the Z direction. Therefore, the operator can recognize the overall volume of the retinal layer to be measured by confirming the color of the outermost contour of the map.
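The color-coding described above amounts to a small lookup table from volume to display color; the thresholds and color names in this sketch are illustrative placeholders, not values prescribed by the apparatus:

```python
# hypothetical lookup table: volume thresholds (mm^3) mapped to display colors
VOLUME_LUT = [(1.0, "blue"), (2.0, "yellow"), (3.0, "red")]

def contour_color(volume_mm3, lut=VOLUME_LUT):
    """Return the color of the first lookup-table entry whose volume
    threshold is not exceeded; beyond the table, reuse the last color."""
    for threshold, color in lut:
        if volume_mm3 <= threshold:
            return color
    return lut[-1][1]
```

A table keyed on area or on height (depth), as mentioned below, would have the same shape with different threshold units.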
- the operator can confirm a volume value corresponding to a height (depth) by confirming a color near each internal contour line.
- the lookup table used upon setting colors of the contour line map may be prepared according to areas in place of the volumes.
- the lookup table may be prepared according to heights (depths) to the portion 73 located at the deepest position in the Z direction.
- numerical values may be displayed together on respective contour lines so as to allow the operator to understand the heights (depths) of the contour lines.
- An interval of the distance expressed by each contour line (for example, a 100-µm interval along the Z direction) is set.
- the contour line map may be either a color or grayscale map, but visibility is high in case of the color map.
- FIG. 5B shows an example of a contour line map 74 when the height (depth) of the measurement surface (MS) is changed.
- A case in which a curvature of the retinal layer is measured as the three-dimensional shape analysis will be described below using the tomographic image (display field 61) shown in FIG. 2.
- the abscissa is defined as an x-coordinate axis
- the ordinate is defined as a z-coordinate axis
- a curvature of a boundary line of the layer (RPE) as an analysis target is calculated.
- A curvature κ can be obtained by calculating, at respective points of the boundary line,

  κ = (d²z/dx²) / {1 + (dz/dx)²}^(3/2)
- The sign of the curvature κ reveals whether the shape is upward or downward convex, and the magnitude of the numerical value reveals the degree of curvature of the shape. For this reason, when upward convex is expressed by "+" and downward convex is expressed by "−", if a tomographic image includes a − region, a + region, and a − region as the signs of the curvature, the layer has a W-shape.
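The curvature computation and the sign-region test above can be sketched as follows; the finite-difference scheme and the sign convention (which axis direction counts as upward convex) are assumptions for illustration:

```python
def curvature_signs(z, dx=1.0):
    """Curvature k = z'' / (1 + z'^2)**1.5 at interior points of a boundary
    line z(x), via central differences, reduced to a run-length list of
    curvature signs. Sign conventions depend on the image coordinate
    system and are assumed here, not taken from the apparatus."""
    signs = []
    for i in range(1, len(z) - 1):
        dz = (z[i + 1] - z[i - 1]) / (2 * dx)          # first derivative
        ddz = (z[i + 1] - 2 * z[i] + z[i - 1]) / (dx * dx)  # second derivative
        k = ddz / (1 + dz * dz) ** 1.5
        s = "+" if k > 0 else "-" if k < 0 else "0"
        if s != "0" and (not signs or signs[-1] != s):
            signs.append(s)
    return signs  # e.g. a -, +, - pattern suggests a W-shaped layer
```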
- the present invention is not limited to such specific curvature calculation, and three-dimensional curvatures may be calculated from the three-dimensional shape data.
- the image processing apparatus 30 instructs the analysis result generation unit 45 to generate a curvature map based on the analysis result.
- FIG. 5C shows an example of the curvature map.
- In the curvature map, a portion having a strong curvature is expressed by a dark color, and a portion having a moderate curvature is expressed by a light color; that is, the color density is changed depending on the curvatures.
- the operator can recognize whether or not the retina shape is smooth and whether it is an upward or downward convex shape by checking the map.
- the image processing apparatus 30 instructs the display control unit 35 to display a tomographic image, the detection result of the layer (RPE) detected by the detection unit 41 , and various shape analysis results (map, graph, and numerical value information) generated by the analysis result generation unit 45 on the display device 53 .
- FIG. 6 shows an example of a tomographic image observation screen 80 displayed on the display device 53 shown in FIG. 1 .
- This screen is displayed after completion of the analysis of tomographic images (that is, it is displayed by the process of step S 206 ).
- the tomographic image observation screen 80 includes a tomographic image display section 91 including a tomographic image display field 81 , and a fundus image display section 94 including a fundus image display field 82 .
- the tomographic image observation screen 80 also includes a first analysis result display section 96 including a first analysis result 84 , and a second analysis result display section 98 including second analysis results 85 and 86 .
- the tomographic image display field 81 displays segmentation results (L 1 to L 5 ) obtained by detecting the respective layers of the retinal layers and the measurement surface (MS) which are superimposed on the captured tomographic image.
- the tomographic image display field 81 highlights the segmentation result of the retinal layer (RPE (L 4 ) in this embodiment) as an analysis target.
- a hatched region 81 a bounded by the measurement surface (MS) and the retinal layer (RPE (L 4 )) as an analysis target is a measurement target region of the area and volume.
- A color according to the volume measurement result is displayed with a predetermined transparency α.
- the same color to be set as that in the lookup table for the contour line map can be used.
- The transparency α is, for example, 0.5.
- a combo box 92 is provided to allow the operator to select whether the tomographic image is displayed at an OCT ratio or 1:1 ratio.
- The OCT ratio is a ratio expressed by the resolution in the horizontal direction (X direction) and that in the vertical direction (Y direction), which are obtained based on the number of A-scans at the time of imaging.
- The 1:1 ratio is a ratio which adjusts the physical size per pixel in the horizontal direction to that per pixel in the vertical direction, which are obtained based on the number of A-scans used to capture a given range (mm).
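A minimal sketch of the two display ratios, assuming hypothetical parameter names for the scan range and pixel counts (these are not the apparatus's actual interface):

```python
def pixel_aspect(scan_width_mm, num_ascans, axial_depth_mm, num_pixels_z, mode):
    """Return the horizontal:vertical display stretch factor for one pixel.

    'oct' mode keeps the raw acquisition resolution (one A-scan = one pixel
    column), so no correction is applied; '1:1' mode scales the horizontal
    axis so that a pixel covers the same physical size in both directions.
    """
    if mode == "oct":
        return 1.0
    px_h = scan_width_mm / num_ascans     # mm per pixel horizontally
    px_v = axial_depth_mm / num_pixels_z  # mm per pixel vertically
    return px_h / px_v
```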
- a combo box 93 is provided to allow the operator to switch a two-dimensional (2D)/three-dimensional (3D) display mode.
- In the 2D display mode, one slice of the tomographic image is displayed; in the 3D display mode, the three-dimensional shape of the retinal layers generated from the boundary line data of the retinal layers is displayed.
- a tomographic image shown in one of FIGS. 7A to 7C is displayed in the tomographic image display field 81 .
- FIG. 7A shows a mode when the RPE is displayed at the OCT ratio in the 3D display mode.
- the measurement surface (MS) is simultaneously displayed in the 3D display mode.
- check boxes 101 to 104 corresponding to the respective layers of the retina are displayed below the tomographic image display field 81 . More specifically, the check boxes corresponding to the ILM, RPE, BM, and MS are displayed, and the operator can switch display/non-display states of the respective layers using these check boxes.
- When the measurement surface (MS) is expressed by a plane, its transparency α assumes a value which is larger than 0 and smaller than 1. If the transparency were 1, then when the retinal layer shape and measurement surface are overlaid, the shape would be covered, and the three-dimensional shape of the retinal layers could not be recognized from the upper side.
- The measurement surface (MS) may be expressed by a grid pattern in place of the plane. In the case of the grid pattern, the transparency α of the measurement surface (MS) may be set to 1.
- As the color of the measurement surface (MS), a color according to the measurement value (area or volume) at the location of the measurement surface (MS) need only be selected with reference to the lookup table for the contour line map.
- the operator can move the position of the measurement surface (MS) via the input device 54 .
- The image processing apparatus 30 changes the contour line map shape in synchronism with that change, as described with reference to FIGS. 5A and 5B.
- a text box 105 is an item used to designate a numerical value.
- The operator inputs, to the text box 105 via the input device 54, a numerical value (such as 100 µm or 300 µm) indicating the height (depth) of the measurement surface (MS) from the portion at the deepest position in the Z direction.
- the measurement surface (MS) is moved to that position, and the contour line map is changed accordingly.
- the operator can simultaneously recognize the position in the three-dimensional shape and the contour line map at that time, and can also recognize a volume value and area value at that time.
- Alternatively, the operator may input a volume value (such as 1 mm³ or 2 mm³) in the text box 105.
- the operator can simultaneously recognize the position in the three-dimensional shape and the contour line map at that time, which correspond to that volume.
- the operator can also recognize a height (depth) of the measurement surface (MS) from the portion at the deepest position in the Z direction at that time.
- FIG. 7B shows a display mode when the measurement surface (MS) is set in a non-display state.
- Other display items are the same as those in FIG. 7A .
- the check box 102 is selected to display the RPE alone. In this manner, only the three-dimensional shape of a fundus can be displayed.
- FIG. 7B shows the 3D display mode of the RPE at the OCT ratio
- FIG. 7C shows a mode when the RPE is displayed at the 1:1 ratio in the 3D display mode. That is, the 1:1 ratio is selected at the combo box 92 .
- the RPE shape is displayed in the 2D/3D display mode.
- the present invention is not limited to this.
- When the Bruch's membrane (BM) is selected as an analysis target, the Bruch's membrane (BM) is displayed in the 2D/3D display mode.
- the position measured by the measurement surface may be schematically displayed on the tomographic image display field 81 .
- a display mode in this case will be described below with reference to FIGS. 8A to 8D .
- FIG. 8A shows a mode in which an object 110 indicating the position measured by the measurement surface in the currently displayed tomographic image is superimposed on the tomographic image.
- Reference symbol MS′ denotes a measurement surface (schematic measurement surface) on the object 110 .
- the three-dimensional shape of the retinal layer is rotated in the upper, lower, right, and left directions by an instruction input by the operator via the input device 54 . For this reason, in order to allow the operator to recognize the positional relationship between the measurement surface (MS) and retinal layer, the object 110 and schematic measurement surface MS′ present an index of the positional relationship.
- the displayed three-dimensional shape of the retinal layer is also changed in synchronism with that change.
- the region of the retinal layer as the analysis target is also changed, the first analysis result 84 and second analysis results 85 and 86 are changed in synchronism with that change.
- Some display modes of the object 110 will be exemplified below with reference to FIGS. 8B and 8C.
- tomographic images corresponding to respective section positions may be displayed.
- Tomographic images at vertical and horizontal positions which intersect the central position, in consideration of the three-dimensional shape, may be displayed.
- As shown in FIG. 8D, an abbreviation such as "S" or "I" indicating "superior" or "inferior" may be displayed.
- the fundus image display section 94 including the fundus image display field 82 shown in FIG. 6 will be described below.
- an imaging position and its scanning pattern mark 83 are superimposed on the fundus image.
- the fundus image display section 94 is provided with a combo box 95 which allows the operator to switch a display format of the fundus image. In this case, an SLO image is displayed as the fundus image.
- As the display formats of the fundus image, a case in which "SLO image + map" are simultaneously displayed and a case in which "fundus photo (second fundus image) + SLO image (first fundus image) + map" are simultaneously displayed will be described.
- The SLO image (first fundus image) is a two-dimensional fundus image captured simultaneously with a tomographic image; for example, it may be an integrated image generated by integrating tomographic images in the depth direction.
- The fundus photo (second fundus image) is a two-dimensional fundus image captured at a timing different from the tomographic image; for example, a contrast radiographic image or the like may be used.
- FIG. 9A shows a display mode when the operator selects “SLO image+map” from the combo box 95 .
- Reference numeral 201 denotes an SLO image; and 200 , a map.
- the SLO image 201 and map 200 are aligned by the alignment unit 47 described using FIG. 1 .
- the SLO image and map are aligned by setting the position and size of the map based on the position of the vision fixation lamp and the scanning range at the time of imaging.
- The transparency α of the SLO image 201 to be displayed is set to 1, and that of the map 200 to be displayed is set smaller than 1 (for example, 0.5).
- These transparency α parameters are those set when the operator selects target data for the first time. The operator can change them via the input device 54 as needed. The parameters changed by the operator are stored in, for example, the external storage device 52. When the same target data is opened the next or a subsequent time, display processing is performed according to the parameters previously set by the operator.
- FIG. 9B shows a display mode when the operator selects “fundus photo+SLO image+map” from the combo box 95 .
- Reference numeral 202 denotes a fundus photo (second fundus image).
- For the alignment, the SLO image 201 is used. This is because the fundus photo 202 is captured at a timing different from the tomographic image, and the imaging position and range cannot be recognized from the map 200 alone. Hence, using the SLO image 201, the fundus photo 202 and map 200 can be aligned.
- the fundus photo 202 and SLO image 201 are aligned by the alignment unit 47 described using FIG. 1 .
- a blood vessel feature may be used.
- As a detection method of blood vessels, since each blood vessel has a thin linear structure, blood vessels are extracted using a filter which emphasizes the linear structure.
- As a filter which emphasizes the linear structure, when a line segment is defined as a structural element, a filter which calculates the difference between the average value of image density values within the structural element and that in a local region surrounding the structural element may be used.
- the present invention is not limited to such specific filter, and a difference filter such as a Sobel filter may be used.
- Alternatively, eigenvalues of a Hessian matrix may be calculated for each pixel of a density value image, and a line segment-like region may be extracted based on the combination of the two eigenvalues obtained as calculation results.
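The Hessian-based variant can be sketched as below; the finite-difference derivatives and the thresholds used to decide that an eigenvalue pair is "line segment-like" are illustrative assumptions, not values from the disclosure:

```python
def hessian_line_response(img):
    """Per-pixel 2x2 Hessian eigenvalues from finite differences. A pixel is
    marked line-like when one eigenvalue is strongly negative (bright ridge)
    while the other is comparatively small, the combination characteristic
    of a thin linear structure. Thresholds below are placeholders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
            dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
            dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
                   - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
            # eigenvalues of [[dxx, dxy], [dxy, dyy]], ordered l1 >= l2
            tmp = ((dxx - dyy) ** 2 + 4 * dxy * dxy) ** 0.5
            l1 = (dxx + dyy + tmp) / 2.0
            l2 = (dxx + dyy - tmp) / 2.0
            out[y][x] = 1 if l2 < -0.5 and abs(l1) < abs(l2) * 0.5 else 0
    return out
```

For dark vessels on a bright background the sign test would be mirrored; the structure of the computation is the same.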
- the alignment unit 47 aligns the fundus photo 202 and SLO image 201 using blood vessel position information detected by these methods.
- the fundus photo 202 and map 200 can also be consequently aligned.
- The transparency α of the fundus photo 202 to be displayed is set to 1.
- The transparencies of the SLO image 201 and of the map 200 to be displayed are set smaller than 1 (for example, 0.5).
- The values of the transparencies α of the SLO image 201 and map 200 need not always be the same.
- The value of the transparency α of the SLO image 201 may be set to 0.
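The layered display with transparencies α behaves like standard back-to-front alpha compositing; as a sketch, treating α as opacity (consistent with the behavior described for the measurement surface, where α = 1 fully covers what is beneath) and operating on per-pixel gray values with the bottom layer first:

```python
def composite(layers):
    """Back-to-front alpha compositing of (value, alpha) layers, e.g.
    fundus photo (alpha 1.0), then SLO image, then contour map (alpha < 1).
    The first layer in the list is the bottom of the stack."""
    out = 0.0
    for value, alpha in layers:
        out = (1 - alpha) * out + alpha * value
    return out
```

Setting the SLO image's α to 0, as noted above, leaves the fundus photo and map blend unchanged.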
- Note that the alignment processing by the alignment unit 47 may fail.
- An alignment failure is determined when the maximum similarity does not reach a threshold upon calculation of an inter-image similarity. Even when the maximum similarity reaches the threshold, a failure is determined when the alignment processing ends at an anatomically abnormal position.
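The failure determination above reduces to two checks; the similarity threshold and the bound on anatomically plausible offsets in this sketch are hypothetical placeholders:

```python
def alignment_failed(best_similarity, best_shift, threshold=0.6, max_shift=50):
    """Flag an alignment failure when the peak inter-image similarity is
    below a threshold, or when the best match lands at an implausibly
    large (anatomically abnormal) offset. Numbers are illustrative."""
    if best_similarity < threshold:
        return True
    return abs(best_shift[0]) > max_shift or abs(best_shift[1]) > max_shift
```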
- In case of a failure, the SLO image 201 and map 200 need only be displayed at a preset position (for example, the center of the image) with the initial transparencies of the initial parameters. A failure message of the alignment processing is displayed, thus prompting the operator to execute position correction processing via the input device 54.
- In this case, the operator modifies the position and changes the transparency α parameters.
- the map 200 on the SLO image 201 is simultaneously moved, enlarged/reduced, and rotated. That is, the SLO image 201 and map 200 operate as a single image.
- The transparencies α of the SLO image 201 and map 200 are set independently.
- the parameters which are changed by the operator via the input device 54 are stored in the external storage device 52 , and the next or subsequent display operation is made according to the set parameters.
- the fundus image display field 82 in the fundus image display section 94 displays the two-dimensional fundus image, the map superimposed on the fundus image, and the like.
- FIGS. 9A and 9B have explained the case in which the contour line map is superimposed in association with a corresponding position on the fundus image.
- the present invention is not limited to this. That is, a curvature map, layer thickness map, and the like may be displayed.
- the operator can select, for example, the segmentation results (L 1 to L 5 ) from the tomographic image on the display field 61 shown in FIG. 2 via the input device 54 .
- In response, the image processing apparatus 30 instructs the display control unit 35 to return the previously highlighted segmentation result layer to normal display and to highlight the newly selected analysis target layer.
- the analysis results of an arbitrary layer can be displayed.
- the first analysis result 84 displays a shape analysis map generated by the analysis result generation unit 45 .
- a combo box 97 allows the operator to select a map type of the first analysis result 84 .
- the shape analysis map indicated by the first analysis result 84 is a contour line map.
- the type of the shape analysis map indicated as the first analysis result 84 and that of the shape analysis map superimposed on the fundus image described using FIGS. 9A and 9B can be changed in synchronism with each other by designating the type from the combo box 97 . Furthermore, the displayed contents on the second analysis result display section 98 to be described later are also changed in synchronism with such designation.
- results displayed on the tomographic image display section 91 , fundus image display section 94 , first analysis result display section 96 , and second analysis result display section 98 have contents shown in FIG. 10 .
- the segmentation results (L 1 to L 5 ) of the respective detected layers of the retinal layers are superimposed on a captured tomographic image.
- a curvature map is displayed as the first analysis result 84
- a curvature graph is displayed as the second analysis result 88 .
- an image obtained by superimposing the SLO image (first fundus image) 201 , the fundus photo (second fundus image) 202 , and a curvature map 203 is displayed.
- a shape analysis graph generated by the analysis result generation unit 45 is displayed.
- a graph obtained by measuring the area and volume is displayed.
- the abscissa plots the height (depth), and the ordinate plots the volume.
- a solid curve 87 represents the volume.
- a shape analysis result is displayed as a table.
- The table displays the area and volume at a given reference height (for example, 100 µm or 500 µm), and the area and volume corresponding to the height when a certain portion (the position of the Bruch's membrane opening) is used as a reference.
- a case will be described below with reference to FIG. 11 wherein area and volume results are displayed on one graph (second analysis result 85 ).
- The abscissa plots the height (depth), the ordinate on the left side of the graph plots the volume, and that on the right side of the graph plots the area.
- a broken curve 89 is a graph indicating the area
- the solid curve 87 is a graph indicating the volume.
- the image processing apparatus 30 instructs the analysis unit 42 to execute shape analysis based on the detection result of the retinal layer.
- That is, analysis using the detection result of the retinal layer is executed without generating three-dimensional shape data.
- the analysis of a layer thickness or the like is executed.
- the image processing apparatus 30 instructs the analysis result generation unit 45 to generate analysis results (for example, a map, graph, and numerical value information) based on the analysis result.
- the image processing apparatus 30 displays a tomographic image, the detection results of layers (RPE, ILM, and the like) detected by the detection unit 41 , and analysis results (map, graph, and numerical value information) generated by the analysis result generation unit 45 on the display device 53 .
- FIG. 12 shows an example of the tomographic image observation screen 80 displayed on the display device 53 by the process of step S 209 .
- a layer thickness map is displayed as a first analysis result 302
- a layer thickness graph is displayed as a second analysis result 301 . That is, the analysis results using the detection result of the retinal layer are displayed as the first and second analysis results 302 and 301 .
- As in step S 206, parameters changed by the operator via the input device 54 are stored in the external storage device 52, and the next or subsequent display operation is performed according to the set parameters.
- a plurality of imaging modes including that for myopia analysis are provided, and the analysis processing can be selectively executed according to the imaging mode set when a tomographic image is captured.
- As the imaging modes, at least an imaging mode for myopia analysis and one for non-myopia analysis are provided.
- In the imaging mode for myopia analysis, a tomographic image suited to three-dimensional shape analysis (analysis of a macular region in myopia) is captured.
- In the imaging mode for myopia analysis, three-dimensional shape data is generated, and three-dimensional shape analysis is executed based on the shape data.
- In the imaging mode for non-myopia analysis, three-dimensional shape data is not generated, and analysis processing based on the detection result of a retinal layer is executed.
- the second embodiment will be described below.
- the second embodiment will explain a case in which a tomographic image is displayed simultaneously in 2D and 3D display modes. More specifically, the second embodiment will explain a case in which a tomographic image, three-dimensional shape data, and analysis results are displayed side by side. Note that the second embodiment will exemplify, as a tomographic image, that captured in a radial scan.
- the tomographic image observation screen 400 includes a first tomographic image display section 410 including a two-dimensional tomographic image display field 411 , and a second tomographic image display section 430 including a three-dimensional tomographic image display field 431 . Furthermore, the tomographic image observation screen 400 includes a first analysis result display section 440 including a first analysis result 442 , and a second analysis result display section 420 including second analysis results 421 and 422 .
- first analysis result display section 440 and second analysis result display section 420 are the same as those shown in FIGS. 6 and 9A used to explain the aforementioned first embodiment, a description thereof will not be repeated. Also, since the second tomographic image display section 430 is the same as that shown in FIGS. 7A to 7C used to explain the first embodiment, a description thereof will not be repeated. In this embodiment, the first tomographic image display section 410 will be mainly described.
- the first tomographic image display section 410 includes a slider bar 412 used to change a viewpoint position (slice position) of a tomographic image, and an area 413 used to display a slice number of a tomographic image in addition to the two-dimensional tomographic image display field 411 .
- Reference numeral 500 denotes an overview of the fundus when the three-dimensional shape of the tomographic image is viewed from above in the depth direction (Z direction).
- Radial lines 501 indicate imaging slice positions of tomographic images.
- a broken line 502 indicates a slice position corresponding to the currently displayed tomographic image (display field 411 ) of the imaging slice positions 501 , and this slice position is orthogonal to the viewpoint direction 503 . More specifically, the tomographic image at the position 502 is that displayed on the tomographic image display field 411 .
- an image processing apparatus 30 changes the viewpoint position of the three-dimensional tomographic image (three-dimensional shape data) currently displayed on the display field 431 . Also, the image processing apparatus 30 changes the slice position of the two-dimensional tomographic image currently displayed on the display field 411 to a viewpoint position corresponding to a tomographic image designated by the slider bar 412 .
- The image processing apparatus 30 performs display so that the three-dimensional shape data of the currently displayed tomographic image is rotated with reference to the center in the vertical direction.
- FIG. 15 shows an example of a screen display of the two-dimensional tomographic image on the display field 411 and the three-dimensional tomographic image (three-dimensional shape data) on the display field 431 when the position of the slider bar 412 is moved to one end.
- the relationship between a viewpoint direction 507 in the three-dimensional shape and the two-dimensional tomographic image (display field 411 ) becomes that shown in FIG. 16 .
- the image processing apparatus 30 changes the slice position of the two-dimensional tomographic image currently displayed on the display field 411 , and also changes the viewpoint position of the three-dimensional tomographic image (three-dimensional shape data) corresponding to that slice position.
- the viewpoint position of the three-dimensional shape data can be changed according to the operation of the slider bar 412 used to change the slice position of the two-dimensional tomographic image, and vice versa. That is, when the operator changes the viewpoint position of the three-dimensional shape data via the input device 54 , the slice position of the two-dimensional tomographic image and the position of the slider bar 412 may be changed accordingly.
- the two-dimensional tomographic image and three-dimensional tomographic image are simultaneously displayed, and their display modes can be changed in synchronism with each other in response to an operation by the operator.
- the third embodiment will be described below.
- The third embodiment will explain a case in which the analysis result of the shapes of retinal layers and statistical data held in a statistics database are displayed so as to be comparable.
- FIG. 17 shows an example of the arrangement of an image processing system 10 according to the third embodiment. In this embodiment, differences from the arrangement shown in FIG. 1 used to explain the first embodiment will be mainly described.
- An image processing unit 33 of an image processing apparatus 30 according to the third embodiment newly includes a feature amount obtaining unit 601 and comparison calculation unit 602 in addition to the arrangement of the first embodiment.
- the feature amount obtaining unit 601 calculates feature amounts (shape feature amounts) indicating shape features of retinal layers based on three-dimensional shape analysis.
- the comparison calculation unit 602 compares shape feature amounts (first feature amounts) with those held in a statistics database 55 stored in an external storage device 52 , and outputs the comparison result.
- The statistics database 55 is generated from data of a large number of eyes by integrating race-dependent and age-dependent data. That is, the statistics database 55 holds statistical feature amounts obtained from a plurality of eyes to be examined. Note that in the ophthalmic field, data in the database may be classified based on parameters unique to eyes, such as right/left eye-dependent parameters and ophthalmic axis length-dependent parameters.
- the image processing apparatus 30 executes the processes of steps S 201 to S 204 shown in FIG. 3B used to explain the first embodiment.
- retinal layers are detected from a tomographic image, and three-dimensional shape analysis of the detected retinal layer is executed.
- the image processing apparatus 30 instructs the feature amount obtaining unit 601 to calculate shape feature amounts based on the three-dimensional shape analysis executed in the process of step S 304 .
- shape features of the retinal layer include, for example, feature amounts of an area and volume of the retinal layer, a feature amount indicating a degree (of irregularity) of circularity of the retinal layer, a feature amount of a curvature of the shape of the retinal layer, and the like.
- the method of calculating the feature amounts of the area and volume of the retinal layer will be described first.
- the feature amounts of the area and volume of the retinal layer can be obtained by calculating increasing/decreasing rates (typically, increasing rates) of the area and volume of the retinal layer.
- the area and volume of the retinal layer can be calculated by the same method as in the aforementioned first embodiment.
- Upon calculating the increasing/decreasing rates of the area and volume, the feature amount obtaining unit 601 calculates areas and volumes at given heights (depths) such as 100 µm, 200 µm, and 300 µm from the deepest portion of the RPE based on the result of the three-dimensional shape analysis by the analysis unit 42. Then, based on these results, the increasing/decreasing rates of the area and volume are calculated as shape features of the retinal layer.
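The increasing/decreasing-rate computation might look like the following sketch, where the input maps measurement-surface heights to measured values (the helper name and data shape are assumptions for illustration):

```python
def increase_rates(values_by_height):
    """Given measurements (e.g. volume) keyed by measurement-surface
    heights such as 100, 200, 300 um above the deepest RPE point, return
    the rate of increase between each pair of consecutive heights."""
    heights = sorted(values_by_height)
    return [(values_by_height[b] - values_by_height[a]) / (b - a)
            for a, b in zip(heights, heights[1:])]
```

A volume that grows faster with height than is typical would show up as larger rates, which is the kind of shape feature compared against the statistics database below.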
- the feature amount indicating the degree (of irregularity) of circularity of the retinal layer is calculated from, for example, the sectional shape of the retinal layer to be (finally) used as an area measurement target.
- the calculation method of the feature amount indicating the degree (of irregularity) of circularity of the retinal layer will be described below with reference to FIGS. 19A and 19B while taking a practical example.
- FIG. 19A shows a horizontal maximum chord length (CHORD H), vertical maximum chord length (CHORD V), and absolute maximum length (MAXIMUM LENGTH).
- FIG. 19B shows an area (AREA) and perimeter (PERIMETER). A degree C_L of circularity can be calculated from these values. The equation of the degree of circularity is

  C_L = 4πA / L²

  where A is the area (AREA) and L is the perimeter (PERIMETER).
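The degree of circularity is then a one-line computation; it evaluates to 1 for a perfect circle and decreases as the contour becomes more irregular:

```python
import math

def circularity(area, perimeter):
    """Degree of circularity C_L = 4*pi*A / L**2: 1.0 for a perfect
    circle, smaller for elongated or irregular contours."""
    return 4 * math.pi * area / perimeter ** 2
```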
- the feature amount obtaining unit 601 calculates curvatures at respective portions of the retinal layer, and calculates an average, variance, and standard deviation of the curvatures at all the positions.
- the image processing apparatus 30 instructs the comparison calculation unit 602 to compare the feature amounts obtained by the process of step S 305 and those in the statistics database 55 stored in the external storage device 52 .
- The statistics database 55 holds information in which the 95% range of normal data is a normal range, the next 4% range is a borderline range, and the remaining 1% range is an abnormal range.
- The comparison calculation unit 602 determines in which of the ranges the analysis results and feature amount values calculated by the processes of steps S 304 and S 305 are located with respect to the feature amounts held in the statistics database 55. For example, only the area and volume of the analysis results may be compared; alternatively, the values of the area and volume and the feature amount values may be plotted on two axes, and the comparison made to decide their locations. Furthermore, when the feature amount is a curvature, the average value and variance value of the curvature values may be plotted on two axes, and the comparison made to decide their ranges.
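The range decision itself reduces to comparing a feature amount against the percentile cut-offs held in the database; the cut-off values in this sketch are assumed inputs, not values from the statistics database 55:

```python
def classify(value, normal_hi, border_hi):
    """Place a feature amount into the ranges held by the statistics
    database: 'normal' (95% of normal data), 'borderline' (next 4%),
    'abnormal' (remaining 1%). normal_hi and border_hi are the percentile
    cut-offs obtained from the database."""
    if value <= normal_hi:
        return "normal"
    if value <= border_hi:
        return "borderline"
    return "abnormal"
```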
- a rhombus 701 indicates a curvature value of a retinal layer free from any symptom of posterior staphyloma in a myopia
- a triangle 702 indicates a curvature value of a retinal layer which suffers a mild symptom of posterior staphyloma in a myopia
- a rectangle 703 indicates a curvature value of a retinal layer which suffers a symptom of posterior staphyloma in a myopia.
- posterior staphyloma refers to a condition in which the wall of the eyeball is deformed so as to bulge backward.
- each plot point corresponds to one case; the graph shows plots of several tens of cases.
- although a detailed description of the shape feature amounts obtained in step S 305 is omitted here, these shape feature amounts are plotted on the graph in a format that makes them distinguishable from the feature amounts held in the statistics database 55 (for example, the display format is changed to a circle). That is, the shape feature amounts obtained in step S 305 are displayed so as to be comparable with the feature amounts held in the statistics database 55 .
- a region 711 indicates a curvature value region free from a symptom of posterior staphyloma
- a region 712 indicates a curvature range region suffering a mild symptom of posterior staphyloma
- a region 713 indicates a curvature value region suffering a symptom of posterior staphyloma. Therefore, for example, when the curvature value of a target eye to be examined is measured, the eye is determined to be free from posterior staphyloma if its curvature value is located in the region 711 , and determined to suffer posterior staphyloma if the value is located in the region 713 .
- the graph is classified into a plurality of regions that indicate, stepwise, the likelihood that the eye to be examined suffers the disease (that is, a display mode indicating stepwise categories of symptoms of a disease of the eye to be examined), and the curvature values (feature amounts) are plotted on this graph. Also, the display format of the object indicating a curvature value (rhombus, triangle, or rectangle) is changed according to the region to which the value belongs. Note that the graph is classified into three regions for the sake of simplicity, but the present invention is not limited to this. For example, each region may be divided into a plurality of sub-regions, giving a total of 10 levels, so as to allow the operator to judge the degree of posterior staphyloma.
- the image processing apparatus 30 instructs an analysis result generation unit 45 to generate an analysis result based on the result of the process of step S 305 .
- the analysis result is generated in a format which allows the operator to recognize a location of analysis values of an eye to be examined as a target in the statistics database 55 .
- a graph on which curvature values of the eye to be examined are plotted to indicate their regions may be used.
- a color bar (bar graph) which expresses features of the statistics database 55 by one dimension may be generated, and an analysis result indicating a position on that color bar (bar graph) may be generated.
- when one value corresponds to each position, as with curvature values, each value may be compared with the curvature value held in the statistics database 55 at that position to generate a two-dimensional map from which the locations in the statistics database 55 can be recognized.
- alternatively, an average value over a given range may be compared with the statistics database 55 in place of a comparison at each position.
- the image processing apparatus 30 instructs a display control unit 35 to display a tomographic image, the detection result of the layers detected by a detection unit 41 , and analysis results (map, graph, and numerical value information) generated by the analysis result generation unit 45 on a display device 53 .
- as the analysis results, the map, graph, and the like generated in step S 307 may be displayed. More specifically, a tomographic image observation screen 80 shown in FIG. 21 is displayed.
- a curvature map of the retinal layer is displayed as the comparison results with the feature amounts (curvatures) in the statistics database 55 .
- This curvature map presents the comparison results of average values of given ranges with the statistics database 55 .
- a darker color portion indicates that a curvature value falls outside a normal value range of the statistics database 55
- a lighter color portion indicates that a curvature value falls within the normal value range of the statistics database 55 .
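The darker/lighter shading of the curvature map can be sketched as follows: curvatures are averaged over a local window (matching the "average values of given ranges" above) and positions whose local average falls outside the normal range are marked dark. The integral-image averaging and the two-level shading are illustrative choices, not the patent's implementation:

```python
import numpy as np

def local_mean(a: np.ndarray, window: int) -> np.ndarray:
    """Box-average a 2-D map over a window x window neighborhood
    (window must be odd) using an edge-padded integral image."""
    pad = window // 2
    ap = np.pad(a, pad, mode="edge")
    s = np.pad(ap.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = a.shape
    n = window
    return (s[n:n + h, n:n + w] - s[:h, n:n + w]
            - s[n:n + h, :w] + s[:h, :w]) / (n * n)

def curvature_map_shading(curv: np.ndarray, lo: float, hi: float,
                          window: int = 5) -> np.ndarray:
    """Shade a curvature map: 1.0 (light) where the locally averaged
    curvature is inside the normal range [lo, hi], 0.0 (dark) outside."""
    m = local_mean(curv, window)
    return np.where((m >= lo) & (m <= hi), 1.0, 0.0)
```

A continuous shading (for example, distance from the normal range) could replace the binary light/dark values without changing the structure.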
- a second analysis result display section 98 displays a graph 730 indicating a position of an average value and variance value of curvatures in the statistics database 55 .
- a circular point 722 in the graph 730 is a plot of the analysis value of the eye to be examined as a target in this graph.
- an area/volume map and the curvature map may be displayed side by side in a first analysis result display section 96 .
- an area/volume graph and the curvature graph may be displayed side by side in the second analysis result display section 98 .
- in this way, the shape feature amounts of the eye to be examined (first feature amounts) are compared with, and displayed together with, the shape feature amounts held in the statistics database 55 (second feature amounts).
- the shape feature amounts of an eye to be examined as a diagnosis target can be presented together with an index as a criterion as to whether or not the eye to be examined suffers a disease.
- the fourth embodiment will be described below.
- the fourth embodiment will explain the following case. That is, feature amounts held in a statistics database are searched for using shape features (shape feature amounts) of a retinal layer and image features (image feature amounts) of a tomographic image or fundus image. Then, a tomographic image, fundus image, and diagnosis information of an eye to be examined having feature amounts close to the shape features of the current eye to be examined are displayed.
- FIG. 22 shows an example of the arrangement of an image processing system 10 according to the fourth embodiment.
- An image processing apparatus 30 according to the fourth embodiment newly includes a search unit 36 in addition to the arrangement of the third embodiment.
- An external storage device 52 stores feature amounts of retinal layers, tomographic images, fundus images, and diagnosis information in association with each other as a statistics database 55 .
- the search unit 36 searches the external storage device 52 (that is, the statistics database 55 ) using shape features of a retinal layer and image features in a tomographic image or fundus image.
- the image processing apparatus 30 instructs a feature amount obtaining unit 601 to extract image feature amounts from a tomographic image or fundus image in addition to shape feature amounts described in the aforementioned third embodiment.
- the image feature amounts include, for example, edge features, color features, histogram features, SIFT (Scale-Invariant Feature Transform) feature amounts, and the like.
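Two of the simpler image feature amounts listed above can be sketched directly; the vector below concatenates a normalized intensity histogram (histogram feature) with a mean gradient magnitude (a crude edge feature). SIFT descriptors would come from a dedicated library and are omitted; all names here are illustrative:

```python
import numpy as np

def image_feature_vector(img: np.ndarray, bins: int = 16) -> np.ndarray:
    """Build a simple feature vector for a grayscale image with values
    in [0, 1]: normalized intensity histogram + mean edge strength."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    hist = hist / max(img.size, 1)
    gy, gx = np.gradient(img.astype(float))
    edge = np.hypot(gx, gy).mean()
    return np.concatenate([hist, [edge]])
```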
- the image processing apparatus 30 instructs the search unit 36 to express the shape feature amounts and image feature amounts obtained by the process of step S 305 as a multi-dimensional vector, and to compare these feature amounts with data held in the statistics database 55 in the external storage device 52 . Thus, an image having similar feature amounts is searched for.
- a nearest neighbor search may be used. Note that an approximate nearest neighbor search or the like may be used to speed up the processing.
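An exact nearest neighbor search over the multi-dimensional feature vectors can be sketched as a brute-force scan (an approximate method such as a k-d tree or locality-sensitive hashing would replace this to speed up large databases, as the text notes). The toy database below is illustrative:

```python
import numpy as np

def nearest_neighbors(query: np.ndarray, database: np.ndarray,
                      k: int = 3) -> np.ndarray:
    """Brute-force nearest-neighbor search: indices of the k database
    feature vectors closest (in Euclidean distance) to the query."""
    d = np.linalg.norm(database - query, axis=1)
    return np.argsort(d)[:k]

# Toy database of 2-D feature vectors (shape feature, image feature).
db = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]])
q = np.array([0.9, 0.1])
```

The returned indices would then be used to fetch the associated tomographic images, fundus images, and diagnosis information from the statistics database.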
- the image processing apparatus 30 instructs a display control unit 35 to display a screen including a captured tomographic image and a similar tomographic image obtained by the search of a search unit 36 on a display device 53 .
- the tomographic image observation screen 900 includes a first analysis result display section 930 for displaying current image analysis results, and a second analysis result display section 940 for displaying previous image analysis results.
- the first analysis result display section 930 displays a tomographic image 901 , SLO image 903 , fundus photo 902 , (three-dimensional shape) analysis result map 904 , and analysis graph 905 .
- a circular point 906 in the analysis graph 905 indicates a plot result of the analysis value of the eye to be examined as a target in the graph.
- star-like points 907 indicate plot results of the analysis values of previous eyes to be examined displayed on the second analysis result display section 940 in the graph.
- the abscissa plots the shape feature amounts, and the ordinate plots the image feature amounts.
- the present invention is not limited to this.
- a graph indicating curvature values described in the third embodiment or the like may be displayed.
- the second analysis result display section 940 displays tomographic images 911 and 921 , SLO images 913 and 923 , fundus photos 912 and 922 , (three-dimensional shape) analysis result maps 914 and 924 , and pieces of diagnosis information 915 and 925 of eyes to be examined.
- the diagnosis information is information indicating diagnosis processes, and includes, for example, an age, diagnostic name, treatment history, visual acuity, refractometric value, keratometric value, ophthalmic axis length, and the like.
- images and diagnosis information of an identical eye to be examined are displayed along a vertical column.
- images and diagnosis information of an identical eye to be examined may be displayed side by side along a horizontal row.
- when there are a plurality of images similar to the current eye to be examined, the operator operates a slider bar 950 via an input device 54 to selectively display these results. Note that the previous similar images to be displayed may be switched using a member other than the slider bar 950 . For example, when the operator clicks a star-like point 907 on the analysis graph 905 via the input device (for example, a mouse) 54 , an image having that feature amount may be displayed beside the current image.
- the statistics database is searched for tomographic images similar to a captured tomographic image of an eye to be examined and associated information based on feature amounts (shape feature amounts) indicating shape features of a retinal layer and image feature amounts of a tomographic image or fundus image. Then, the similar tomographic images and associated information are displayed side by side together with the captured tomographic image.
- shape feature amounts of an eye to be examined as a diagnosis target can be presented together with an index used as a criterion as to whether or not the eye to be examined suffers a disease.
- note that UI components such as combo boxes, list boxes, and buttons may be changed as needed according to usage; for example, components described as combo boxes in the above description may be implemented as list boxes.
- aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s).
- the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (for example, computer-readable medium).
Abstract
An image processing system includes: an analysis unit configured to obtain information indicating a degree of curvature of a retina from a tomographic image of an eye to be examined; and an obtaining unit configured to obtain a category of the eye to be examined based on an analysis result.
Description
- 1. Field of the Invention
- The present invention relates to an image processing system, processing method, and storage medium.
- 2. Description of the Related Art
- A tomography apparatus using OCT (Optical Coherence Tomography), which utilizes interference of low-coherence light, is known. By capturing an image of a fundus with such a tomography apparatus, the state of the interior of the retinal layers can be observed three-dimensionally.
- Imaging by the tomography apparatus is receiving much attention because it helps give more adequate diagnoses of diseases. As one mode of such OCT, a TD-OCT (Time Domain OCT), which combines a broadband light source with a Michelson interferometer, is known. The TD-OCT obtains depth-resolved information by scanning the delay of the reference arm and measuring the light that interferes with backscattered light from the signal arm.
- However, it is difficult for the TD-OCT to obtain an image quickly. As a faster imaging method, an SD-OCT (Spectral Domain OCT), which obtains an interferogram with a spectroscope using a broadband light source, is known. An SS-OCT (Swept Source OCT), which measures spectral interference with a single-channel photodetector using a fast wavelength-swept light source, is also known.
- In this case, if a shape change of a retina can be measured in a tomographic image captured by each of these OCTs, a degree of progress of a disease such as glaucoma and a degree of recovery after treatment can be quantitatively diagnosed. In association with such technique, Japanese Patent Laid-Open No. 2008-073099 discloses a technique for detecting boundaries of respective layers of a retina from a tomographic image and measuring thicknesses of the layers based on the detection result using a computer, so as to quantitatively measure the shape change of the retina.
- With the technique of Japanese Patent Laid-Open No. 2008-073099 described above, a tomographic image within a range designated on a two-dimensional image is obtained, and layer boundaries are detected to calculate layer thicknesses. However, three-dimensional shape analysis of the retinal layers is not performed, and the shape analysis result is not effectively displayed.
- The present invention has been made in consideration of the aforementioned problems, and provides a technique for displaying a shape feature amount of an eye to be examined as a diagnosis target together with an index used as a criterion as to whether or not the eye to be examined suffers a disease.
- According to one aspect of the present invention, there is provided an image processing system comprising: an analysis unit configured to obtain information indicating a degree of curvature of a retina from a tomographic image of an eye to be examined; and an obtaining unit configured to obtain a category of the eye to be examined based on an analysis result.
- According to the present invention, a shape feature amount of an eye to be examined can be displayed together with an index used as a criterion as to whether or not the eye to be examined suffers a disease.
- Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
- FIG. 1 is a block diagram showing an example of the arrangement of an image processing system 10 according to one embodiment of the present invention;
- FIG. 2 is a view showing an example of a tomographic image capturing screen 60;
- FIGS. 3A and 3B are flowcharts showing an example of the sequence of processing of an image processing apparatus 30 shown in FIG. 1;
- FIG. 4 is a view for explaining an overview of three-dimensional shape analysis processing;
- FIGS. 5A to 5C are views for explaining an overview of three-dimensional shape analysis processing;
- FIG. 6 is a view showing an example of a tomographic image observation screen 80;
- FIGS. 7A to 7C are views showing an example of respective components of the tomographic image observation screen 80;
- FIGS. 8A to 8D are views showing an example of respective components of the tomographic image observation screen 80;
- FIGS. 9A and 9B are views showing an example of respective components of the tomographic image observation screen 80;
- FIG. 10 is a view showing an example of the tomographic image observation screen 80;
- FIG. 11 is a view showing an example of respective components of the tomographic image observation screen 80;
- FIG. 12 is a view showing an example of the tomographic image observation screen 80;
- FIG. 13 is a view showing an example of a tomographic image observation screen 400;
- FIG. 14 is a view showing an example of respective components of the tomographic image observation screen 400;
- FIG. 15 is a view showing an example of the tomographic image observation screen 400;
- FIG. 16 is a view showing an example of respective components of the tomographic image observation screen 400;
- FIG. 17 is a block diagram showing an example of the arrangement of an image processing system 10 according to the third embodiment;
- FIG. 18 is a flowchart showing an example of the sequence of processing of an image processing apparatus 30 according to the third embodiment;
- FIGS. 19A and 19B are views showing an overview of a method of obtaining feature amounts which represent shape features of retinal layers;
- FIGS. 20A and 20B are views showing an overview of a method of obtaining feature amounts which represent shape features of retinal layers;
- FIG. 21 is a view showing an example of the tomographic image observation screen 80;
- FIG. 22 is a block diagram showing an example of the arrangement of an image processing system 10 according to the fourth embodiment; and
FIG. 23 is a view showing an example of a tomographicimage observation screen 900. - Embodiments according to the present invention will be described in detail hereinafter with reference to the accompanying drawings.
-
FIG. 1 is a block diagram showing an example of the arrangement of animage processing system 10 according to an embodiment of the present invention. - The
image processing system 10 includes animage processing apparatus 30,tomography apparatus 20, fundusimage capturing device 51,external storage device 52,display device 53, andinput device 54. - The
tomography apparatus 20 is implemented by, for example, an SD-OCT or SS-OCT, and captures a tomography image indicating a three-dimensional shape of a fundus using an OCT using interference caused by low coherent light. Thetomography apparatus 20 includes agalvanometer mirror 21,driving control unit 22,parameter setting unit 23,vision fixation lamp 24, andcoherence gate stage 25. - The
galvanometer mirror 21 has a function of two-dimensionally scanning measurement light (irradiation light) on a fundus, and defines an imaging range of a fundus by thetomography apparatus 20. Thegalvanometer mirror 21 includes, for example, two mirrors, that is, X- and Y-scan mirrors, and scans measurement light on a plane orthogonal to an optical axis with respect to a fundus of an eye to be examined. - The
driving control unit 22 controls a driving (scanning) range and speed of thegalvanometer mirror 21. Thus, an imaging range in a planar direction (a direction orthogonal to an optical axis direction of measurement light) and the number of scan lines (scanning speed in the planar direction) on a fundus are defined. - The
parameter setting unit 23 sets various parameters used in driving control of thegalvanometer mirror 21 by the drivingcontrol unit 22. These parameters decide imaging conditions of a tomographic image by thetomography apparatus 20. For example, scan positions of scan lines, the number of scan lines, the number of images to be captured, and the like are decided. In addition, a position of the vision fixation lamp, scanning range, and scanning pattern, coherence gate position, and the like are also set. Note that the parameters are set based on an instruction from theimage processing apparatus 30. - The
vision fixation lamp 24 suppresses movement of a viewpoint by placing a bright spot in a visual field so as to prevent an eyeball motion during imaging of a tomographic image. Thevision fixation lamp 24 includes anindication unit 24 a andlens 24 b. Theindication unit 24 a is realized by disposing a plurality of light-emitting diodes (LDs) in a matrix. Lighting positions of the light-emitting diodes are changed in correspondence with a portion as an imaging target under the control of the drivingcontrol unit 22. Light from theindication unit 24 a is guided to the eye to be examined through thelens 24 b. Light emerging from theindication unit 24 a has a wavelength of, for example, 520 nm, and a desired pattern is indicated (lighted) under the control of the drivingcontrol unit 22. - The
coherence gate stage 25 is arranged to cope with, for example, a different ophthalmic axis length of an eye to be examined. More specifically, an optical path length of reference light (to be interfered with measurement light) is controlled to adjust an imaging position along a depth direction (optical axis direction) of a fundus. Thus, optical path lengths of reference light and measurement light can be matched even for an eye to be examined having a different ophthalmic axis length. Note that thecoherence gate stage 25 is controlled by the drivingcontrol unit 22. - In this case, a coherence gate indicates a position where optical distances of measurement light and reference light are equal to each other in the
tomography apparatus 20. By controlling the coherence gate position, imaging on the side of retinal layers or that of an EDI (Enhanced Depth Imaging) method on the side deeper than the retinal layers is switched. When imaging is done using the EDI method, the coherence gate position is set on the side deeper than the retinal layers. Hence, when an image of the retinal layers is captured beyond an upper portion side of a tomographic image, the retinal layers can be prevented from appearing in the tomographic image while being folded back. - The fundus
image capturing device 51 is implemented by, for example, a fundus camera, SLO (Scanning Laser Ophthalmoscope), or the like, and captures a (two-dimensional) fundus image of a fundus. - The
external storage device 52 is implemented by, for example, an HDD (Hard Disk Drive) or the like, and stores various data. Theexternal storage device 52 holds captured image data, imaging parameters, image analysis parameters, and parameters set by an operator in association with information (a patient name, age, gender, etc.) related to an eye to be examined. - The
input device 54 is implemented by, for example, a mouse, keyboard, touch operation screen, and the like, and allows an operator to input various instructions. For example, the operator inputs various instructions, settings, and the like for theimage processing apparatus 30,tomography apparatus 20, and fundusimage capturing device 51 via theinput device 54. Thedisplay device 53 is implemented by, for example, a liquid crystal display or the like, and displays (presents) various kinds of information for the operator. - The
image processing apparatus 30 is implemented by, for example, a personal computer or the like, and processes various images. That is, theimage processing apparatus 30 incorporates a computer. The computer includes a main control unit such as a CPU (Central Processing Unit), storage units such as a ROM (Read Only Memory) and RAM (Random Access Memory), and the like. - In this embodiment, the
image processing apparatus 30 includes, as its functional units, animage obtaining unit 31,storage unit 32,image processing unit 33,instruction unit 34, anddisplay control unit 35. Note that the units other than thestorage unit 32 are implemented, for example, when the CPU reads out and executes a program stored in the ROM or the like. - The
image processing unit 33 includes adetection unit 41,determination unit 43, retinallayer analysis unit 44, andalignment unit 47. - The
image obtaining unit 31 obtains a tomographic image captured by thetomography apparatus 20 and a fundus image captured by the fundusimage capturing device 51, and stores these images in thestorage unit 32. Note that thestorage unit 32 is implemented by, for example, the ROM, RAM, and the like. - The
detection unit 41 detects retinal layers from the tomographic image stored in thestorage unit 32. - The retinal
layer analysis unit 44 analyzes the retinal layers to be analyzed. The retinallayer analysis unit 44 includes ananalysis unit 42, analysisresult generation unit 45, and shapedata generation unit 46. - The
determination unit 43 determines whether or not three-dimensional shape analysis processing of retinal layers is to be executed according to an imaging mode (a myopia analysis imaging mode and non-myopia analysis imaging mode). Note that the three-dimensional shape analysis indicates processing for generating three-dimensional shape data, and executing shape analysis of retinal layers using the shape data. - The
analysis unit 42 applies analysis processing to retinal layers to be analyzed based on the determination result of thedetermination unit 43. Note that this embodiment will explain a case of macula analysis of a myopia as the three-dimensional shape analysis processing. The analysisresult generation unit 45 generates various data required to present the analysis result (information indicating states of retinal layers). The shapedata generation unit 46 aligns a plurality of tomographic images obtained by imaging, thereby generating three-dimensional shape data. That is, the three-dimensional shape data is generated based on layer information of retinal layers. - The
alignment unit 47 performs alignment between the analysis result and fundus image, that between fundus images, and the like. Theinstruction unit 34 instructs information such as imaging parameters according to the imaging mode set in thetomography apparatus 20. - The example of the arrangement of the
image processing system 10 has been described. Note that the functional units arranged in the aforementioned apparatuses need not always be implemented, as shown inFIG. 1 , and all or some of these units need only be implemented in any apparatus in the system. For example, inFIG. 1 , theexternal storage device 52,display device 53, andinput device 54 are arranged outside theimage processing apparatus 30. However, these devices may be arranged inside theimage processing apparatus 30. Also, for example, the image processing apparatus andtomography apparatus 20 may be integrated. - An example of a tomographic
image capturing screen 60 displayed on thedisplay device 53 shown inFIG. 1 will be described below with reference toFIG. 2 . Note that this screen is displayed when a tomographic image is to be captured. - The tomographic
image capturing screen 60 includes a tomographicimage display field 61, fundusimage display field 62,combo box 63 used to set an imaging mode, andcapture button 64 used to instruct to capture an image. Note thatreference numeral 65 in the fundusimage display field 62 denotes a mark which indicates an imaging region, and is superimposed on a fundus image. Reference symbol M denotes a macular region; D, an optic papilla; and V, a blood vessel. - The
combo box 63 allows the user to set, for example, an imaging mode for myopia analysis of (a macular region) or that for non-myopia analysis of (the macular region). That is, thecombo box 63 has an imaging mode selection function. In this case, the imaging mode for myopia analysis is set. - The tomographic
image display field 61 displays a tomographic image of a fundus. Reference numeral L1 denotes an inner limiting membrane (ILM); L2, a boundary between a nerve fiber layer (NFL) and ganglion cell layer (GCL); and L3, an inner segment outer segment junction (ISOS) of a photoreceptor cell. Also, reference numeral L4 denotes a pigmented retinal layer (RPE); and L5, a Bruch's membrane (BM). Theaforementioned detection unit 41 detects any of boundaries of L1 to L5. - An example of the sequence of processing of the
image processing apparatus 30 shown inFIG. 1 will be described below with reference toFIGS. 3A and 3B . The sequence of the overall processing at the time of capturing of a tomographic image will be described first with reference toFIG. 3A . - [Step S101]
- The
image processing apparatus 30 externally obtains a patient identification number as information required to identify an eye to be examined. Then, theimage processing apparatus 30 obtains information related to the eye to be examined, which information is held by theexternal storage device 52, based on the patient identification number, and stores the obtained information in thestorage unit 32. - [Step S102]
- The
image processing apparatus 30 instructs theimage obtaining unit 31 to obtain a fundus image from the fundusimage capturing device 51 and a tomographic image from thetomography apparatus 20 as pre-scan images used to confirm an imaging position at the imaging timing. - [Step S103]
- The
image processing apparatus 30 sets an imaging mode. The imaging mode is set based on a choice of the operator from thecombo box 63 used to set the imaging mode, as described inFIG. 2 . A case will be described below wherein imaging is to be done in the imaging mode for myopia analysis. - [Step S104]
- The
image processing apparatus 30 instructs theinstruction unit 34 to issue an imaging parameter instruction according to the imaging mode set from thecombo box 63 to thetomography apparatus 20. Thus, thetomography apparatus 20 controls theparameter setting unit 23 to set the imaging parameters according to the instruction. More specifically, theimage processing apparatus 30 instructs to set at least one of the position of the vision fixation lamp, scanning range, and scanning pattern, and coherence gate position. - The parameter settings in the imaging mode for myopia analysis will be described below. In a parameter setting of the position of the vision fixation lamp, the position of the
vision fixation lamp 24 is set to be able to capture an image of the center of a macular region. Theimage processing apparatus 30 instructs the drivingcontrol unit 22 to control the light-emitting diodes of theindication unit 24 a according to the imaging parameters. Note that in case of an apparatus with an arrangement which allows to assure a sufficiently broad imaging range, the position of thevision fixation lamp 24 may be controlled to set the center between a macular region and optic papilla as that of imaging. The reason why such control is executed is to capture an image of a region including a macular region so as to execute shape analysis of retinal layers in a myopia. - In a parameter setting of the scanning range, for example, a range of 9 to 15 mm is set as limit values of an imaging range of the apparatus. These values are merely an example, and may be changed as needed according to the specifications of the apparatus. Note that the imaging range is a broader region so as to detect shape change locations without any omission.
- In a parameter setting of the scanning pattern, for example, a raster scan or radial scan is set so as to be able to capture a three-dimensional shape of retinal layers.
- In the parameter setting of the coherence gate position, the gate position is set so as to allow imaging based on the EDI method. In a highly myopic eye, the curvature of the retinal layers becomes strong, and the retinal layers may extend beyond the upper side of the tomographic image. In this case, the retinal layers beyond the upper portion of the tomographic image are folded back and appear in the tomographic image, and this parameter setting is required to prevent that artifact. Note that when an SS-OCT with a large penetration depth is used as the
tomography apparatus 20, even if the position of the retinal layers is distant from the gate position, a satisfactory tomographic image can be obtained. Hence, imaging based on the EDI method need not always be performed. - [Step S105]
- The
image processing apparatus 30 instructs the instruction unit 34 to issue an imaging instruction of the eye to be examined to the tomography apparatus 20. This instruction is issued, for example, when the operator presses the capture button 64 of the tomographic image capturing screen 60 via the input device 54. In response to this instruction, the tomography apparatus 20 controls the driving control unit 22 based on the imaging parameters set by the parameter setting unit 23. Thus, the galvanometer mirror 21 is activated to capture a tomographic image. - As described above, the
galvanometer mirror 21 includes an X-scanner for the horizontal direction and a Y-scanner for the vertical direction. For this reason, by changing the directions of these scanners, respectively, a tomographic image can be captured along the horizontal direction (X) and vertical direction (Y) on the apparatus coordinate system. By simultaneously changing the directions of these scanners, a scan can be made in a direction synthesized from the horizontal and vertical directions. Hence, imaging along an arbitrary direction on the fundus plane can be done. At this time, the image processing apparatus 30 instructs the display control unit 35 to display the captured tomographic image on the display device 53. Thus, the operator can confirm the imaging result. - [Step S106]
- The
image processing apparatus 30 instructs the image processing unit 33 to detect and analyze retinal layers from the tomographic image stored in the storage unit 32. That is, the image processing apparatus 30 applies the detection/analysis processing of retinal layers to the tomographic image captured in the process of step S105. - [Step S107]
- The
image processing apparatus 30 determines whether or not to end the imaging of tomographic images. This determination is made based on an instruction from the operator via the input device 54. That is, the image processing apparatus 30 determines whether or not to end the imaging of tomographic images based on whether or not the operator inputs an end instruction. - When the imaging end instruction is input, the
image processing apparatus 30 ends this processing. On the other hand, when imaging is to be continued without ending the processing, the image processing apparatus 30 executes the processes in step S102 and subsequent steps. - Note that when the operator manually modifies the detection result of retinal layers and the positions of a fundus image and map by the processes of steps S206 and S209 (to be described later), the
image processing apparatus 30 saves the imaging parameters changed according to such modifications in the external storage device 52 upon ending imaging. At this time, a confirmation dialog as to whether or not to save the changed parameters may be displayed to ask the operator whether or not to change the imaging parameters. - The detection/analysis processing of retinal layers in step S106 of
FIG. 3A will be described below with reference to FIG. 3B. - [Step S201]
- When the detection/analysis processing of retinal layers is started, the
image processing apparatus 30 instructs the detection unit 41 to detect retinal layers from a tomographic image. This processing will be described concretely below using the tomographic image (display field 61) shown in FIG. 2. In case of a macular region, the detection unit 41 applies a median filter and a Sobel filter to the tomographic image to generate images (to be respectively referred to as a median image and a Sobel image hereinafter). Subsequently, the detection unit 41 generates profiles for each A-scan from the generated median image and Sobel image. A luminance value profile is generated from the median image, and a gradient profile is generated from the Sobel image. Then, the detection unit 41 detects peaks in the profile generated from the Sobel image. Finally, the detection unit 41 refers to the profiles of the median image corresponding to portions before and after the detected peaks and those between adjacent peaks, thus detecting the boundaries of the respective regions of the retinal layers. That is, L1 (ILM), L2 (boundary between the NFL and GCL), L3 (ISOS), L4 (RPE), L5 (BM), and the like are detected. Note that the following description of this embodiment will be given under the assumption that the analysis target layer is the RPE. - [Step S202]
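(Sketch of the step S201 boundary detection described above.) The per-A-scan procedure of the detection unit 41 — median filtering, Sobel filtering along the depth direction, then peak detection on each gradient profile — can be sketched as follows. This is a minimal, non-authoritative illustration: the function name, filter sizes, and gradient threshold are assumptions, and the step that consults the median-image luminance profile to label each peak as a specific boundary (ILM, RPE, and so on) is omitted.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel
from scipy.signal import find_peaks

def detect_boundaries(tomogram, min_gradient=10.0):
    """Candidate retinal layer boundaries per A-scan.

    tomogram: 2D array (depth Z x A-scan position X) of luminance values.
    Returns a list with, for each A-scan, the depth indices of peaks in
    the gradient profile (candidate layer boundaries).
    """
    median_img = median_filter(tomogram, size=3)   # noise suppression
    sobel_img = np.abs(sobel(median_img, axis=0))  # gradient along depth

    boundaries = []
    for x in range(tomogram.shape[1]):
        grad_profile = sobel_img[:, x]             # one A-scan's gradient profile
        peaks, _ = find_peaks(grad_profile, height=min_gradient)
        # Here the luminance profile of the median image before/after each
        # peak would be consulted to decide which boundary the peak is;
        # that labeling step is omitted in this sketch.
        boundaries.append(peaks)
    return boundaries
```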
- The
image processing apparatus 30 controls the determination unit 43 to determine whether or not to execute the three-dimensional shape analysis of the retinal layer. More specifically, if imaging is done in the imaging mode for myopia analysis, it is determined that the three-dimensional shape analysis of the retinal layer is to be executed. If imaging is done without using the myopia analysis mode (in an imaging mode for non-myopia analysis), it is determined that the three-dimensional shape analysis is not to be executed. In this case, analysis based on the detection result of the detection unit 41 is performed (analysis without using the three-dimensional shape data). Note that even in the imaging mode for myopia analysis, it is determined based on the tomographic image whether or not a macular region is included in the tomographic image. If no macular region is included in the tomographic image (for example, only the optic papilla is included), it may be determined that the (three-dimensional) shape analysis of the retinal layer is not to be executed. - The following description of this embodiment will be given under the assumption that a series of processes from imaging to analysis are executed in the
image processing system 10 which integrates the tomography apparatus 20 and the image processing apparatus 30. However, the present invention is not limited to this. That is, the image processing apparatus 30 need only execute shape analysis of a retinal layer upon reception of a tomographic image of a macular region captured using a scanning pattern required to obtain a three-dimensional shape, and such an integrated system need not always be adopted. For this reason, the image processing apparatus 30 can execute shape analysis of a retinal layer for a tomographic image captured by an apparatus other than the tomography apparatus 20 based on information at the time of imaging. However, when shape analysis is not required, the shape analysis processing may be skipped. - [Step S203]
- The
image processing apparatus 30 instructs the shape data generation unit 46 to generate three-dimensional shape data. The three-dimensional shape data is generated to execute shape analysis based on the detection result of the retinal layer in the process of step S201. - In this case, when the scanning pattern at the time of imaging is, for example, a raster scan, a plurality of adjacent tomographic images are aligned. In the alignment of tomographic images, for example, an evaluation function which represents the similarity between two tomographic images is defined in advance, and the tomographic images are deformed to maximize this evaluation function value.
- As the evaluation function, for example, a pixel-value-based measure (for example, one using correlation coefficients) may be used. As the deformation processing of the images, translation and rotation using an affine transformation may be used. After completion of the alignment processing of the plurality of tomographic images, the shape
data generation unit 46 generates three-dimensional shape data of a layer as a shape analysis target. The three-dimensional shape data can be generated by preparing, for example, 512×512×500 voxel data, and assigning labels to positions corresponding to coordinate values of layer data of the detected retinal layer. - On the other hand, when the scanning pattern at the time of imaging is a radial scan, the shape
data generation unit 46 aligns the tomographic images, and then generates three-dimensional shape data in the same manner as described above. However, in this case, alignment in the depth direction (the Z direction of the tomographic image (display field 61) shown in FIG. 2) is made using only region information near the centers of adjacent tomographic images. This is because, in case of the radial scan, even adjacent tomographic images have coarser information at their two ends than near their centers and the shape changes there are large, so such information is not used as alignment information. As the alignment method, the aforementioned method can be used. After completion of the alignment processing, the shape data generation unit 46 generates three-dimensional shape data of the layer as the shape analysis target. - In case of the radial scan, for example, 512×512×500 voxel data are prepared, and the layer data as the shape analysis target of the respective tomographic images are rotated and expanded evenly in a circular pattern. After that, in the expanded layer data, interpolation processing is executed between adjacent shape data in the circumferential direction. With the interpolation processing, shape data at non-captured positions are generated. As the interpolation processing method, linear interpolation or nonlinear interpolation may be applied. The three-dimensional shape data can be generated by assigning labels to positions corresponding to coordinate values obtained by interpolating between the layer data of the detected retinal layer.
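The correlation-coefficient evaluation function described above can be sketched as follows. This is a deliberately simplified, non-authoritative illustration: the function name is hypothetical, and only an integer shift along the depth direction is searched, whereas an actual implementation would also evaluate translation in X, rotation, or a full affine deformation.

```python
import numpy as np

def align_depth_shift(ref, mov, max_shift=20):
    """Find the depth (Z) shift of `mov` that best matches `ref`.

    ref, mov: 2D B-scans (Z x X). Returns the integer shift (in pixels)
    that maximizes the correlation coefficient between the two images,
    i.e. the shift that maximizes the similarity evaluation function.
    """
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(mov, s, axis=0)
        # Correlation coefficient serves as the similarity measure.
        score = np.corrcoef(ref.ravel(), shifted.ravel())[0, 1]
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

For a radial scan, the same search could be restricted to the central columns of each B-scan, reflecting the note above that the coarse end regions are not used as alignment information.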
- Note that the numerical values of the voxel data described above are merely an example, and can be changed as needed depending on the number of A-scans at the time of imaging and the memory size of the apparatus which executes the processing. Since large voxel data have a high resolution, they can express shape data accurately; however, they suffer from a low execution speed and a large memory consumption. Conversely, although small voxel data have a low resolution, they assure a high execution speed and a small memory consumption.
- [Step S204]
- The
image processing apparatus 30 instructs the analysis unit 42 to execute the three-dimensional shape analysis of the retinal layer. As the shape analysis method, a method of measuring the area and volume of the retinal layer will be exemplified below. - This processing will be described below with reference to
FIG. 4. FIG. 4 illustrates the three-dimensional shape data (RPE), measurement surface (MS), area (Area), and volume (Volume). - An area measurement will be described first. In the three-dimensional shape data of the RPE generated in the process of step S203, a flat (planar) measurement surface (MS) is prepared at the place of the layer data located at the deepest portion in the Z direction (optical axis direction). Then, the measurement surface (MS) is moved at given intervals in the shallow direction (toward the origin of the Z-axis) from there. When the measurement surface (MS) is moved from the deep portion of the layer in the shallow direction, it traverses the boundary line of the RPE.
- An area (Area) is obtained by measuring the internal planar region bounded by the measurement surface (MS) and the boundary line of the RPE. More specifically, the area (Area) is obtained by measuring the area of the intersection region between the measurement surface (MS) and the boundary line of the RPE. In this manner, the area (Area) is a cross-sectional area of the three-dimensional retinal layer shape data. Upon measuring the cross-sectional area at the position of a reference portion, when the curvature of the retinal layer is strong, a small area is obtained; when the curvature of the retinal layer is moderate, a large area is obtained.
- A volume (Volume) can be obtained by measuring the whole internal region bounded by the measurement surface (MS) and the boundary line of the RPE, using the measurement surface (MS) used in the measurement of the area (Area). Upon measuring the volume in the downward direction along the Z direction from a reference position, when the curvature of the retinal layer is strong, a large volume is obtained; when the curvature of the retinal layer is moderate, a small volume is obtained. In this case, the reference position can be set at a Bruch's membrane opening position. Alternatively, a given height such as 100 μm or 500 μm from the deepest position of the RPE may be used as a reference. Note that when the number of voxels included in the region to be measured is counted upon measuring the area or volume, the area or volume is calculated by multiplying the number of voxels by the physical size per voxel.
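The voxel-counting area and volume measurement above can be sketched as follows, under the simplifying assumption that the RPE shape is represented as a depth map z_rpe(x, y) rather than labeled voxel data; the function name and the physical pixel/voxel sizes are hypothetical.

```python
import numpy as np

def area_and_volume(z_rpe, z_ms, pixel_mm=0.03, voxel_depth_mm=0.002):
    """Cross-sectional area and enclosed volume at a measurement surface.

    z_rpe: 2D array giving, per (x, y) position, the depth (in voxels)
           of the detected RPE boundary; larger values are deeper in Z.
    z_ms:  depth (in voxels) of the flat measurement surface (MS).
    Returns (area_mm2, volume_mm3).
    """
    # Columns in which the RPE lies deeper than the measurement surface,
    # i.e. the intersection region of the surface with the layer shape.
    below = z_rpe > z_ms
    area_mm2 = below.sum() * pixel_mm * pixel_mm
    # Voxel count between the surface and the RPE in each such column;
    # the volume is that count times the physical size per voxel.
    depth = np.where(below, z_rpe - z_ms, 0.0)
    volume_mm3 = depth.sum() * pixel_mm * pixel_mm * voxel_depth_mm
    return area_mm2, volume_mm3
```

Raising the measurement surface (decreasing z_ms for a deep RPE) enlarges both values, which is the behavior the contour-line map described below visualizes.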
- [Step S205]
- After the shape analysis, the
image processing apparatus 30 instructs the analysis result generation unit 45 to generate an analysis result (for example, a map, graph, or numerical value information) based on the three-dimensional shape analysis result. - A case will be described below wherein a contour line map is generated as the three-dimensional shape analysis result. The contour line map is used when the measurement results of the area and volume are to be displayed.
-
FIG. 5A shows an example of a contour line map 71. The contour line map 71 is an overall contour line map. Reference numeral 72 denotes contour lines drawn at given intervals; and 73, the portion located at the deepest position in the Z direction of the three-dimensional retinal layer shape data. - When the
contour line map 71 is to be generated, a lookup table for the contour line map is prepared, and the map is color-coded according to the volumes with reference to the table. In this case, the lookup table for the contour line map may be prepared according to the volumes. Thus, the operator can recognize changes of the shape and volume at a glance on the map. - More specifically, the
contour lines 72 are drawn at given intervals with the portion 73 in FIG. 5A as the bottom, and the colors of the map are set according to the volumes. For this reason, when the height (depth) of the measurement surface is changed from the portion located at the deepest position in the Z direction, the operator can recognize how the volume increases. More specifically, when the map is set to color-code 1 mm3 as blue and 2 mm3 as yellow, the operator can recognize the relationship between the shape and volume by checking whether blue corresponds to a height (depth) of 100 μm or of 300 μm of the measurement surface from the portion located at the deepest position in the Z direction. Therefore, the operator can recognize the overall volume of the retinal layer shape to be measured by confirming the color of the outermost contour of the map. - Also, the operator can confirm the volume value corresponding to a height (depth) by confirming the color near each internal contour line. Note that the lookup table used upon setting the colors of the contour line map may be prepared according to areas in place of volumes. Alternatively, the lookup table may be prepared according to heights (depths) from the
portion 73 located at the deepest position in the Z direction. Although not shown, numerical values may be displayed on the respective contour lines so as to allow the operator to understand the heights (depths) of the contour lines. As the distance interval between contour lines, for example, a 100-μm interval along the Z direction is set. Note that the contour line map may be either a color or grayscale map, but visibility is higher in case of the color map. - The outer shape size of the contour line map changes depending on the height (depth) from the portion located at the deepest position in the Z direction to the measurement surface (MS).
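The lookup-table color coding described above can be sketched as follows. The threshold values mirror the 1 mm3 = blue / 2 mm3 = yellow example in the text, but the table itself, the extra "red" entry, and the function name are hypothetical.

```python
# Hypothetical lookup table: (upper volume threshold in mm^3, color).
VOLUME_LUT = [(1.0, "blue"), (2.0, "yellow"), (float("inf"), "red")]

def volume_to_color(volume_mm3):
    """Pick the contour-map display color for a measured volume by
    scanning the lookup table in order of increasing threshold."""
    for threshold, color in VOLUME_LUT:
        if volume_mm3 <= threshold:
            return color
    return VOLUME_LUT[-1][1]
```

An analogous table keyed on area or on height (depth) would implement the alternative lookup tables mentioned above.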
FIG. 5B shows an example of a contour line map 74 when the height (depth) of the measurement surface (MS) is changed. - A case in which the curvature of the retinal layer is measured as the three-dimensional shape analysis will be described below using the tomographic image (display field 61) shown in
FIG. 2. In the following description, a case will be explained wherein the abscissa is defined as the x-coordinate axis, the ordinate is defined as the z-coordinate axis, and the curvature of the boundary line of the layer (RPE) as the analysis target is calculated. A curvature κ can be obtained by calculating, at respective points of the boundary line: κ = (d²z/dx²) / (1 + (dz/dx)²)^(3/2)
- The sign of the curvature κ reveals that the shape is upward or downward convex, and the magnitude of a numeral value reveals a curved degree of the shape. For this reason, if upward convex is expressed by “+” and downward convex is expressed by “−”, if each tomographic image includes a − region, + region, and − region as the signs of the curvature, the layer has a W-shape.
- Note that the case has been explained wherein the curvature of the boundary line of the tomographic image is calculated in this case. However, the present invention is not limited to such specific curvature calculation, and three-dimensional curvatures may be calculated from the three-dimensional shape data. In this case, after the shape analysis, the
image processing apparatus 30 instructs the analysis result generation unit 45 to generate a curvature map based on the analysis result. -
FIG. 5C shows an example of the curvature map. In this case, a portion having a strong curvature is expressed by a dark color, and a portion having a moderate curvature is expressed by a light color. More specifically, the color density is changed depending on the curvatures. Note that the colors to be set in the curvature map may be changed depending on positive and negative curvature values with reference to a curvature value of 0. Thus, the operator can recognize whether or not the retina shape is smooth, and whether it is an upward or downward convex shape, by checking the map. - [Step S206]
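(Sketch of the curvature analysis described above, before turning to the display processing.) The boundary-line curvature κ = z'' / (1 + z'²)^(3/2) and the −/+/− W-shape test can be computed with finite differences as below. This is a non-authoritative sketch: the function names and the eps noise threshold are assumptions, and the sign convention (+ for upward convex) depends on the chosen orientation of the z-axis.

```python
import numpy as np

def boundary_curvature(z, dx=1.0):
    """Signed curvature kappa = z'' / (1 + z'^2)^(3/2) of a boundary
    line z(x) sampled at spacing dx, via finite differences."""
    dz = np.gradient(z, dx)
    d2z = np.gradient(dz, dx)
    return d2z / (1.0 + dz ** 2) ** 1.5

def has_w_shape(z, dx=1.0, eps=1e-6):
    """True if the signs of kappa along the line form the pattern
    -, +, -, i.e. the layer boundary has a W-shape."""
    kappa = boundary_curvature(z, dx)
    signs = np.sign(kappa[np.abs(kappa) > eps])  # drop near-zero values
    # Collapse runs of identical signs into one entry each.
    pattern = [s for i, s in enumerate(signs) if i == 0 or s != signs[i - 1]]
    return pattern == [-1.0, 1.0, -1.0]
```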
- The
image processing apparatus 30 instructs the display control unit 35 to display a tomographic image, the detection result of the layer (RPE) detected by the detection unit 41, and the various shape analysis results (map, graph, and numerical value information) generated by the analysis result generation unit 45 on the display device 53. -
FIG. 6 shows an example of a tomographic image observation screen 80 displayed on the display device 53 shown in FIG. 1. This screen is displayed after completion of the analysis of tomographic images (that is, it is displayed by the process of step S206). - The tomographic
image observation screen 80 includes a tomographic image display section 91 including a tomographic image display field 81, and a fundus image display section 94 including a fundus image display field 82. The tomographic image observation screen 80 also includes a first analysis result display section 96 including a first analysis result 84, and a second analysis result display section 98 including second analysis results 85 and 86. - Details of the tomographic
image display section 91 including the tomographic image display field 81 will be described first. The tomographic image display field 81 displays the segmentation results (L1 to L5) obtained by detecting the respective retinal layers, and the measurement surface (MS), superimposed on the captured tomographic image. The tomographic image display field 81 highlights the segmentation result of the retinal layer (the RPE (L4) in this embodiment) as the analysis target. - On the tomographic
image display field 81, a hatched region 81 a bounded by the measurement surface (MS) and the retinal layer (RPE (L4)) as the analysis target is the measurement target region of the area and volume. At this time, the hatched region 81 a is displayed in a color according to the volume measurement result, with a predetermined transparency α. The same color as that set in the lookup table for the contour line map can be used. The transparency α is, for example, 0.5. - A
combo box 92 is provided to allow the operator to select whether the tomographic image is displayed at the OCT ratio or at a 1:1 ratio. In this case, the OCT ratio is the ratio expressed by the resolution in the horizontal direction (X direction) and that in the vertical direction (Y direction), which are obtained based on the number of A-scans at the time of imaging. The 1:1 ratio is the ratio which matches the physical size per pixel in the horizontal direction with that per pixel in the vertical direction, which are obtained based on the number of A-scans used to capture a given range (mm). - A
combo box 93 is provided to allow the operator to switch between a two-dimensional (2D) and a three-dimensional (3D) display mode. In the 2D display mode, one slice of the tomographic image is displayed; in the 3D display mode, the three-dimensional shape of the retinal layers generated from the boundary line data of the retinal layers is displayed. - More specifically, when the operator selects the 3D display mode at the
combo box 93, a tomographic image shown in one of FIGS. 7A to 7C is displayed in the tomographic image display field 81. -
FIG. 7A shows a mode in which the RPE is displayed at the OCT ratio in the 3D display mode. In this case, the measurement surface (MS) is simultaneously displayed in the 3D display mode. - In this case, check
boxes 101 to 104 corresponding to the respective layers of the retina are displayed below the tomographic image display field 81. More specifically, check boxes corresponding to the ILM, RPE, BM, and MS are displayed, and the operator can switch the display/non-display states of the respective layers using these check boxes. - When the measurement surface (MS) is expressed by a plane, its transparency α assumes a value which is larger than 0 and smaller than 1. If the transparency were 1, the measurement surface overlaid on the retinal layer shape would completely cover the shape, and the three-dimensional shape of the retinal layers could not be recognized from the upper side. Alternatively, the measurement surface (MS) may be expressed by a grid pattern in place of a plane. In case of the grid pattern, the transparency α of the measurement surface (MS) may be set to 1. As for the color of the measurement surface (MS), a color according to the measurement value (area or volume) at the location of the measurement surface (MS) need only be selected with reference to the lookup table for the contour line map.
- In this case, the operator can move the position of the measurement surface (MS) via the
input device 54. For this reason, when the position of the measurement surface (MS) is changed, the image processing apparatus 30 changes the contour line map shape in synchronism with that change, as described in FIGS. 5A and 5B. - A
text box 105 is an item used to designate a numerical value. The operator inputs, to the text box 105, a numerical value indicating the height (depth) of the measurement surface (MS) from the portion at the deepest position in the Z direction via the input device 54. For example, when the operator inputs a numerical value such as 100 μm or 300 μm, the measurement surface (MS) is moved to that position, and the contour line map is changed accordingly. Thus, the operator can simultaneously recognize the position in the three-dimensional shape and the contour line map at that time, and can also recognize the volume value and area value at that time. - As another example, the operator may input a volume value in the
text box 105. In this case, when the operator inputs a numerical value such as 1 mm3 or 2 mm3, he or she can simultaneously recognize the position in the three-dimensional shape and the contour line map corresponding to that volume. Furthermore, the operator can also recognize the height (depth) of the measurement surface (MS) from the portion at the deepest position in the Z direction at that time. - Subsequently,
FIG. 7B shows a display mode when the measurement surface (MS) is set in a non-display state. The other display items are the same as those in FIG. 7A. In case of FIG. 7B, the check box 102 is selected to display the RPE alone. In this manner, only the three-dimensional shape of the fundus can be displayed. -
FIG. 7B shows the 3D display mode of the RPE at the OCT ratio, while FIG. 7C shows a mode in which the RPE is displayed at the 1:1 ratio in the 3D display mode. That is, the 1:1 ratio is selected at the combo box 92. - Note that since this embodiment has explained the RPE as the analysis target, when the 2D/3D display mode is switched at the
combo box 93, the RPE shape is displayed in the 2D/3D display mode. However, the present invention is not limited to this. For example, when the Bruch's membrane (BM) is selected as the analysis target, the Bruch's membrane (BM) is displayed in the 2D/3D display mode. - In this case, the position measured by the measurement surface may be schematically displayed on the tomographic
image display field 81. A display mode in this case will be described below with reference to FIGS. 8A to 8D. -
FIG. 8A shows a mode in which an object 110 indicating the position measured by the measurement surface in the currently displayed tomographic image is superimposed on the tomographic image. Reference symbol MS′ denotes a measurement surface (schematic measurement surface) on the object 110. The three-dimensional shape of the retinal layer is rotated in the upward, downward, right, and left directions by an instruction input by the operator via the input device 54. For this reason, in order to allow the operator to recognize the positional relationship between the measurement surface (MS) and the retinal layer, the object 110 and the schematic measurement surface MS′ present an index of the positional relationship. - Note that when the operator changes the positional relationship between the
object 110 and the schematic measurement surface MS′ via the input device 54, the displayed three-dimensional shape of the retinal layer is also changed in synchronism with that change. In this case, since the region of the retinal layer as the analysis target is also changed, the first analysis result 84 and the second analysis results 85 and 86 are changed in synchronism with that change. - Some display modes of the
object 110 will be exemplified below. For example, as shown in FIGS. 8B and 8C, tomographic images corresponding to the respective section positions may be displayed. In this case, in the object 110, tomographic images at the vertical and horizontal positions which intersect the central position are displayed in consideration of the three-dimensional shape. Alternatively, as shown in FIG. 8D, an abbreviation such as “S” or “I” indicating “superior” or “inferior” may be displayed. - Details of the fundus
image display section 94 including the fundus image display field 82 shown in FIG. 6 will be described below. On the fundus image display field 82, an imaging position and its scanning pattern mark 83 are superimposed on the fundus image. The fundus image display section 94 is provided with a combo box 95 which allows the operator to switch the display format of the fundus image. In this case, an SLO image is displayed as the fundus image. - A case will be described below with reference to
FIGS. 9A and 9B wherein the operator switches the display format of the fundus image from the combo box 95. In this case, a case in which “SLO image+map” is displayed and a case in which “fundus photo (second fundus image)+SLO image (first fundus image)+map” are displayed simultaneously will be exemplified as the display formats of the fundus image. Note that the SLO image (first fundus image) can be a two-dimensional fundus image captured simultaneously with a tomographic image; for example, it may be an integrated image generated by integrating tomographic images in the depth direction. The fundus photo (second fundus image) may be a two-dimensional fundus image captured at a timing different from a tomographic image; for example, a contrast radiographic image or the like may be used. -
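The superimposed display formats described next combine the fundus image, SLO image, and map layers according to a transparency α, which amounts to ordinary alpha blending. A minimal sketch, under the assumption (not stated explicitly in the text) that α here acts as an opacity weight for the overlaid layer; the function name is hypothetical.

```python
import numpy as np

def overlay(base_rgb, map_rgb, alpha_map=0.5):
    """Alpha-blend an analysis map over a fundus image.

    base_rgb, map_rgb: float RGB arrays in [0, 1] of the same shape.
    alpha_map: weight of the overlaid map layer (1 shows the map fully,
    0 leaves the base image unchanged).
    """
    return (1.0 - alpha_map) * base_rgb + alpha_map * map_rgb
```

Blending the map at alpha_map=0.5 over a fully visible base reproduces the default transparencies described below; a three-layer display would apply the blend twice (SLO over fundus photo, then map over the result).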
FIG. 9A shows a display mode when the operator selects “SLO image+map” from the combo box 95. Reference numeral 201 denotes an SLO image; and 200, a map. The SLO image 201 and the map 200 are aligned by the alignment unit 47 described using FIG. 1. The SLO image and map are aligned by setting the position and size of the map based on the position of the vision fixation lamp and the scanning range at the time of imaging. - When a superimposing result of the
map 200 on the SLO image 201 is displayed as the fundus image, the transparency α of the displayed SLO image 201 is set to 1, and that of the displayed map 200 is set to be smaller than 1 (for example, 0.5). These transparency parameters α are those set when the operator selects the target data for the first time. The operator can change these parameters via the input device 54 as needed. The parameters changed by the operator are stored in, for example, the external storage device 52. When the same target data is opened the next or a subsequent time, display processing is performed according to the parameters previously set by the operator. - Subsequently,
FIG. 9B shows a display mode when the operator selects “fundus photo+SLO image+map” from the combo box 95. Reference numeral 202 denotes a fundus photo (second fundus image). - In this case, in order to superimpose the
map 200 on the fundus photo 202 (that is, to superimpose themap 200 on the second fundus image), theSLO image 201 is used. This is because thefundus photo 202 is captured at a timing different from a tomographic image, and the imaging position and range cannot be recognized based on themap 200 alone. Hence, using theSLO image 201, thefundus photo 202 and map 200 can be aligned. Thefundus photo 202 andSLO image 201 are aligned by thealignment unit 47 described usingFIG. 1 . - As the alignment method, for example, a blood vessel feature may be used. As a detection method of blood vessels, since each blood vessel has a thin linear structure, blood vessels are extracted using a filter used to emphasize the linear structure. As the filter used to emphasize the linear structure, when a line segment is defined as a structural element, a filter which calculates a difference between an average value of image density values in the structural element and that in a local region which surrounds the structural element may be used. Of course, the present invention is not limited to such specific filter, and a difference filter such as a Sobel filter may be used. Alternatively, eigenvalues of a Hessian matrix may be calculated for each pixel of a density value image, and a line segment-like region may be extracted based on combinations of two eigenvalues obtained as calculation results. The
alignment unit 47 aligns the fundus photo 202 and the SLO image 201 using the blood vessel position information detected by these methods. - Since the
SLO image 201 and the map 200 can be aligned by the aforementioned method, the fundus photo 202 and the map 200 can consequently also be aligned. When a superimposing result of the map 200 on the fundus photo 202 is displayed on the display field 82 of the fundus image display section 94, the transparency α of the displayed fundus photo 202 is set to 1. Also, the transparency of the displayed SLO image 201 and that of the displayed map 200 are set to be smaller than 1 (for example, 0.5). Of course, the values of the transparencies α of the SLO image 201 and the map 200 need not always be the same. For example, the value of the transparency α of the SLO image 201 may be set to 0. - Note that when an eye to be examined is a diseased eye, the alignment processing by the
alignment unit 47 may often fail. An alignment failure is determined when, upon calculation of an inter-image similarity, the maximum similarity does not reach a threshold. Even when the maximum similarity reaches the threshold, the alignment is judged to have failed if the processing ends at an anatomically abnormal position. In this case, the SLO image 201 and map 200 need only be displayed at a preset initial position (for example, the center of the image) with preset initial transparencies. A failure message for the alignment processing is also displayed, prompting the operator to execute position correction via the input device 54. - In this case, when the operator modifies the position or changes the transparency parameters α, if he or she moves, enlarges/reduces, or rotates the
SLO image 201, the map 200 on the SLO image 201 is simultaneously moved, enlarged/reduced, and rotated. That is, the SLO image 201 and map 200 operate as a single image. However, the transparencies α of the SLO image 201 and map 200 are set independently. The parameters changed by the operator via the input device 54 are stored in the external storage device 52, and the next or subsequent display operations are made according to the set parameters. - In this manner, the fundus
image display field 82 in the fundus image display section 94 displays the two-dimensional fundus image, the map superimposed on the fundus image, and the like. Note that FIGS. 9A and 9B illustrate the case in which the contour line map is superimposed at the corresponding position on the fundus image. However, the present invention is not limited to this; a curvature map, layer thickness map, and the like may also be displayed. - When the analysis target layer is to be changed, the operator can select, for example, the segmentation results (L1 to L5) from the tomographic image on the
display field 61 shown in FIG. 2 via the input device 54. When the analysis target layer is switched, the image processing apparatus 30 instructs the display control unit 35 to return the previously highlighted segmentation result layer to normal display and to highlight the new analysis target layer. Thus, the analysis results of an arbitrary layer can be displayed. - Details of the first analysis
result display section 96 including the first analysis result 84 shown in FIG. 6 will be described below. - The
first analysis result 84 displays a shape analysis map generated by the analysis result generation unit 45. A combo box 97 allows the operator to select the map type of the first analysis result 84. In this case, the shape analysis map indicated by the first analysis result 84 is a contour line map. - The type of the shape analysis map indicated as the
first analysis result 84 and that of the shape analysis map superimposed on the fundus image described using FIGS. 9A and 9B can be changed in synchronism with each other by designating the type from the combo box 97. Furthermore, the displayed contents on the second analysis result display section 98 to be described later are also changed in synchronism with such designation. - An example of the tomographic
image observation screen 80 when the curvature result is displayed as the analysis result will be described below with reference to FIG. 10 . - When the analysis result to be displayed is switched from that of the area and volume to the curvature analysis result, the results displayed on the tomographic
image display section 91, fundus image display section 94, first analysis result display section 96, and second analysis result display section 98 have the contents shown in FIG. 10 . - More specifically, on the tomographic
image display field 81, the segmentation results (L1 to L5) of the respective detected retinal layers are superimposed on the captured tomographic image. A curvature map is displayed as the first analysis result 84, and a curvature graph is displayed as the second analysis result 88. On the fundus image display field 82, an image obtained by superimposing the SLO image (first fundus image) 201, the fundus photo (second fundus image) 202, and a curvature map 203 is displayed. - Details of the second analysis
result display section 98 including the second analysis results 85 and 86 shown in FIG. 6 will be described below. - As the
second analysis result 85, a shape analysis graph generated by the analysis result generation unit 45 is displayed. In this case, a graph obtained by measuring the area and volume is displayed. The abscissa plots the height (depth), and the ordinate plots the volume. A solid curve 87 represents the volume. - As the
second analysis result 86, a shape analysis result is displayed as a table. The table displays the area and volume at a height of a given reference value (for example, 100 μm, 500 μm, or the like), and the area and volume corresponding to a height measured when a certain portion (for example, the position of the Bruch's membrane opening) is used as a reference. - A case will be described below with reference to
FIG. 11 wherein area and volume results are displayed on one graph (second analysis result 85). In this case, the abscissa plots the height (depth), the ordinate on the left side of the graph plots the volume, and that on the right side of the graph plots the area. A broken curve 89 is a graph indicating the area, and the solid curve 87 is a graph indicating the volume. - [Step S207]
- The
image processing apparatus 30 instructs the analysis unit 42 to execute shape analysis based on the detection result of the retinal layer. In this analysis processing, analysis using the detection result of the retinal layer is executed without generating three-dimensional shape data or the like; for example, a layer thickness analysis is executed. - [Step S208]
- The
image processing apparatus 30 instructs the analysis result generation unit 45 to generate analysis results (for example, a map, graph, and numerical value information) based on the analysis result. - [Step S209]
- The
image processing apparatus 30 displays a tomographic image, the detection results of layers (RPE, ILM, and the like) detected by the detection unit 41, and analysis results (map, graph, and numerical value information) generated by the analysis result generation unit 45 on the display device 53. -
FIG. 12 shows an example of the tomographic image observation screen 80 displayed on the display device 53 by the process of step S209. - In this case, a layer thickness map is displayed as a
first analysis result 302, and a layer thickness graph is displayed as a second analysis result 301. That is, the analysis results using the detection result of the retinal layer are displayed as the first and second analysis results 302 and 301. - In this case as well, as in the process of step S206, parameters changed by the operator via the
input device 54 are stored in the external storage device 52, and the next or subsequent display operation is made according to the set parameters. - As described above, according to the first embodiment, a plurality of imaging modes including that for myopia analysis are provided, and the analysis processing can be selectively executed according to the imaging mode set when a tomographic image is captured.
- More specifically, as the plurality of imaging modes, at least the imaging mode for myopia analysis and that for non-myopia analysis are provided. In the imaging mode for myopia analysis, a tomographic image suited to three-dimensional shape analysis (analysis of a macular region in a myopia) is captured.
- For a tomographic image captured in the imaging mode for myopia analysis, three-dimensional shape data is generated, and three-dimensional shape analysis is executed based on the shape data. For a tomographic image captured in the imaging mode other than that for myopia analysis, three-dimensional shape data is not generated, and analysis processing based on the detection result of a retinal layer is executed.
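- The mode-dependent dispatch described above can be sketched as follows. All function and mode names here are illustrative assumptions, not the actual interfaces of the image processing apparatus 30:

```python
# Hypothetical sketch of the mode-dependent analysis dispatch: the myopia-
# analysis mode builds three-dimensional shape data before analysis, while
# other modes analyze the layer-detection result directly (no 3-D shape
# data is generated). Names are illustrative, not the patent's actual API.

def analyze(tomogram_slices, imaging_mode):
    layers = detect_retinal_layers(tomogram_slices)   # e.g. ILM, RPE
    if imaging_mode == "myopia_analysis":
        shape_3d = build_3d_shape(layers)             # align + interpolate slices
        return {"mode": imaging_mode, "result": analyze_3d_shape(shape_3d)}
    # non-myopia modes: analysis based directly on the detected layers
    return {"mode": imaging_mode, "result": analyze_layer_thickness(layers)}

# Minimal stand-ins so the sketch is runnable:
def detect_retinal_layers(slices):
    return {"RPE": slices, "ILM": slices}

def build_3d_shape(layers):
    return layers["RPE"]

def analyze_3d_shape(shape):
    return "curvature/area/volume analysis"

def analyze_layer_thickness(layers):
    return "layer thickness analysis"
```

In this sketch, switching the imaging mode string is all that selects between the two analysis paths, mirroring the selective execution described above.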
- The second embodiment will be described below. The second embodiment will explain a case in which a tomographic image is displayed simultaneously in 2D and 3D display modes. More specifically, the second embodiment will explain a case in which a tomographic image, three-dimensional shape data, and analysis results are displayed side by side. Note that the second embodiment will exemplify, as a tomographic image, that captured in a radial scan.
- An example of a tomographic
image observation screen 400 according to the second embodiment will be described below with reference to FIG. 13 . - The tomographic
image observation screen 400 includes a first tomographic image display section 410 including a two-dimensional tomographic image display field 411, and a second tomographic image display section 430 including a three-dimensional tomographic image display field 431. Furthermore, the tomographic image observation screen 400 includes a first analysis result display section 440 including a first analysis result 442, and a second analysis result display section 420 including second analysis results 421 and 422. - In this embodiment, since the first analysis
result display section 440 and second analysis result display section 420 are the same as those shown in FIGS. 6 and 9A used to explain the aforementioned first embodiment, a description thereof will not be repeated. Also, since the second tomographic image display section 430 is the same as that shown in FIGS. 7A to 7C used to explain the first embodiment, a description thereof will not be repeated. In this embodiment, the first tomographic image display section 410 will be mainly described. - The first tomographic
image display section 410 includes, in addition to the two-dimensional tomographic image display field 411, a slider bar 412 used to change the viewpoint position (slice position) of a tomographic image, and an area 413 used to display the slice number of a tomographic image. - The relationship between a
viewpoint direction 503 in a three-dimensional shape and the tomographic image (display field 411) shown in FIG. 13 will be described below with reference to FIG. 14 . Reference numeral 500 denotes an overview of the fundus when the three-dimensional shape of the tomographic image is viewed from above along the depth direction (Z direction). Radial lines 501 indicate the imaging slice positions of the tomographic images. A broken line 502 indicates, of the imaging slice positions 501, the slice position corresponding to the currently displayed tomographic image (display field 411); this slice position is orthogonal to the viewpoint direction 503. More specifically, the tomographic image at the position 502 is the one displayed on the tomographic image display field 411. - A case will be described below wherein the operator operates the tomographic
image observation screen 400 described using FIG. 13 via an input device 54. - Assume that the operator moves the position of the
slider bar 412 from the center to one end via the input device 54. Then, in synchronism with that operation, an image processing apparatus 30 changes the viewpoint position of the three-dimensional tomographic image (three-dimensional shape data) currently displayed on the display field 431. Also, the image processing apparatus 30 changes the slice position of the two-dimensional tomographic image currently displayed on the display field 411 to that of the tomographic image designated by the slider bar 412. - Note that when the operator operates the
slider bar 412, the position of the bar changes continuously rather than discretely. For this reason, the position of the three-dimensional shape data of the currently displayed tomographic image also changes continuously. More specifically, the image processing apparatus 30 displays the three-dimensional shape data of the currently displayed tomographic image so that it rotates about its vertical center axis. -
FIG. 15 shows an example of a screen display of the two-dimensional tomographic image on the display field 411 and the three-dimensional tomographic image (three-dimensional shape data) on the display field 431 when the position of the slider bar 412 is moved to one end. In this case, the relationship between a viewpoint direction 507 in the three-dimensional shape and the two-dimensional tomographic image (display field 411) becomes that shown in FIG. 16 . - When the position of the
slider bar 412 is operated, the image processing apparatus 30 changes the slice position of the two-dimensional tomographic image currently displayed on the display field 411, and also changes the viewpoint position of the three-dimensional tomographic image (three-dimensional shape data) corresponding to that slice position. Note that the viewpoint position of the three-dimensional shape data can be changed according to the operation of the slider bar 412 used to change the slice position of the two-dimensional tomographic image, and vice versa. That is, when the operator changes the viewpoint position of the three-dimensional shape data via the input device 54, the slice position of the two-dimensional tomographic image and the position of the slider bar 412 may be changed accordingly. - As described above, according to the second embodiment, the two-dimensional tomographic image and three-dimensional tomographic image are simultaneously displayed, and their display modes can be changed in synchronism with each other in response to an operation by the operator.
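- The slider synchronization can be sketched as below for a radial scan. The assumption that the n_slices B-scans are evenly spaced over 180 degrees, and the mapping of a continuous slider value to a discrete slice index plus a continuous rotation angle, are illustrative, not the patent's stated implementation:

```python
# Sketch: a continuous slider value t in [0, 1] selects both the displayed
# radial B-scan and the rotation of the 3-D view, so that the viewpoint
# stays orthogonal to the displayed slice. The 180-degree span and the
# rounding scheme are assumptions for illustration.

def slider_to_view(t, n_slices):
    """Map slider position t (0..1) to (slice_index, rotation_deg)."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("slider position must be in [0, 1]")
    angle = t * 180.0                                           # continuous 3-D rotation
    index = min(int(round(t * (n_slices - 1))), n_slices - 1)   # discrete 2-D slice
    return index, angle
```

The continuous angle gives the smooth rotation of the three-dimensional shape data, while the rounded index gives the discrete slice shown on the two-dimensional display field.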
- The third embodiment will be described below. The third embodiment will explain a case in which both the analysis result of shapes of retinal layers and a statistics database held in a database are comparably displayed.
-
FIG. 17 shows an example of the arrangement of an image processing system 10 according to the third embodiment. In this embodiment, differences from the arrangement shown in FIG. 1 used to explain the first embodiment will be mainly described. - An
image processing unit 33 of an image processing apparatus 30 according to the third embodiment newly includes a feature amount obtaining unit 601 and a comparison calculation unit 602 in addition to the arrangement of the first embodiment. - The feature
amount obtaining unit 601 calculates feature amounts (shape feature amounts) indicating shape features of retinal layers based on three-dimensional shape analysis. - The
comparison calculation unit 602 compares shape feature amounts (first feature amounts) with those held in a statistics database 55 stored in an external storage device 52, and outputs the comparison result. Note that the statistics database 55 is generated based on a large number of eye data by integrating race-dependent and age-dependent data. That is, the statistics database 55 holds statistical feature amounts obtained from a plurality of eyes to be examined. Note that in the ophthalmic field, data in the database may be classified based on parameters unique to eyes such as right/left eye-dependent parameters and ophthalmic axis length-dependent parameters. - An example of the sequence of processing of the
image processing apparatus 30 according to the third embodiment will be described below with reference to FIG. 18 . Note that the overall processing upon capturing a tomographic image is the same as the contents described using FIG. 3A of the first embodiment, and a description thereof will not be given. Differences from the processing (detection/analysis processing of retinal layers) shown in FIG. 3B will be mainly explained. - [Steps S301 to S304]
- When the detection/analysis processing of retinal layers is started, the
image processing apparatus 30 executes the processes of steps S201 to S204 shown in FIG. 3B used to explain the first embodiment. Thus, retinal layers are detected from a tomographic image, and three-dimensional shape analysis of the detected retinal layer is executed. - [Step S305]
- The
image processing apparatus 30 instructs the feature amount obtaining unit 601 to calculate shape feature amounts based on the three-dimensional shape analysis executed in the process of step S304. - In this case, shape features of the retinal layer include, for example, feature amounts of an area and volume of the retinal layer, a feature amount indicating a degree (of irregularity) of circularity of the retinal layer, a feature amount of a curvature of the shape of the retinal layer, and the like.
- The method of calculating the feature amounts of the area and volume of the retinal layer will be described first. The feature amounts of the area and volume of the retinal layer can be obtained by calculating increasing/decreasing rates (typically, increasing rates) of the area and volume of the retinal layer. The area and volume of the retinal layer can be calculated by the same method as in the aforementioned first embodiment.
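- As a rough sketch (not the patent's actual implementation), the area and volume at a given height, and the increasing rates across successive heights, might be computed from a segmented layer surface stored as a depth map; the depth-map representation and the pixel spacing are assumptions here:

```python
import numpy as np

# Simplified sketch of the area/volume measurement, assuming the detected
# RPE surface is available as a 2-D height map (height in micrometers above
# its deepest point, one value per A-scan position). The cross-sectional
# area at a height h counts positions below h; the volume integrates the
# gap between h and the surface over that region.

def area_volume_at_height(height_um, h_um, pixel_area_mm2):
    height = np.asarray(height_um, dtype=float)
    below = height < h_um                          # region enclosed at height h
    area_mm2 = below.sum() * pixel_area_mm2
    # volume: sum of (h - height) over the enclosed region, converted μm -> mm
    volume_mm3 = ((h_um - height[below]).sum() * pixel_area_mm2) / 1000.0
    return area_mm2, volume_mm3

def increasing_rate(values):
    """Increase rate between successive heights, e.g. areas at 100/200/300 μm."""
    return [b / a for a, b in zip(values, values[1:]) if a > 0]
```

Calling area_volume_at_height at 100 μm, 200 μm, and 300 μm and passing the results to increasing_rate yields the shape features described in the next paragraph.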
- Upon calculating the increasing/decreasing rates of the area and volume, the feature
amount obtaining unit 601 respectively calculates the areas and volumes at given heights (depths), such as 100 μm, 200 μm, and 300 μm from the deepest portion of the RPE, based on the analysis result of the three-dimensional shape analysis by an analysis unit 42. Then, based on these results, the increasing/decreasing rates of the area and volume are calculated as shape features of the retinal layer. - The feature amount indicating the degree (of irregularity) of circularity of the retinal layer is calculated from, for example, the sectional shape of the retinal layer that is (finally) used as the area measurement target. The calculation method of this feature amount will be described below with reference to
FIGS. 19A and 19B while taking a practical example. -
FIG. 19A shows a horizontal maximum chord length (CHORD H), vertical maximum chord length (CHORD V), and absolute maximum length (MAXIMUM LENGTH). FIG. 19B shows an area (AREA) and perimeter (PERIMETER). A degree CL of circularity can be calculated from these values. An equation of the degree of circularity is described as: - CL = 4πA/L²
- where A is an area (AREA), and L is a perimeter (PERIMETER). By calculating a reciprocal of the degree of circularity, a feature amount indicating a degree of irregularity can be obtained. Of course, the method of calculating the feature amount indicating the degree of irregularity of circularity is not limited to this, and for example, a moment feature around the barycenter may be used.
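- A minimal sketch of these circularity and irregularity feature amounts, assuming the standard 4πA/L² form (the equation image is omitted in this text, so that form is an assumption consistent with the AREA and PERIMETER quantities of FIG. 19B):

```python
import math

# Sketch of the circularity / irregularity features from a measured region.
# CL equals 1 for a perfect circle and decreases as the contour becomes more
# irregular; its reciprocal grows with irregularity.

def circularity(area, perimeter):
    """CL = 4*pi*A / L^2, assuming the standard isoperimetric definition."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def irregularity(area, perimeter):
    """Reciprocal of the degree of circularity, per the description above."""
    return 1.0 / circularity(area, perimeter)
```

For a unit circle (A = π, L = 2π) the sketch yields CL = 1, and any less circular contour, such as a square, yields a value below 1.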
- Subsequently, the method of calculating the feature amount of the curvature of the shape of the retinal layer will be described below. Upon calculating the feature amount of the curvature, the feature
amount obtaining unit 601 calculates curvatures at the respective portions of the retinal layer, and then calculates the average, variance, and standard deviation of the curvatures over all positions. - When the curvature is calculated as a signed value instead of an absolute value, the concavity/convexity of the shape can be recognized from the positive/negative sign. For this reason, upon calculating signed curvature values, the numbers of positive and negative curvature values are counted as feature amounts. In these counts, the curvature value of a perfectly straight line is zero; however, automatic or manual detection of the retinal layer includes some errors. For this reason, values around curvature = 0 may be excluded from the counts, so that only large positive and negative curvature values, that is, only positions with pronounced features, are counted.
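- These curvature feature amounts might be computed as sketched below; the dead-band value eps used to exclude near-zero curvatures from the counts is an illustrative assumption:

```python
import numpy as np

# Sketch of the curvature feature amounts: mean/variance/standard deviation
# over all positions, plus counts of positive and negative signed curvature
# values with a small dead band around zero to absorb layer-detection error,
# as described above.

def curvature_features(signed_curvatures, eps=1e-3):
    k = np.asarray(signed_curvatures, dtype=float)
    return {
        "mean": float(k.mean()),
        "variance": float(k.var()),
        "std": float(k.std()),
        # exclude near-zero values so only clearly curved positions count
        "n_positive": int((k > eps).sum()),
        "n_negative": int((k < -eps).sum()),
    }
```

The mean and variance computed here are the two axes of the graph discussed below with reference to FIG. 20A.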
- [Step S306]
- The
image processing apparatus 30 instructs the comparison calculation unit 602 to compare the feature amounts obtained by the process of step S305 with those in the statistics database 55 stored in the external storage device 52. Note that the statistics database 55 holds information in which a 95% range of normal data is defined as the normal range, the next 4% range as the borderline range, and the remaining 1% range as the abnormal range. - The
comparison calculation unit 602 determines in which of these ranges the analysis results and feature amount values calculated by the processes of steps S304 and S305 are located with respect to the feature amounts held in the statistics database 55. For example, only the area and volume of the analysis results may be compared, or the area and volume values and the feature amount values may be plotted on two axes and compared to decide their locations. Furthermore, when the feature amount is a curvature, the average and variance of the curvature values may be plotted on two axes and compared to decide their ranges. - This point will be further explained. In the graph shown in
FIG. 20A , the abscissa plots the average value of the curvatures, and the ordinate plots their variance. A rhombus 701 indicates the curvature value of a retinal layer free from any symptom of posterior staphyloma in a myopic eye, a triangle 702 indicates that of a retinal layer with a mild symptom of posterior staphyloma, and a rectangle 703 indicates that of a retinal layer with a symptom of posterior staphyloma. - Note that posterior staphyloma is a state in which the shape of the eyeball is deformed so as to project backward. Each plot point corresponds to one case, and this graph plots several tens of cases. The shape feature amounts obtained by the process of step S305 are plotted on the same graph in a format (for example, a different marker such as a circle) that makes them identifiable from the feature amounts held in the
statistics database 55. That is, the shape feature amounts obtained by the process of step S305 are displayed to be comparable with the feature amounts held in the statistics database 55. - In this case, a
region 711 indicates a curvature value region free from symptoms of posterior staphyloma, a region 712 indicates a curvature value region with a mild symptom of posterior staphyloma, and a region 713 indicates a curvature value region with a symptom of posterior staphyloma. Therefore, for example, when the curvature value of a target eye to be examined is measured, if that curvature value is located in the region 711, it is determined that the eye is free from posterior staphyloma; if it is located in the region 713, it is determined that the eye suffers posterior staphyloma. - As described above, according to this embodiment, the graph is divided into a plurality of regions which stepwise indicate the likelihood that an eye to be examined suffers a disease (that is, a display mode indicating stepwise categories of disease symptoms of an eye to be examined), and the curvature values (feature amounts) are plotted on that graph. Also, depending on the region to which a curvature value belongs, the display format (rhombus, triangle, or rectangle) of the object indicating the curvature value is changed. Note that the graph is divided into three regions for the sake of simplicity, but the present invention is not limited to this. For example, each region may be divided into a plurality of sub-regions, for a total of, say, 10 levels, so as to allow the operator to judge the degree of posterior staphyloma.
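- A simplified, one-dimensional sketch of this stepwise region classification follows. The boundary values are purely illustrative assumptions; an actual system would derive the regions 711 to 713 from the statistics database 55, and would use both the mean and the variance:

```python
# Sketch: assign a curvature feature to one of the stepwise regions
# (711 = no symptom, 712 = mild, 713 = present). Boundaries are invented
# for illustration only; real boundaries come from the statistics database.

POSTERIOR_STAPHYLOMA_REGIONS = [
    ("none", 0.05),      # region 711: |mean curvature| below this bound
    ("mild", 0.15),      # region 712
    ("present", None),   # region 713: everything beyond
]

def classify_staphyloma(mean_curvature):
    magnitude = abs(mean_curvature)
    for label, bound in POSTERIOR_STAPHYLOMA_REGIONS:
        if bound is None or magnitude < bound:
            return label
    return POSTERIOR_STAPHYLOMA_REGIONS[-1][0]
```

Extending the list to ten entries gives the finer 10-level grading mentioned above without changing the classification code.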
- [Step S307]
- The
image processing apparatus 30 instructs an analysis result generation unit 45 to generate an analysis result based on the result of the process of step S305. The analysis result is generated in a format which allows the operator to recognize where the analysis values of the target eye to be examined are located in the statistics database 55. - As an example of the analysis result, for example, as described above using
FIG. 20A , when the statistics database 55 is expressed in two dimensions, a graph on which the curvature values of the eye to be examined are plotted to indicate their regions may be used. Alternatively, as shown in FIG. 20B , a color bar (bar graph) which expresses the features of the statistics database 55 in one dimension may be generated, together with an analysis result indicating a position on that color bar (bar graph). - Furthermore, when one value corresponds to each position, as with curvature values, each value may be compared with the curvature value held in the
statistics database 55 at that position to generate a two-dimensional map which makes the locations in the statistics database 55 recognizable. Note that in the two-dimensional map, the average value over a given range may be compared with the statistics database 55 in place of a comparison at each position. - [Step S308]
- The
image processing apparatus 30 instructs a display control unit 35 to display a tomographic image, the detection result of the layers detected by a detection unit 41, and analysis results (map, graph, and numerical value information) generated by the analysis result generation unit 45 on a display device 53. Note that as the analysis results, the map, graph, and the like generated by the process of step S307 may be displayed. More specifically, a tomographic image observation screen 80 shown in FIG. 21 is displayed. - In this case, as a
first analysis result 84, a curvature map of the retinal layer is displayed as the result of comparison with the feature amounts (curvatures) in the statistics database 55. This curvature map presents the comparison of average values over given ranges with the statistics database 55. A darker portion indicates that the curvature value falls outside the normal value range of the statistics database 55, and a lighter portion indicates that it falls within the normal value range. - A second analysis
result display section 98 displays a graph 730 indicating the position of the average and variance of the curvatures within the statistics database 55. A circular point 722 in the graph 730 is a plot of the analysis value of the target eye to be examined. - In the screen example shown in
FIG. 21 , only one map and one graph are displayed. However, the present invention is not limited to this. For example, an area/volume map and the curvature map may be displayed side by side in a first analysis result display section 96. At the same time, an area/volume graph and the curvature graph may be displayed side by side in the second analysis result display section 98. - As described above, according to the third embodiment, feature amounts (shape feature amounts: first feature amounts) indicating shape features of a retinal layer and shape feature amounts (second feature amounts) corresponding to a plurality of eyes to be examined which are held in advance in the statistics database can be displayed (presented) for the operator. Thus, the shape feature amounts of an eye to be examined as a diagnosis target can be presented together with an index as a criterion as to whether or not the eye to be examined suffers a disease.
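- The darker/lighter coloring of the curvature map in FIG. 21 can be sketched as a boolean deviation map over the curvature values; the normal-range bounds used here are illustrative assumptions, and would in practice come from the statistics database 55:

```python
import numpy as np

# Sketch of the two-dimensional comparison map: each (local average)
# curvature is checked against the normal range held in the statistics
# database, and positions falling outside it are flagged, corresponding
# to the darker portions of the displayed curvature map.

def deviation_map(curvature_map, normal_lo, normal_hi):
    k = np.asarray(curvature_map, dtype=float)
    return (k < normal_lo) | (k > normal_hi)   # True = outside the normal range
```

Averaging the curvature map over a given window before calling this function gives the range-averaged comparison mentioned in step S307.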
- The fourth embodiment will be described below. The fourth embodiment will explain the following case. That is, feature amounts held in a statistics database are searched for using shape features (shape feature amounts) of a retinal layer and image features (image feature amounts) of a tomographic image or fundus image. Then, a tomographic image, fundus image, and diagnosis information of an eye to be examined having feature amounts close to the shape features of the current eye to be examined are displayed.
-
FIG. 22 shows an example of the arrangement of an image processing system 10 according to the fourth embodiment. In this embodiment, differences from the arrangement of FIG. 17 used to explain the third embodiment will be mainly described. An image processing apparatus 30 according to the fourth embodiment newly includes a search unit 36 in addition to the arrangement of the third embodiment. - An
external storage device 52 according to the fourth embodiment stores feature amounts of retinal layers, tomographic images, fundus images, and diagnosis information in association with each other as a statistics database 55. The search unit 36 searches the external storage device 52 (that is, the statistics database 55) using shape features of a retinal layer and image features in a tomographic image or fundus image. - An example of the sequence of processing of the
image processing apparatus 30 according to the fourth embodiment will be described below. Since the processing of the image processing apparatus 30 according to the fourth embodiment is executed in the same sequence as that of the aforementioned third embodiment, different processes will be mainly described below using FIG. 18 described above. The different processes include those of steps S305, S306, and S308. - [Step S305]
- The
image processing apparatus 30 instructs a feature amount obtaining unit 601 to extract image feature amounts from a tomographic image or fundus image, in addition to the shape feature amounts described in the aforementioned third embodiment. The image feature amounts include, for example, edge features, color features, histogram features, SIFT (Scale-Invariant Feature Transform) feature amounts, and the like. - [Step S306]
- The
image processing apparatus 30 instructs the search unit 36 to express the shape feature amounts and image feature amounts obtained by the process of step S305 as a multi-dimensional vector, and to compare these feature amounts with data held in the statistics database 55 in the external storage device 52. Thus, an image having similar feature amounts is searched for. As a search method, a nearest neighbor search may be used. Note that an approximate nearest neighbor search or the like may be used to speed up the processing. - [Step S308]
- The
image processing apparatus 30 instructs a display control unit 35 to display a screen including a captured tomographic image and a similar tomographic image obtained by the search of a search unit 36 on a display device 53. - More specifically, a tomographic
image observation screen 900 shown in FIG. 23 is displayed. The tomographic image observation screen 900 includes a first analysis result display section 930 for displaying current image analysis results, and a second analysis result display section 940 for displaying previous image analysis results. - The first analysis
result display section 930 displays a tomographic image 901, SLO image 903, fundus photo 902, (three-dimensional shape) analysis result map 904, and analysis graph 905. A circular point 906 in the analysis graph 905 indicates the plotted analysis value of the target eye to be examined. Also, star-like points 907 indicate the plotted analysis values of the previous eyes to be examined displayed on the second analysis result display section 940. In this case, in the analysis graph 905, the abscissa plots the shape feature amounts, and the ordinate plots the image feature amounts. Of course, the present invention is not limited to this; as the analysis result, a graph indicating curvature values as described in the third embodiment, or the like, may be displayed. - The second analysis
result display section 940 displays tomographic images, SLO images, fundus photos, and diagnosis information. As shown in FIG. 23 , the images and diagnosis information of an identical eye to be examined are displayed along a vertical column. Of course, the images and diagnosis information of an identical eye to be examined may instead be displayed side by side along a horizontal row. - When there are a plurality of images similar to the current eye to be examined, the operator operates a
slider bar 950 via an input device 54 to selectively display these results. Note that previous similar images to be displayed may be switched using a member other than the slider bar 950. For example, when the operator clicks a star-like point 907 on the analysis graph 905 via the input device (for example, a mouse) 54, an image having that feature amount may be displayed beside the current image. - As described above, according to the fourth embodiment, the statistics database is searched, based on feature amounts (shape feature amounts) indicating shape features of a retinal layer and image feature amounts of a tomographic image or fundus image, for tomographic images similar to a captured tomographic image of an eye to be examined and for associated information. Then, the similar tomographic images and associated information are displayed side by side together with the captured tomographic image. Thus, the shape feature amounts of the eye to be examined as a diagnosis target can be presented together with an index used as a criterion as to whether or not the eye to be examined suffers from a disease.
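The nearest-neighbor search described for step S307 can be sketched as follows. This is a minimal brute-force illustration, not the patented implementation: the feature vectors, database contents, and function names below are hypothetical stand-ins for the multi-dimensional vectors held in the statistics database 55, and an approximate method (e.g. a k-d tree) could replace the exhaustive loop to speed up the processing, as the description notes.

```python
import math

def nearest_neighbor(query, database):
    """Return the index of the database entry closest to the query vector.

    Each entry is a multi-dimensional vector combining shape feature
    amounts and image feature amounts.  A brute-force Euclidean
    nearest-neighbor search is performed over all entries.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(database)), key=lambda i: dist(query, database[i]))

# Hypothetical records: (shape feature amount, image feature amount)
stats_db = [(0.10, 0.80), (0.45, 0.30), (0.90, 0.95)]
query = (0.50, 0.25)  # feature vector of the currently captured image
print(nearest_neighbor(query, stats_db))  # → 1 (the most similar record)
```

The index returned would be used to retrieve the similar tomographic image and its associated diagnosis information for display beside the captured image.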
- The representative embodiments of the present invention have been described. However, the present invention is not limited to the above and illustrated embodiments, and the present invention can be practiced while being modified as needed without departing from its scope.
- For example, check boxes, combo boxes, and the like are arranged on some screens described in the aforementioned first to fourth embodiments. Radio buttons, list boxes, buttons, and the like may be substituted for them as needed according to usage. For example, components described as combo boxes in the aforementioned description may be implemented as list boxes.
- Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (for example, computer-readable medium).
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
- This application claims the benefit of Japanese Patent Application No. 2012-015935, filed Jan. 27, 2012, which is hereby incorporated by reference herein in its entirety.
Claims (13)
1. An image processing system comprising:
an analysis unit configured to obtain information indicating a degree of curvature of a retina from a tomographic image of an eye to be examined; and
an obtaining unit configured to obtain a category of the eye to be examined based on an analysis result.
2. The system according to claim 1 , wherein said analysis unit obtains the information indicating the degree of curvature from three-dimensional shape data generated based on predetermined layer information of the retina.
3. The system according to claim 1 , further comprising a selection unit configured to select a predetermined layer from retinal layers,
wherein said analysis unit generates three-dimensional shape data of the selected layer.
4. The system according to claim 1 , further comprising a display control unit configured to display on a display device in a display mode indicating the category.
5. The system according to claim 4 , wherein said analysis unit comprises:
a feature amount obtaining unit configured to obtain a first feature amount indicating a shape feature of retinal layers,
wherein said display control unit displays, on the display device, both the first feature amount obtained by said feature amount obtaining unit and a plurality of second feature amounts indicating shape features of retinal layers of a plurality of eyes to be examined, which are held in advance in a statistics database.
6. The system according to claim 5 , wherein said display control unit displays the first feature amount and the plurality of second feature amounts on the display device by plotting the first feature amount and the plurality of second feature amounts on a graph.
7. The system according to claim 6 , wherein said display control unit displays a plurality of regions which indicate possibility levels of an eye to be examined as a diseased eye stepwise on the graph.
8. The system according to claim 7 , wherein said display control unit displays the plurality of second feature amounts by changing display formats depending on to which of the plurality of regions the plurality of second feature amounts belong.
9. The system according to claim 5 , further comprising a search unit configured to search a plurality of held tomographic images for a similar tomographic image based on the first feature amount obtained by said feature amount obtaining unit,
wherein said display control unit displays the tomographic image found by said search unit on the display device.
10. The system according to claim 5 , further comprising a comparison unit configured to compare the first feature amount obtained by said feature amount obtaining unit and a plurality of second feature amounts held in advance in the statistics database,
wherein said display control unit displays, on a map, differences of a value of the first feature amount obtained by said feature amount obtaining unit from any of values of the plurality of second feature amounts based on the comparison result.
11. The system according to claim 5 , wherein said feature amount obtaining unit obtains, as the first feature amount, any of
increasing/decreasing rates of an area of an intersection region between a retinal layer and a measurement surface as a plane orthogonal to a depth direction of the retinal layer, which region is obtained by moving the measurement surface along the depth direction, and a volume of a region formed between the retinal layer and the measurement surface,
information indicating a degree of circularity of the retinal layer, and
information indicating a curvature of a shape of the retinal layer.
12. A processing method of an image processing system, comprising:
an analysis step of obtaining information indicating a degree of curvature of a retina from a tomographic image of an eye to be examined; and
an obtaining step of obtaining a category of the eye to be examined based on an analysis result.
13. A non-transitory computer readable storage medium storing a program for controlling a computer to execute a processing method of claim 12 .
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/044,429 US9824273B2 (en) | 2012-01-27 | 2016-02-16 | Image processing system, processing method, and storage medium |
US15/789,226 US10482326B2 (en) | 2012-01-27 | 2017-10-20 | Image processing system, processing method, and storage medium |
US16/658,590 US10872237B2 (en) | 2012-01-27 | 2019-10-21 | Image processing system, processing method, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-015935 | 2012-01-27 | ||
JP2012015935A JP6226510B2 (en) | 2012-01-27 | 2012-01-27 | Image processing system, processing method, and program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/044,429 Continuation US9824273B2 (en) | 2012-01-27 | 2016-02-16 | Image processing system, processing method, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130195340A1 true US20130195340A1 (en) | 2013-08-01 |
Family
ID=48870257
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/748,766 Abandoned US20130195340A1 (en) | 2012-01-27 | 2013-01-24 | Image processing system, processing method, and storage medium |
US15/044,429 Active US9824273B2 (en) | 2012-01-27 | 2016-02-16 | Image processing system, processing method, and storage medium |
US15/789,226 Active US10482326B2 (en) | 2012-01-27 | 2017-10-20 | Image processing system, processing method, and storage medium |
US16/658,590 Active US10872237B2 (en) | 2012-01-27 | 2019-10-21 | Image processing system, processing method, and storage medium |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/044,429 Active US9824273B2 (en) | 2012-01-27 | 2016-02-16 | Image processing system, processing method, and storage medium |
US15/789,226 Active US10482326B2 (en) | 2012-01-27 | 2017-10-20 | Image processing system, processing method, and storage medium |
US16/658,590 Active US10872237B2 (en) | 2012-01-27 | 2019-10-21 | Image processing system, processing method, and storage medium |
Country Status (2)
Country | Link |
---|---|
US (4) | US20130195340A1 (en) |
JP (1) | JP6226510B2 (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140063447A1 (en) * | 2012-08-30 | 2014-03-06 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20140313223A1 (en) * | 2013-04-22 | 2014-10-23 | Fujitsu Limited | Display control method and device |
US9002085B1 (en) * | 2013-10-22 | 2015-04-07 | Eyenuk, Inc. | Systems and methods for automatically generating descriptions of retinal images |
JP2015080678A (en) * | 2013-10-24 | 2015-04-27 | キヤノン株式会社 | Ophthalmologic apparatus |
US20150116664A1 (en) * | 2013-10-24 | 2015-04-30 | Canon Kabushiki Kaisha | Ophthalmological apparatus, comparison method, and non-transitory storage medium |
EP2957219A1 (en) * | 2014-06-18 | 2015-12-23 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
US20170065170A1 (en) * | 2015-09-04 | 2017-03-09 | Canon Kabushiki Kaisha | Ophthalmic apparatus |
CN106659378A (en) * | 2014-06-19 | 2017-05-10 | 诺华股份有限公司 | Ophthalmic imaging system with automatic retinal feature detection |
US9990773B2 (en) | 2014-02-06 | 2018-06-05 | Fujitsu Limited | Terminal, information processing apparatus, display control method, and storage medium |
EP3263016A4 (en) * | 2015-02-27 | 2018-10-24 | Kowa Company, Ltd. | Cross-section image capture device |
US10111582B2 (en) | 2014-05-02 | 2018-10-30 | Kowa Company, Ltd. | Image processing device and method to identify disease in an ocular fundus image |
US10251551B2 (en) | 2013-10-29 | 2019-04-09 | Nidek Co., Ltd. | Fundus analysis device and fundus analysis program |
US10292577B2 (en) * | 2015-01-23 | 2019-05-21 | Olympus Corporation | Image processing apparatus, method, and computer program product |
US10577776B2 (en) | 2014-02-24 | 2020-03-03 | Sumitomo(S.H.I.) Construction Machinery Co., Ltd. | Shovel and method of controlling shovel |
CN112638234A (en) * | 2018-09-06 | 2021-04-09 | 佳能株式会社 | Image processing apparatus, image processing method, and program |
US20220020144A1 (en) * | 2016-03-31 | 2022-01-20 | Bio-Tree Systems, Inc. | Methods of obtaining 3d retinal blood vessel geometry from optical coherent tomography images and methods of analyzing same |
CN116091586A (en) * | 2022-12-06 | 2023-05-09 | 中科三清科技有限公司 | Slotline identification method, device, storage medium and terminal |
US20230289374A1 (en) * | 2020-10-08 | 2023-09-14 | Fronteo, Inc. | Information search apparatus, information search method, and information search program |
US11922601B2 (en) | 2018-10-10 | 2024-03-05 | Canon Kabushiki Kaisha | Medical image processing apparatus, medical image processing method and computer-readable medium |
US12040079B2 (en) | 2018-06-15 | 2024-07-16 | Canon Kabushiki Kaisha | Medical image processing apparatus, medical image processing method and computer-readable medium |
US12100154B2 (en) | 2018-08-14 | 2024-09-24 | Canon Kabushiki Kaisha | Medical image processing apparatus, medical image processing method, computer-readable medium, and learned model |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007525495A (en) | 2004-02-11 | 2007-09-06 | アミリン・ファーマシューティカルズ,インコーポレイテッド | Hybrid polypeptides with selectable properties |
CN101094689B (en) | 2004-11-01 | 2013-06-12 | 安米林药品有限责任公司 | Methods of treating obesity and obesity-related diseases and disorders |
SG159551A1 (en) | 2005-02-11 | 2010-03-30 | Amylin Pharmaceuticals Inc | Gip analog and hybrid polypeptides with selectable properties |
EP1922336B1 (en) | 2005-08-11 | 2012-11-21 | Amylin Pharmaceuticals, LLC | Hybrid polypeptides with selectable properties |
BRPI0614649A2 (en) | 2005-08-11 | 2011-04-12 | Amylin Pharmaceuticals Inc | hybrid polypeptides with selectable properties |
EP1971362B1 (en) | 2005-08-19 | 2014-12-03 | Amylin Pharmaceuticals, LLC | Exendin for treating diabetes and reducing body weight |
WO2007133778A2 (en) | 2006-05-12 | 2007-11-22 | Amylin Pharmaceuticals, Inc. | Methods to restore glycemic control |
EP2650006A1 (en) | 2007-09-07 | 2013-10-16 | Ipsen Pharma S.A.S. | Analogues of exendin-4 and exendin-3 |
JP6226510B2 (en) * | 2012-01-27 | 2017-11-08 | キヤノン株式会社 | Image processing system, processing method, and program |
DE102012022058A1 (en) * | 2012-11-08 | 2014-05-08 | Carl Zeiss Meditec Ag | Flexible, multimodal retina image acquisition and measurement system |
JP2016002382A (en) * | 2014-06-18 | 2016-01-12 | キヤノン株式会社 | Imaging device |
JP6280458B2 (en) * | 2014-06-27 | 2018-02-14 | 株式会社キーエンス | Three-dimensional shape measuring apparatus, measurement data processing unit, measurement data processing method, and computer program |
JP6310343B2 (en) * | 2014-06-27 | 2018-04-11 | 株式会社キーエンス | Three-dimensional shape measuring apparatus, measurement data processing unit, measurement data processing method, and computer program |
JP2016041162A (en) * | 2014-08-18 | 2016-03-31 | 株式会社トーメーコーポレーション | Anterior eye part analyzer |
JP6831171B2 (en) * | 2015-03-02 | 2021-02-17 | 株式会社ニデック | Axial axis length measuring device, eyeball shape information acquisition method, and eyeball shape information acquisition program |
JP6748434B2 (en) * | 2016-01-18 | 2020-09-02 | キヤノン株式会社 | Image processing apparatus, estimation method, system and program |
JP6755192B2 (en) * | 2017-01-11 | 2020-09-16 | キヤノン株式会社 | How to operate the diagnostic support device and the diagnostic support device |
JP6909020B2 (en) * | 2017-03-03 | 2021-07-28 | 株式会社トプコン | Fundus information display device, fundus information display method and program |
Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3380334A (en) * | 1963-10-29 | 1968-04-30 | Control Data Corp | Optical scanning system using specular reflections |
US5321501A (en) * | 1991-04-29 | 1994-06-14 | Massachusetts Institute Of Technology | Method and apparatus for optical imaging with means for controlling the longitudinal range of the sample |
US6293674B1 (en) * | 2000-07-11 | 2001-09-25 | Carl Zeiss, Inc. | Method and apparatus for diagnosing and monitoring eye disease |
US6325512B1 (en) * | 2000-10-31 | 2001-12-04 | Carl Zeiss, Inc. | Retinal tracking assisted optical coherence tomography |
US6356036B1 (en) * | 2000-12-01 | 2002-03-12 | Laser Diagnostic Technologies, Inc. | System and method for determining birefringence of anterior segment of a patient's eye |
US6735331B1 (en) * | 2000-09-05 | 2004-05-11 | Talia Technology Ltd. | Method and apparatus for early detection and classification of retinal pathologies |
US20050018133A1 (en) * | 2003-05-01 | 2005-01-27 | The Cleveland Clinic Foundation | Method and apparatus for measuring a retinal sublayer characteristic |
US7146983B1 (en) * | 1999-10-21 | 2006-12-12 | Kristian Hohla | Iris recognition and tracking for optical treatment |
US20070159601A1 (en) * | 2006-01-12 | 2007-07-12 | Arthur Ho | Method and Apparatus for Controlling Peripheral Image Position for Reducing Progression of Myopia |
US20070195269A1 (en) * | 2006-01-19 | 2007-08-23 | Jay Wei | Method of eye examination by optical coherence tomography |
US20070216909A1 (en) * | 2006-03-16 | 2007-09-20 | Everett Matthew J | Methods for mapping tissue with optical coherence tomography data |
US20090033868A1 (en) * | 2007-08-02 | 2009-02-05 | Topcon Medical Systems, Inc. | Characterization of the Retinal Nerve Fiber Layer |
US7497574B2 (en) * | 2002-02-20 | 2009-03-03 | Brother Kogyo Kabushiki Kaisha | Retinal image display device |
US20090123044A1 (en) * | 2007-11-08 | 2009-05-14 | Topcon Medical Systems, Inc. | Retinal Thickness Measurement by Combined Fundus Image and Three-Dimensional Optical Coherence Tomography |
US20090123036A1 (en) * | 2007-11-08 | 2009-05-14 | Topcon Medical Systems, Inc. | Mapping of Retinal Parameters from Combined Fundus Image and Three-Dimensional Optical Coherence Tomography |
US20090268159A1 (en) * | 2008-04-23 | 2009-10-29 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Automated assessment of optic nerve head with spectral domain optical coherence tomography |
US20090268161A1 (en) * | 2008-04-24 | 2009-10-29 | Bioptigen, Inc. | Optical coherence tomography (oct) imaging systems having adaptable lens systems and related methods and computer program products |
US20100220914A1 (en) * | 2009-03-02 | 2010-09-02 | Canon Kabushiki Kaisha | Image processing apparatus and method for controlling the same |
US20100290004A1 (en) * | 2009-05-14 | 2010-11-18 | Topcon Medical System, Inc. | Characterization of Retinal Parameters by Circular Profile Analysis |
US20100290005A1 (en) * | 2009-05-14 | 2010-11-18 | Topcon Medical Systems, Inc. | Circular Profile Mapping and Display of Retinal Parameters |
US20120057127A1 (en) * | 2009-07-14 | 2012-03-08 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
US20120148130A1 (en) * | 2010-12-09 | 2012-06-14 | Canon Kabushiki Kaisha | Image processing apparatus for processing tomographic image of subject's eye, imaging system, method for processing image, and recording medium |
US20120274898A1 (en) * | 2011-04-29 | 2012-11-01 | Doheny Eye Institute | Systems and methods for automated classification of abnormalities in optical coherence tomography images of the eye |
Family Cites Families (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2951252B2 (en) | 1995-12-13 | 1999-09-20 | アロカ株式会社 | Ultrasound Doppler diagnostic device |
JP2912287B2 (en) | 1997-03-14 | 1999-06-28 | 興和株式会社 | Fundus three-dimensional shape measurement device |
US7256881B2 (en) * | 2002-02-15 | 2007-08-14 | Coopervision, Inc. | Systems and methods for inspection of ophthalmic lenses |
WO2006022045A1 (en) * | 2004-08-26 | 2006-03-02 | National University Corporation Nagoya University | Optical interference tomograph |
JP4843242B2 (en) | 2005-03-31 | 2011-12-21 | 株式会社トプコン | Fundus camera |
US7384146B2 (en) * | 2005-06-28 | 2008-06-10 | Carestream Health, Inc. | Health care kiosk having automated diagnostic eye examination and a fulfillment remedy based thereon |
US7668342B2 (en) * | 2005-09-09 | 2010-02-23 | Carl Zeiss Meditec, Inc. | Method of bioimage data processing for revealing more meaningful anatomic features of diseased tissues |
JP2007319416A (en) | 2006-05-31 | 2007-12-13 | Nidek Co Ltd | Retinal function measurement apparatus |
US20070291277A1 (en) * | 2006-06-20 | 2007-12-20 | Everett Matthew J | Spectral domain optical coherence tomography system |
JP4957291B2 (en) | 2006-09-08 | 2012-06-20 | Jfeスチール株式会社 | Apparatus and method for measuring surface distortion |
JP4817184B2 (en) * | 2006-09-08 | 2011-11-16 | 国立大学法人岐阜大学 | Image photographing apparatus and image analysis program |
JP5095167B2 (en) | 2006-09-19 | 2012-12-12 | 株式会社トプコン | Fundus observation apparatus, fundus image display apparatus, and fundus observation program |
JP5178119B2 (en) | 2007-09-28 | 2013-04-10 | キヤノン株式会社 | Image processing apparatus and image processing method |
JP5159242B2 (en) | 2007-10-18 | 2013-03-06 | キヤノン株式会社 | Diagnosis support device, diagnosis support device control method, and program thereof |
JP5328146B2 (en) | 2007-12-25 | 2013-10-30 | キヤノン株式会社 | Medical image processing apparatus, medical image processing method and program |
US8348429B2 (en) * | 2008-03-27 | 2013-01-08 | Doheny Eye Institute | Optical coherence tomography device, method, and system |
EP2312994B1 (en) * | 2008-07-18 | 2021-01-27 | Doheny Eye Institute | Optical coherence tomography - based ophthalmic testing systems |
JP4810562B2 (en) | 2008-10-17 | 2011-11-09 | キヤノン株式会社 | Image processing apparatus and image processing method |
US20100111373A1 (en) * | 2008-11-06 | 2010-05-06 | Carl Zeiss Meditec, Inc. | Mean curvature based de-weighting for emphasis of corneal abnormalities |
JP5473358B2 (en) * | 2009-03-02 | 2014-04-16 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
JP5478914B2 (en) | 2009-03-02 | 2014-04-23 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
JP5543126B2 (en) * | 2009-04-16 | 2014-07-09 | キヤノン株式会社 | Medical image processing apparatus and control method thereof |
JP4850927B2 (en) | 2009-06-02 | 2012-01-11 | キヤノン株式会社 | Image processing apparatus, image processing method, and computer program |
JP4909377B2 (en) | 2009-06-02 | 2012-04-04 | キヤノン株式会社 | Image processing apparatus, control method therefor, and computer program |
JP5474435B2 (en) * | 2009-07-30 | 2014-04-16 | 株式会社トプコン | Fundus analysis apparatus and fundus analysis program |
JP5017328B2 (en) | 2009-08-11 | 2012-09-05 | キヤノン株式会社 | Tomographic imaging apparatus, control method therefor, program, and storage medium |
JP5704879B2 (en) * | 2009-09-30 | 2015-04-22 | 株式会社ニデック | Fundus observation device |
JP5582772B2 (en) | 2009-12-08 | 2014-09-03 | キヤノン株式会社 | Image processing apparatus and image processing method |
JP5698465B2 (en) | 2010-04-22 | 2015-04-08 | キヤノン株式会社 | Ophthalmic apparatus, display control method, and program |
JP5127897B2 (en) * | 2010-08-27 | 2013-01-23 | キヤノン株式会社 | Ophthalmic image processing apparatus and method |
US8931904B2 (en) * | 2010-11-05 | 2015-01-13 | Nidek Co., Ltd. | Control method of a fundus examination apparatus |
JP5220208B2 (en) * | 2011-03-31 | 2013-06-26 | キヤノン株式会社 | Control device, imaging control method, and program |
JP5236089B1 (en) * | 2012-01-26 | 2013-07-17 | キヤノン株式会社 | Optical coherence tomography apparatus, control method of optical coherence tomography apparatus, and program |
JP5924955B2 (en) | 2012-01-27 | 2016-05-25 | キヤノン株式会社 | Image processing apparatus, image processing apparatus control method, ophthalmic apparatus, and program |
JP6146952B2 (en) | 2012-01-27 | 2017-06-14 | キヤノン株式会社 | Image processing apparatus, image processing method, and program. |
JP5932369B2 (en) | 2012-01-27 | 2016-06-08 | キヤノン株式会社 | Image processing system, processing method, and program |
JP6226510B2 (en) * | 2012-01-27 | 2017-11-08 | キヤノン株式会社 | Image processing system, processing method, and program |
US9279660B2 (en) | 2013-05-01 | 2016-03-08 | Canon Kabushiki Kaisha | Method and apparatus for processing polarization data of polarization sensitive optical coherence tomography |
JP2016075585A (en) | 2014-10-07 | 2016-05-12 | キヤノン株式会社 | Imaging device, noise reduction method of tomographic image, and program |
US10492682B2 (en) * | 2015-10-21 | 2019-12-03 | Nidek Co., Ltd. | Ophthalmic analysis device and ophthalmic analysis program |
JP7182350B2 (en) * | 2016-09-07 | 2022-12-02 | 株式会社ニデック | Ophthalmic analysis device, ophthalmic analysis program |
2012
- 2012-01-27 JP JP2012015935A patent/JP6226510B2/en not_active Expired - Fee Related
2013
- 2013-01-24 US US13/748,766 patent/US20130195340A1/en not_active Abandoned
2016
- 2016-02-16 US US15/044,429 patent/US9824273B2/en active Active
2017
- 2017-10-20 US US15/789,226 patent/US10482326B2/en active Active
2019
- 2019-10-21 US US16/658,590 patent/US10872237B2/en active Active
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3380334A (en) * | 1963-10-29 | 1968-04-30 | Control Data Corp | Optical scanning system using specular reflections |
US5321501A (en) * | 1991-04-29 | 1994-06-14 | Massachusetts Institute Of Technology | Method and apparatus for optical imaging with means for controlling the longitudinal range of the sample |
US7146983B1 (en) * | 1999-10-21 | 2006-12-12 | Kristian Hohla | Iris recognition and tracking for optical treatment |
US6293674B1 (en) * | 2000-07-11 | 2001-09-25 | Carl Zeiss, Inc. | Method and apparatus for diagnosing and monitoring eye disease |
US6735331B1 (en) * | 2000-09-05 | 2004-05-11 | Talia Technology Ltd. | Method and apparatus for early detection and classification of retinal pathologies |
US6325512B1 (en) * | 2000-10-31 | 2001-12-04 | Carl Zeiss, Inc. | Retinal tracking assisted optical coherence tomography |
US6356036B1 (en) * | 2000-12-01 | 2002-03-12 | Laser Diagnostic Technologies, Inc. | System and method for determining birefringence of anterior segment of a patient's eye |
US7497574B2 (en) * | 2002-02-20 | 2009-03-03 | Brother Kogyo Kabushiki Kaisha | Retinal image display device |
US20050018133A1 (en) * | 2003-05-01 | 2005-01-27 | The Cleveland Clinic Foundation | Method and apparatus for measuring a retinal sublayer characteristic |
US7347548B2 (en) * | 2003-05-01 | 2008-03-25 | The Cleveland Clinic Foundation | Method and apparatus for measuring a retinal sublayer characteristic |
US20070159601A1 (en) * | 2006-01-12 | 2007-07-12 | Arthur Ho | Method and Apparatus for Controlling Peripheral Image Position for Reducing Progression of Myopia |
US20070195269A1 (en) * | 2006-01-19 | 2007-08-23 | Jay Wei | Method of eye examination by optical coherence tomography |
US20070216909A1 (en) * | 2006-03-16 | 2007-09-20 | Everett Matthew J | Methods for mapping tissue with optical coherence tomography data |
US20090033868A1 (en) * | 2007-08-02 | 2009-02-05 | Topcon Medical Systems, Inc. | Characterization of the Retinal Nerve Fiber Layer |
US20090123044A1 (en) * | 2007-11-08 | 2009-05-14 | Topcon Medical Systems, Inc. | Retinal Thickness Measurement by Combined Fundus Image and Three-Dimensional Optical Coherence Tomography |
US20090123036A1 (en) * | 2007-11-08 | 2009-05-14 | Topcon Medical Systems, Inc. | Mapping of Retinal Parameters from Combined Fundus Image and Three-Dimensional Optical Coherence Tomography |
US8081808B2 (en) * | 2007-11-08 | 2011-12-20 | Topcon Medical Systems, Inc. | Retinal thickness measurement by combined fundus image and three-dimensional optical coherence tomography |
US20090268159A1 (en) * | 2008-04-23 | 2009-10-29 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Automated assessment of optic nerve head with spectral domain optical coherence tomography |
US20090268161A1 (en) * | 2008-04-24 | 2009-10-29 | Bioptigen, Inc. | Optical coherence tomography (oct) imaging systems having adaptable lens systems and related methods and computer program products |
US20100220914A1 (en) * | 2009-03-02 | 2010-09-02 | Canon Kabushiki Kaisha | Image processing apparatus and method for controlling the same |
US20100290004A1 (en) * | 2009-05-14 | 2010-11-18 | Topcon Medical System, Inc. | Characterization of Retinal Parameters by Circular Profile Analysis |
US20100290005A1 (en) * | 2009-05-14 | 2010-11-18 | Topcon Medical Systems, Inc. | Circular Profile Mapping and Display of Retinal Parameters |
US20120057127A1 (en) * | 2009-07-14 | 2012-03-08 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
US20120148130A1 (en) * | 2010-12-09 | 2012-06-14 | Canon Kabushiki Kaisha | Image processing apparatus for processing tomographic image of subject's eye, imaging system, method for processing image, and recording medium |
US8761481B2 (en) * | 2010-12-09 | 2014-06-24 | Canon Kabushiki Kaisha | Image processing apparatus for processing tomographic image of subject's eye, imaging system, method for processing image, and recording medium |
US20120274898A1 (en) * | 2011-04-29 | 2012-11-01 | Doheny Eye Institute | Systems and methods for automated classification of abnormalities in optical coherence tomography images of the eye |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140063447A1 (en) * | 2012-08-30 | 2014-03-06 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US9585554B2 (en) * | 2012-08-30 | 2017-03-07 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20140313223A1 (en) * | 2013-04-22 | 2014-10-23 | Fujitsu Limited | Display control method and device |
US10147398B2 (en) * | 2013-04-22 | 2018-12-04 | Fujitsu Limited | Display control method and device |
US9002085B1 (en) * | 2013-10-22 | 2015-04-07 | Eyenuk, Inc. | Systems and methods for automatically generating descriptions of retinal images |
US9008391B1 (en) * | 2013-10-22 | 2015-04-14 | Eyenuk, Inc. | Systems and methods for processing retinal images for screening of diseases or abnormalities |
US20150110372A1 (en) * | 2013-10-22 | 2015-04-23 | Eyenuk, Inc. | Systems and methods for automatically generating descriptions of retinal images |
US20150110368A1 (en) * | 2013-10-22 | 2015-04-23 | Eyenuk, Inc. | Systems and methods for processing retinal images for screening of diseases or abnormalities |
US9622656B2 (en) * | 2013-10-24 | 2017-04-18 | Canon Kabushiki Kaisha | Ophthalmological apparatus, comparison method, and non-transitory storage medium |
JP2015080678A (en) * | 2013-10-24 | 2015-04-27 | キヤノン株式会社 | Ophthalmologic apparatus |
US20150116664A1 (en) * | 2013-10-24 | 2015-04-30 | Canon Kabushiki Kaisha | Ophthalmological apparatus, comparison method, and non-transitory storage medium |
US10251551B2 (en) | 2013-10-29 | 2019-04-09 | Nidek Co., Ltd. | Fundus analysis device and fundus analysis program |
US9990773B2 (en) | 2014-02-06 | 2018-06-05 | Fujitsu Limited | Terminal, information processing apparatus, display control method, and storage medium |
US10577776B2 (en) | 2014-02-24 | 2020-03-03 | Sumitomo(S.H.I.) Construction Machinery Co., Ltd. | Shovel and method of controlling shovel |
US10111582B2 (en) | 2014-05-02 | 2018-10-30 | Kowa Company, Ltd. | Image processing device and method to identify disease in an ocular fundus image |
US9585560B2 (en) | 2014-06-18 | 2017-03-07 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
CN105310645A (en) * | 2014-06-18 | 2016-02-10 | 佳能株式会社 | Image processing apparatus and image processing method |
EP2957219A1 (en) * | 2014-06-18 | 2015-12-23 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and program |
CN106659378A (en) * | 2014-06-19 | 2017-05-10 | 诺华股份有限公司 | Ophthalmic imaging system with automatic retinal feature detection |
US10292577B2 (en) * | 2015-01-23 | 2019-05-21 | Olympus Corporation | Image processing apparatus, method, and computer program product |
EP3263016A4 (en) * | 2015-02-27 | 2018-10-24 | Kowa Company, Ltd. | Cross-section image capture device |
US10188286B2 (en) | 2015-02-27 | 2019-01-29 | Kowa Company, Ltd. | Tomographic image capturing device |
US10022047B2 (en) * | 2015-09-04 | 2018-07-17 | Canon Kabushiki Kaisha | Ophthalmic apparatus |
US20170065170A1 (en) * | 2015-09-04 | 2017-03-09 | Canon Kabushiki Kaisha | Ophthalmic apparatus |
US20220020144A1 (en) * | 2016-03-31 | 2022-01-20 | Bio-Tree Systems, Inc. | Methods of obtaining 3d retinal blood vessel geometry from optical coherent tomography images and methods of analyzing same |
US11704797B2 (en) * | 2016-03-31 | 2023-07-18 | Bio-Tree Systems, Inc. | Methods of obtaining 3D retinal blood vessel geometry from optical coherent tomography images and methods of analyzing same |
US12040079B2 (en) | 2018-06-15 | 2024-07-16 | Canon Kabushiki Kaisha | Medical image processing apparatus, medical image processing method and computer-readable medium |
US12100154B2 (en) | 2018-08-14 | 2024-09-24 | Canon Kabushiki Kaisha | Medical image processing apparatus, medical image processing method, computer-readable medium, and learned model |
CN112638234A (en) * | 2018-09-06 | 2021-04-09 | 佳能株式会社 | Image processing apparatus, image processing method, and program |
US20210183019A1 (en) * | 2018-09-06 | 2021-06-17 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and computer-readable medium |
US12039704B2 (en) * | 2018-09-06 | 2024-07-16 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and computer-readable medium |
US11922601B2 (en) | 2018-10-10 | 2024-03-05 | Canon Kabushiki Kaisha | Medical image processing apparatus, medical image processing method and computer-readable medium |
US20230289374A1 (en) * | 2020-10-08 | 2023-09-14 | Fronteo, Inc. | Information search apparatus, information search method, and information search program |
CN116091586A (en) * | 2022-12-06 | 2023-05-09 | 中科三清科技有限公司 | Slotline identification method, device, storage medium and terminal |
Also Published As
Publication number | Publication date |
---|---|
US20160162736A1 (en) | 2016-06-09 |
US9824273B2 (en) | 2017-11-21 |
US10482326B2 (en) | 2019-11-19 |
US20200050852A1 (en) | 2020-02-13 |
JP2013153884A (en) | 2013-08-15 |
US10872237B2 (en) | 2020-12-22 |
US20180039833A1 (en) | 2018-02-08 |
JP6226510B2 (en) | 2017-11-08 |
Similar Documents
Publication | Title |
---|---|
US10872237B2 (en) | Image processing system, processing method, and storage medium | |
US9149183B2 (en) | Image processing system, processing method, and storage medium | |
JP6146952B2 (en) | Image processing apparatus, image processing method, and program. | |
US9585560B2 (en) | Image processing apparatus, image processing method, and program | |
JP6526145B2 (en) | Image processing system, processing method and program | |
US8870377B2 (en) | Image processing apparatus, image processing apparatus control method, ophthalmologic apparatus, ophthalmologic apparatus control method, ophthalmologic system, and storage medium | |
US10152807B2 (en) | Signal processing for an optical coherence tomography (OCT) apparatus | |
US8223143B2 (en) | User interface for efficiently displaying relevant OCT imaging data | |
US9307902B2 (en) | Image processing device, image processing system, image processing method, and program | |
CN102469937B (en) | Tomography apparatus and control method for same | |
RU2637851C2 (en) | Image processing device and method for image processing device control | |
US10102621B2 (en) | Apparatus, method, and program for processing image | |
JP6243957B2 (en) | Image processing apparatus, ophthalmic system, control method for image processing apparatus, and image processing program | |
US10916012B2 (en) | Image processing apparatus and image processing method | |
JP7005382B2 (en) | Information processing equipment, information processing methods and programs | |
JP6526154B2 (en) | Image processing apparatus, ophthalmologic system, control method of image processing apparatus, and image processing program | |
JP2019115827A (en) | Image processing system, processing method and program | |
JP2013153880A (en) | Image processing system, processing method, and program | |
JP2013153881A (en) | Image processing system, processing method, and program | |
JP2019195586A (en) | Image processing device, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | AS | Assignment | Owner name: CANON KABUSHIKI KAISHA, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IWASE, YOSHIHIKO;SHINBATA, HIROYUKI;SATO, MAKOTO;SIGNING DATES FROM 20130109 TO 20130116;REEL/FRAME:030219/0215 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |