WO2007110982A1 - Image analyzer and program for stereo eye fundus image - Google Patents

Image analyzer and program for stereo eye fundus image Download PDF

Info

Publication number
WO2007110982A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
stereo
nerve head
optic nerve
Prior art date
Application number
PCT/JP2006/318912
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Fujita
Toshiaki Nakagawa
Yoshinori Hayashi
Original Assignee
Tak Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tak Co., Ltd. filed Critical Tak Co., Ltd.
Publication of WO2007110982A1 publication Critical patent/WO2007110982A1/en

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14Arrangements specially adapted for eye photography
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes

Definitions

  • The present invention relates to an image analysis apparatus and an image analysis program for reproducing the three-dimensional shape of a measurement object based on a stereo image obtained by photographing the object from different positions.
  • In particular, the present invention relates to an image analysis apparatus and an image analysis program for analyzing the information of a stereo image obtained by photographing the fundus oculi and obtaining 3D information that accurately reproduces the 3D shape of the optic nerve head.
  • Fundus images obtained by photographing the fundus are widely used for eye examination and disease diagnosis.
  • observing the shape of the optic disc from the fundus image is very useful in confirming the presence and progression of glaucoma. Therefore, there is a need for a technique that can observe the exact three-dimensional shape of the optic disc.
  • various attempts have been made to reproduce the three-dimensional shape of the optic nerve head based on stereo images obtained by photographing the fundus from different positions.
  • FIG. 1 schematically shows the basic photographing positions used when a stereo image of the fundus of a subject is taken. Photographing is performed at positions 22 and 24, separated by the baseline length L, with the optical axes parallel to each other.
  • The first fundus image 26 is captured by photographing at position 22, and the second fundus image 28 is captured by the camera at position 24.
  • The focal lengths of the camera at position 22 and the camera at position 24 are adjusted so that the plane determined by the fundus image 26 and the plane defined by the fundus image 28 coincide.
  • It is necessary to determine the positions of the image points, such as point P_L in the first image 26 and point P_R in the second image 28, at which the same measurement point P in real space is captured.
  • Such points are called corresponding points.
  • To find corresponding points on the images, matching that compares the characteristics of the pixel value of each pixel is usually performed. As a result of the matching, the location whose pixel-value characteristics are most similar is determined to be the corresponding point.
  • Using the coordinate value P_L(x_L, y_L) in the coordinate system of the first image and the coordinate value P_R(x_R, y_R) in the coordinate system of the second image, the coordinates (x, y, z) of point P in the global coordinate system of real space can be found.
  • The coordinates of point P are obtained by substituting the coordinate values P_L(x_L, y_L) and P_R(x_R, y_R) into the known equations (1), (2) and (3) given in the description.
  • The displacement of the corresponding points caused by the difference in photographing position between the first and second images, such as the value x_L − x_R, is referred to as the parallax.
  • the measurement object is recorded in the image in a shape including distortion derived from the optical characteristics of the lens. For this reason, it is necessary to correct image distortion in order to accurately measure the position of corresponding points and to obtain parallax.
  • In particular, because the fundus is photographed through the crystalline lens and cornea of the eyeball, a stereo image of the fundus contains distortion arising from these additional factors. Since the shapes of the crystalline lens and cornea differ between individuals, the exact shape of the fundus cannot be known by correction based on general optical calculations alone, and it has been difficult to determine the corresponding points accurately.
  • Patent Document 1 discloses a technique for correcting the position of an image using information on red, blue, and green in the upper and lower thirds of a stereo image to determine corresponding points.
  • Patent Document 2 discloses a technique for correcting an image by applying a quadric surface assuming that the eyeball is an elliptical spherical surface.
  • Patent Document 1 Japanese Patent Laid-Open No. 2000-245700
  • Patent Document 2 Japanese Patent Laid-Open No. 2002-34924
  • The amount of light passing through the crystalline lens and cornea, and its wavelength, vary with the illumination conditions and the photographing direction, so the pixel values at locations that should capture the same part of the measurement object may differ between the images.
  • Such pixel-value changes are particularly large in the retinal region at the periphery of the photographed area, and corresponding points were sometimes not extracted accurately in the retinal region.
  • When the 3D shape of the fundus was analyzed on the basis of such incorrect corresponding points, even a smooth surface with no actual unevenness was often recognized and displayed as a structure with large unevenness, and an improvement has been required.
  • The present invention has been made in view of the above problems, and its object is to provide an image analysis apparatus and an image analysis program that obtain accurate parallax values from a stereo image of the fundus oculi and analyze the three-dimensional shape of the optic disc efficiently and accurately.
  • A further object of the present invention is to provide an image analysis apparatus and an image analysis program that accurately analyze the three-dimensional shape of the optic disc even when there are pixel-value changes caused by illumination conditions between the stereo images of the fundus.
  • the invention of claim 1 relates to an image analysis apparatus that analyzes a three-dimensional shape of the optic nerve head from a stereo image obtained by photographing the fundus of a subject.
  • The image analysis apparatus of the present invention includes image input means for inputting pixel data of the stereo images; region identification means for identifying, for each stereo image, the optic nerve head region and the region other than the optic nerve head; image correction means for analyzing the pixel data of the region other than the optic nerve head and aligning the entire images, including the optic nerve head region; corresponding point determination means for analyzing the pixel data of the optic nerve head regions of the stereo images aligned by the image correction means and determining corresponding points that capture the same location within the optic nerve head; and depth information determination means for calculating depth information of the optic nerve head from the positions, in each image, of the corresponding points determined by the corresponding point determination means.
  • a stereo image of the fundus mainly captures the optic disc, the retina, and a plurality of blood vessels running on the retina.
  • As a result of various studies, the inventor found that, by analyzing the pixel data of the region other than the optic disc, that is, the region in which the retina and the blood vessels running on it are photographed, and correcting the image positions on that basis, the three-dimensional shape of the optic disc can be analyzed efficiently and accurately.
  • the optic nerve head has a conical shape with a concave center, and the apex of the cone is located on the far side of the image.
  • the retina is curved along the shape of the eyeball, but because of its relatively large diameter, it is photographed as a substantially plane that is substantially parallel to the image.
  • Focusing on this difference between the shapes of the optic disc and the retina, the inventor found that the entire images can be aligned efficiently by analyzing the region other than the optic disc, and that by then determining and analyzing corresponding points in the optic disc region of the aligned stereo images, the depth information of the optic disc can be calculated accurately.
  • The image correction means of the invention of claim 2 performs image position correction by evaluating the sum of the differences between the pixel values at the same positions in the stereo images and determining the vertical movement amount, the horizontal movement amount, and the horizontal enlargement/reduction ratio between the images.
  • As a result of examining correction methods on many images, the inventor found that the position of a stereo image of the fundus can be corrected by vertical and horizontal movement of the image together with horizontal scaling.
  • Since the movement amounts and the enlargement/reduction ratio can be calculated from the sum of the differences between the pixel values at the same positions in the stereo images, the correction is performed efficiently and quickly.
  • The corresponding point determination means of claim 3 sets a reference point in the optic disc region of the reference image, sets a region of interest in a specific range around the reference point, and searches the optic disc region of the other image for a region whose array of pixel data is most similar to that of the region of interest.
  • When no array of pixel data sufficiently similar to the region of interest is found in the optic disc region of the other image, the optic disc region of the other image is enlarged or reduced in the horizontal direction and the search for the most similar array of pixel data is repeated; the corresponding points between the reference image and the other image are determined based on the search result.
  • When extracting corresponding points in the optic disc region, the corresponding point determination means of the present invention defines a region of interest in the optic disc region of the reference image and searches the optic disc region of the other image for an array of pixel data similar to that of the region of interest. If no similar array of pixel data is obtained, a region having a similar array can be searched for after laterally enlarging or reducing the optic disc region of the other image.
  • The image of the optic nerve head region may have a lateral distortion that differs from that of the image outside this region, but because the corresponding point determination means of the present invention can cope with this distortion as well, corresponding points can be set in the image with higher accuracy.
  • the invention of claim 4 relates to an image analysis program for analyzing the three-dimensional shape of the optic nerve head from pixel data of a stereo image obtained by photographing the fundus of a subject.
  • The image analysis program of the present invention identifies, for each stereo image, the pixel data of the optic disc region and of the region other than the optic disc, analyzes the pixel data of the region other than the optic disc to align the entire images including the optic disc region, analyzes the pixel data of the optic disc region of the aligned stereo images to determine corresponding points that capture the same location within the optic disc, and causes a computer to execute processing for calculating depth information of the optic disc from the positions of the corresponding points in each image.
  • The image analysis apparatus and the image analysis program of the present invention can accurately analyze the three-dimensional shape of the optic nerve head even when there are pixel-value changes caused by illumination conditions between the stereo images.
  • the image analysis apparatus is composed of one computer.
  • The image input means for inputting the pixel data of the stereo images, the region identification means for identifying the optic nerve head region and the region other than the optic nerve head for each stereo image, the image correction means for analyzing the pixel data of the region other than the optic nerve head and aligning the entire images including the optic nerve head region, the corresponding point determination means for analyzing the pixel data of the optic nerve head regions of the aligned stereo images and determining corresponding points that capture the same location within the optic nerve head, and the depth information determination means for calculating depth information of the optic nerve head from the positions of the determined corresponding points in each image are stored in the storage unit (ROM, RAM, etc.) of the computer. These means carry out the analysis by performing the necessary computations on the pixel data of the stereo images using the CPU of the computer.
  • The retina is the innermost membrane of the eyeball and is curved along the surface of the eyeball.
  • The optic nerve head lies at the back of the eyeball, slightly away from the fovea. It consists of a central conical depression and the disc margin surrounding the depression; in other words, the apex of the conical depression is located on the farthest side.
  • The size of the depression of the optic nerve head varies greatly between individuals and with the presence or absence of disease, but its diameter is generally a few percent of the diameter of the eyeball.
  • A stereo image of the fundus is taken by passing light through the front of the eyeball, with each image photographed from a slightly different shooting position.
  • The stereo image analyzed in this embodiment consists of two images: image 2 (hereinafter also called the left image), taken from the left side of the fundus and shown in FIG. 2(a), and image 4 (hereinafter also called the right image), taken from the right side of the fundus and shown in FIG. 2(b).
  • the optic disc region 6, the retinal region 8, and a plurality of blood vessels 10 running on the retina are photographed.
  • Stereo images 2 and 4 of the fundus are taken in this embodiment so that the optic disc region 6 is arranged at approximately the center of the image.
  • The disc margin 12 of the optic nerve head region 6 is photographed in bright orange, and the depressed portion 14 is photographed in an even brighter, paler orange than the disc margin 12.
  • The retinal region 8, which appears in brown to dark brown tones in the stereo images of Fig. 2, is actually curved so as to be convex in the depth direction with respect to the image; however, because the radius of curvature is large and the photographed area is relatively narrow, the change in the depth direction within the photographed area is very small, and the region can be treated as a plane almost parallel to the image plane.
  • Fig. 3 shows the flow of the analysis that the image analyzer carries out to analyze the three-dimensional shape of the optic nerve head region 6 from the two stereo fundus images 2 and 4; the analysis is explained in detail below.
  • the image input means of the image analyzer first inputs pixel data of two stereo images 2 and 4 in step s2.
  • the pixel data of the stereo image in this embodiment includes R (red), G (green), and B (blue) data for each pixel.
  • the image analysis device extracts an outline of the optic disc region 6 for each stereo image using the region identification means.
  • The luminance value calculated from the R, G, and B values of the disc margin 12 of the optic nerve head region 6 is higher than that of the retinal region 8. In addition, where the optic nerve head region 6 and the retinal region 8 adjoin, the change in luminance is large and the boundary (hereinafter also referred to as an edge) is clear.
  • Therefore, by evaluating the pixel data of each pixel in the regions where the luminance change is large, selecting pixels whose luminance exceeds a predetermined threshold, and connecting them into a closed chain, the contour of the optic nerve head region 6 can be determined.
  • This contour can be determined with high accuracy by applying a dynamic contour extraction (active contour) method.
  • The way in which the region identification means extracts the contour of the optic disc region 6 by the dynamic contour extraction method is described in more detail below.
  • First, as preprocessing, the blood vessels 10 captured in the stereo images 2 and 4 are temporarily erased from the images. This prevents the boundary between a blood vessel 10 and the retinal region 8, where the change in luminance value, that is, the edge strength, is high, from being erroneously extracted as the boundary of the optic disc region 6.
  • This process exploits the fact that the pixel values of the blood vessels 10 are darker than those of the surrounding retinal region 8; it can be carried out by applying processing based on morphological operations, in which pixels whose values are lower than those of their surroundings are filled in with the surrounding pixel values.
  • The next processing performed by the region identification means, in order to extract the contour of the optic disc region 6 from the stereo images 2 and 4 in which the pixel values of the blood vessels 10 have been replaced by the pixel values of the retinal region 8, is to define an initial contour composed of control points on each image, which gives a rough initial estimate of the contour of the optic nerve head region 6.
  • The control points constituting the initial contour can be determined by the following processing: a histogram of the pixel values of the stereo images 2 and 4 is created, threshold processing with a predetermined luminance value is applied to the images, the center point of the binarized region corresponding to the optic disc region 6 is determined, and the control points are placed at equal intervals on a circle at a predetermined distance from this center point.
  • Once the initial contour has been defined, the region identification means applies an evaluation expression whose terms are the edge strength around each control point, the distance between control points, and the slope between control points, with each term weighted in view of the edge-strength characteristics of the stereo image, and computes the optimal positions of the control points that minimize the value of this expression.
  • The positions of the control points move and converge as the computation is iterated, and the final arrangement of the control points is the contour extraction result.
  • Applying the contour of the optic disc region 6 determined in this way, Fig. 4 shows a stereo image of the fundus in which the optic disc region 6 is shown in white and the retinal region 8 and blood vessels 10 outside the optic disc region 6 are shown in black.
  • In step s6, the image analysis apparatus uses the image correction means to analyze the pixel data of the retinal regions 8 and blood vessels 10 of the stereo images 2 and 4 and performs alignment of the images.
  • In the stereo images, the position and apparent shape of the optic disc region 6 differ considerably from image to image.
  • The image correction means of the image analysis apparatus can therefore carry out the alignment correction with very high accuracy by analyzing the pixel data of the retinal region 8 and blood vessels 10.
  • The image correction means translates the right image 4 relative to the left image 2, one pixel at a time, over a range of several tens of pixels in the vertical direction and in the horizontal direction, superimposing the images at each offset, and calculates the cross-correlation feature of the pixel values of the two images in the retinal region 8 and blood vessels 10.
  • the cross-correlation feature is obtained by calculating a cross-correlation function using the RGB value, color information such as luminance, saturation, and brightness calculated based on the RGB value, or edge strength.
  • The image correction means stores, as correction values, the horizontal and vertical movement amounts at which the alignment correction is achieved.
  • The image correction means evaluates the cross-correlation features obtained by translating the image, and when it determines that a better alignment is required, it can also enlarge or reduce the right image 4 of the stereo image in the horizontal direction at a predetermined magnification and then translate it and calculate the cross-correlation feature with the left image 2. This is because a higher cross-correlation feature value can sometimes be obtained by enlarging or reducing one of the images in the horizontal direction.
  • The enlargement/reduction ratio applied to the right image 4 can be determined with reference to the size of the optic disc region 6 identified by the region identification means.
  • When enlarging the right image 4 in the horizontal direction, the image correction means inserts new pixel columns at predetermined intervals between the existing columns of pixels, and the values of the inserted pixels are defined by linear interpolation from the adjacent existing pixel values. When the right image 4 is reduced, pixel columns are deleted at predetermined intervals (a minimal code sketch of this column insertion and deletion is given after this list).
  • In this way, the image correction means of the present embodiment performs image alignment by calculating the cross-correlation feature of the pixel values between the two images using the pixel values of the retinal region 8 and blood vessels 10 of the stereo images 2 and 4.
  • Because the retinal region 8 and blood vessels 10 occupy only about one half to two thirds of the stereo images 2 and 4, the analysis proceeds far more efficiently than when the entire image is analyzed.
  • Figure 5 shows the right image 16 with the alignment corrected.
  • Because the cross-correlation feature calculated by the cross-correlation function used in this embodiment does not change even when the luminance is linearly transformed between the two stereo images 2 and 4, the alignment can be corrected even for images whose contrast differs due to differences in illumination conditions.
  • In step s8, the image analysis apparatus uses the corresponding point determination means to analyze the pixel data of the optic disc regions 6 of the aligned stereo images 2 and 16, and extracts and determines corresponding points that capture the same location within the optic disc region 6.
  • The corresponding point determination means sets reference points 18 over the entire optic disc region 6 of the left image 2 and searches the right image 16 for the point corresponding to each reference point 18.
  • Fig. 6(a) shows an example of the reference points set in the optic disc region 6.
  • The corresponding point determination means sets a rectangular region with sides of several tens of pixels, centered on the reference point 18 in the left image 2, as the region of interest 19. Next, as shown in Fig. 6(b), a corresponding region 20 of the same size is set in the optic disc region 6 of the right image 16, near the position of the region of interest 19 in the left image 2, and the similarity between the pixel-value pattern of the corresponding region 20 and that of the region of interest 19 is evaluated.
  • The corresponding point determination means can calculate a correlation feature between the pixel values of the region of interest 19 and those of the corresponding region 20 using a least-squares matching method to evaluate their similarity; even if there is a linear change in color tone between the stereo images, reasonable matching is possible.
  • In step s10, the corresponding point determination means moves the corresponding region 20 over the image 16 and evaluates the cross-correlation feature of the pixel values of the region of interest 19 and the corresponding region 20. If no corresponding region 20 with a sufficiently large cross-correlation feature is obtained by this movement, the process proceeds to step s12 and the optic nerve head region 6 of the right image 16 is further corrected.
  • The corresponding point determination means laterally enlarges or reduces the optic disc region 6 of the right image 16 and then searches again for a corresponding region 20 whose pixel data pattern is similar to that of the region of interest 19, because lateral distortion can occur in the image of the optic nerve head region 6.
  • By performing this lateral enlargement or reduction of the optic nerve head region 6 of the right image 16, the corresponding point determination means can determine the corresponding points with higher accuracy.
  • When the optic disc region 6 is enlarged in the horizontal direction, new pixel columns are inserted at predetermined intervals between the existing columns of pixels, and the values of the inserted pixels are defined by linear interpolation from the adjacent existing pixel values; when it is reduced, pixel columns are deleted at predetermined intervals.
  • In this way, the corresponding point determination means corrects the optic nerve head region 6 of the right image 16 as necessary, searches the right image 16 for a corresponding region 20 whose cross-correlation feature with the region of interest 19 around the reference point 18 set in the left image 2 is sufficiently high, and determines the corresponding point 21 (step s14).
  • Corresponding point determination means determines and stores a set of the reference point 18 of the left image 2 and the corresponding point 21 of the right image 16 over the entire optic disc region 6.
  • In step s16, the image analysis apparatus uses the depth information determination means to calculate depth information of the optic nerve head from the positions, in each image, of the stored reference point / corresponding point pairs.
  • To calculate the depth information, a measurement method based on the principle of triangulation is most commonly applied.
  • FIG. 7 shows an output example of a three-dimensional image of the optic nerve head region 6 constructed by the image analysis device based on the depth information calculated by the depth information determining means.
  • For the retinal region 8 and blood vessels 10 around the optic disc region 6, the image of the left image 2 is applied as it is.
  • Since corresponding points are extracted and depth information is calculated only for the optic disc region 6, the fundus image including the exact shape of the optic disc region 6 is output very quickly.
  • In the past, because corresponding points were not determined accurately in the retinal region 8, a structure with large unevenness was displayed even for a smooth surface that actually has no unevenness; here the retinal region 8 can be displayed efficiently and smoothly.
  • By analyzing the pixel data of the stereo images 2 and 4 of the fundus in this way, it is possible to obtain 3D information that reproduces the 3D shape of the optic disc region 6 accurately, efficiently, and quickly.
  • Moreover, the image analysis apparatus of this embodiment can accurately analyze the three-dimensional shape of the optic disc region 6 even when the stereo images 2 and 4 being analyzed contain pixel-value changes caused by illumination conditions and the like.
  • the image analysis apparatus in this embodiment can be replaced with an image analysis program that can cause a computer to execute the same process.
  • each means of the image analysis apparatus of the present embodiment can be configured as an external apparatus that is modularized and connected to a computer.
  • The specific image analysis methods employed by the region identification means, image correction means, corresponding point determination means, and depth information determination means of the image analysis apparatus can be selected and changed as appropriate according to the characteristics of the image content and the measurement object.
  • FIG. 1 is a diagram showing a basic imaging position when a stereo image of the fundus of a subject is captured.
  • FIG. 2 is a diagram showing an example of stereo images 2 and 4 to be analyzed.
  • FIG. 3 is a flowchart showing the contents of image processing by the image analysis apparatus of one embodiment of the present invention.
  • FIG. 4 is a stereo image diagram of the fundus oculi where the optic disc region 6 is shown in white and the retinal region 8 and blood vessels 10 other than the optic disc region 6 are shown in black by the region identification process.
  • FIG. 5 is a diagram showing a right image 16 that has been subjected to alignment correction.
  • FIG. 6 shows a left image 2 in which a region of interest 19 is arranged and a right image 4 in which a corresponding region 20 is arranged.
  • FIG. 7 is a diagram showing an output example of a three-dimensional image of the optic nerve head region 6 constructed by the image analysis apparatus.
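The horizontal enlargement and reduction referred to above are carried out by inserting or deleting pixel columns with linear interpolation. The following is a minimal Python sketch under the assumption that the image is a 2-D grayscale numpy array and that the positions of the inserted or deleted columns are chosen at evenly spaced intervals; it illustrates the idea rather than the patent's exact procedure.

```python
import numpy as np

def stretch_columns(img, n_insert):
    """Enlarge a 2-D grayscale image horizontally by inserting `n_insert`
    new pixel columns at evenly spaced positions; each inserted column is
    the linear interpolation (here the average) of its two neighbours."""
    h, w = img.shape
    positions = np.linspace(1, w - 1, n_insert, dtype=int)
    out = img.astype(float)
    for offset, pos in enumerate(positions):
        p = pos + offset                      # account for columns already inserted
        new_col = (out[:, p - 1] + out[:, p]) / 2.0
        out = np.insert(out, p, new_col, axis=1)
    return out

def shrink_columns(img, n_delete):
    """Reduce a 2-D grayscale image horizontally by deleting columns at
    evenly spaced positions."""
    w = img.shape[1]
    positions = np.linspace(0, w - 1, n_delete, dtype=int)
    return np.delete(img, positions, axis=1)
```

For example, stretch_columns(img, 10) widens a 200-column image to 210 columns while keeping its content smooth, and shrink_columns(img, 10) narrows it again.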

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

[PROBLEMS] To provide an image analyzer with which the three-dimensional shape of the optic papilla can be analyzed accurately and efficiently from a stereo eye fundus image. [MEANS FOR SOLVING PROBLEMS] An image analyzer in which the pixel data of the regions other than the optic papilla in the stereo image are analyzed, the whole images including the optic papilla region are aligned, the pixel data of the optic papilla region in the aligned stereo images are analyzed, corresponding points photographing the same position on the optic papilla are determined, and the depth data of the optic papilla are calculated based on the position data of the corresponding points in each image.

Description

Specification
Image analysis apparatus and program for stereo images of the fundus

Technical Field
[0001] The present invention relates to an image analysis apparatus and an image analysis program for reproducing the three-dimensional shape of a measurement object based on a stereo image obtained by photographing the object from different positions. In particular, the present invention relates to an image analysis apparatus and an image analysis program for analyzing the information of a stereo image obtained by photographing the fundus and obtaining 3D information that accurately reproduces the 3D shape of the optic nerve head.
Background Art
[0002] Fundus images obtained by photographing the fundus are widely used for eye examination and for diagnosing diseases. In particular, observing the shape of the optic nerve head in a fundus image is very useful for confirming the presence of glaucoma and its degree of progression. A technique that allows the exact three-dimensional shape of the optic nerve head to be observed is therefore needed. As one means of observing the three-dimensional shape of the optic nerve head, various attempts have been made to reproduce it from stereo images obtained by photographing the fundus from different positions.
[0003] Fig. 1 schematically shows the basic photographing positions used when a stereo image of a subject's fundus is taken for the purpose of obtaining information representing the three-dimensional shape of the fundus. The images are taken at positions 22 and 24, separated by the baseline length L, with the optical axes parallel to each other. A first fundus image 26 is taken from position 22, and a second fundus image 28 is taken by the camera at position 24. The focal lengths of the cameras at positions 22 and 24 are adjusted so that the plane determined by fundus image 26 and the plane defined by fundus image 28 coincide.
[0004] A method of analyzing image 26 and image 28 to obtain three-dimensional coordinate values representing the shape of the fundus is briefly described. In the analysis of a stereo image, it is first necessary to determine the positions of the image points, such as point P_L in the first image 26 and point P_R in the second image 28, at which the same measurement point P in real space is captured. Such points are called corresponding points. To find corresponding points on the images, matching that compares the characteristics of the pixel value of each pixel is usually performed; as a result of the matching, the location whose pixel-value characteristics are most similar is determined to be the corresponding point.

[0005] Once the positions of the corresponding points P_L and P_R in the images have been determined, the coordinates (x, y, z) of point P in the global coordinate system of real space can be obtained from the coordinate value P_L(x_L, y_L) in the coordinate system of the image containing P_L and the coordinate value P_R(x_R, y_R) in the coordinate system of the image containing P_R. The coordinates of point P are obtained by substituting P_L(x_L, y_L) and P_R(x_R, y_R) into the known equations (1), (2) and (3) below and solving them. In particular, the value of z, which is the depth of the measurement point, is easily calculated from the baseline length L of the cameras, the focal length f of the cameras, and the displacement x_L − x_R of the corresponding points caused by the difference in photographing position between the first and second images. A displacement of corresponding points such as the value x_L − x_R, caused by the difference in photographing position between the first and second images, is called the parallax.
[0006] [Equation 1]

    x = L · (x_L + x_R) / ( 2 · (x_L − x_R) )   … (1)
    y = L · y_L / (x_L − x_R)   … (2)
    z = L · f / (x_L − x_R)   … (3)
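As a quick illustration of equations (1) to (3), the following minimal Python sketch computes the real-space coordinates of a point from a pair of corresponding image points; the baseline, focal length, and corresponding-point coordinates used in the usage line are hypothetical example values, not values taken from this description.

```python
def triangulate(p_left, p_right, baseline, focal_length):
    """Recover real-space coordinates (x, y, z) from a pair of corresponding
    points, assuming parallel optical axes and a common image plane
    (equations (1)-(3) above)."""
    xl, yl = p_left
    xr, yr = p_right
    disparity = xl - xr               # parallax x_L - x_R
    if disparity == 0:
        raise ValueError("zero parallax: point is at infinity")
    x = baseline * (xl + xr) / (2.0 * disparity)
    y = baseline * yl / disparity
    z = baseline * focal_length / disparity
    return x, y, z

# hypothetical example values (image-plane coordinates in millimetres)
print(triangulate(p_left=(1.2, 0.8), p_right=(0.9, 0.8),
                  baseline=2.0, focal_length=30.0))
```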
[0007] Because a stereo image is obtained through the lens of a camera, the measurement object is recorded in the image with distortion derived from the optical characteristics of the lens. To measure the positions of corresponding points accurately and obtain the parallax, the image distortion must therefore be corrected. In particular, because the fundus is photographed through the crystalline lens and cornea of the eyeball, a stereo image of the fundus contains distortion arising from these additional factors. Since the shapes of the crystalline lens and cornea differ between individuals, the exact shape of the fundus cannot be known by correction based on general optical calculations alone, and it has been difficult to determine the corresponding points accurately.

[0008] For this reason, various attempts have been made to determine corresponding points after correcting the distortion contained in a stereo image of the fundus and thereby determine an accurate three-dimensional shape. For example, Patent Document 1 discloses a technique in which the positions of the images are corrected using the red, blue and green information in the upper and lower thirds of the stereo image and corresponding points are then determined. Patent Document 2 discloses a technique in which the eyeball is assumed to be an elliptical sphere and the image is corrected by fitting a quadric surface.
Patent Document 1: Japanese Patent Laid-Open No. 2000-245700
Patent Document 2: Japanese Patent Laid-Open No. 2002-34924
[0009] However, in order to correct the positional difference between the images, these conventional techniques analyze pixel data over the entire image, or over two thirds of the image, and the image processing required for the correction is very time-consuming.
[0010] In addition, as a problem specific to stereo images of the fundus, the amount of light passing through the crystalline lens and cornea and its wavelength vary with the illumination conditions and the photographing direction, so the pixel values at locations that should capture the same part of the measurement object may differ between the images. Such pixel-value changes are particularly large in the retinal region at the periphery of the photographed area, and corresponding points were sometimes not extracted accurately in the retinal region. When the three-dimensional shape of the fundus was analyzed on the basis of such incorrect corresponding points, even a smooth surface with no actual unevenness was often recognized and displayed as a structure with large unevenness, and an improvement has been required.
Disclosure of the Invention

Problems to Be Solved by the Invention
[0011] The present invention has been made in view of the above problems, and its object is to provide an image analysis apparatus and an image analysis program that obtain accurate parallax values from a stereo image of the fundus and analyze the three-dimensional shape of the optic nerve head efficiently and accurately.
[0012] A further object of the present invention is to provide an image analysis apparatus and an image analysis program that accurately analyze the three-dimensional shape of the optic nerve head even when there are pixel-value changes caused by illumination conditions or the like between the stereo images of the fundus.
Means for Solving the Problems

[0013] The invention of claim 1 relates to an image analysis apparatus that analyzes the three-dimensional shape of the optic nerve head from stereo images obtained by photographing the fundus of a subject. The image analysis apparatus of the present invention includes image input means for inputting pixel data of the stereo images; region identification means for identifying, for each stereo image, the optic nerve head region and the region other than the optic nerve head; image correction means for analyzing the pixel data of the region other than the optic nerve head and aligning the entire images, including the optic nerve head region; corresponding point determination means for analyzing the pixel data of the optic nerve head regions of the stereo images aligned by the image correction means and determining corresponding points that capture the same location within the optic nerve head; and depth information determination means for calculating depth information of the optic nerve head from the positions, in each image, of the corresponding points determined by the corresponding point determination means.
[0014] A stereo image of the fundus mainly captures the optic nerve head, the retina, and a plurality of blood vessels running on the retina. As a result of various studies, the inventor found that, by analyzing the pixel data of the region other than the optic nerve head, that is, the region in which the retina and the blood vessels running on it are photographed, and correcting the image positions on that basis, the three-dimensional shape of the optic nerve head can be analyzed efficiently and accurately. The optic nerve head has a conical shape whose center is depressed, and the apex of the cone lies on the far side of the image. The retina is curved along the shape of the eyeball, but because its diameter is relatively large it is photographed as an approximately flat surface nearly parallel to the image. Focusing on this difference between the shapes of the optic nerve head and the retina, the inventor found that the entire images can be aligned efficiently by analyzing the region other than the optic nerve head, and that by determining and analyzing corresponding points in the optic nerve head region of the aligned stereo images, the depth information of the optic nerve head can be calculated accurately.
[0015] The image correction means of the invention of claim 2 performs image position correction by evaluating the sum of the differences between the pixel values at the same positions in the stereo images and determining the vertical movement amount, the horizontal movement amount, and the horizontal enlargement/reduction ratio between the images.
[0016] As a result of analyzing many images and examining correction methods, the inventor found that the position of a stereo image of the fundus can be corrected by vertical and horizontal movement of the image together with horizontal scaling. Since the movement amounts and the enlargement/reduction ratio can be calculated from the sum of the differences between the pixel values at the same positions in the stereo images, the correction is performed efficiently and quickly.
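A minimal sketch of this kind of correction is given below. It assumes the two images are grayscale numpy arrays of equal size and that a boolean mask marks the region other than the optic nerve head (retina and vessels); the search ranges, candidate scales, and the use of the sum of absolute differences as the similarity measure are illustrative assumptions, not the exact implementation of this description.

```python
import numpy as np

def align_stereo(left, right, mask, max_shift=20, scales=(0.95, 1.0, 1.05)):
    """Find the vertical/horizontal shift and horizontal scale of `right`
    that best matches `left` on the masked (non-disc) region, by minimizing
    the sum of absolute pixel-value differences at the same positions."""
    h, w = left.shape
    best = (0, 0, 1.0, np.inf)
    for scale in scales:
        # horizontal scaling by resampling column coordinates (linear interpolation)
        cols = np.clip(np.arange(w) / scale, 0, w - 1)
        lo = np.floor(cols).astype(int)
        hi = np.minimum(lo + 1, w - 1)
        frac = cols - lo
        scaled = (1 - frac) * right[:, lo] + frac * right[:, hi]
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                # np.roll wraps around at the borders; a real implementation
                # would crop or pad instead
                shifted = np.roll(np.roll(scaled, dy, axis=0), dx, axis=1)
                diff = np.abs(left - shifted)[mask].sum()
                if diff < best[3]:
                    best = (dy, dx, scale, diff)
    return best[:3]   # (vertical shift, horizontal shift, horizontal scale)
```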
[0017] The corresponding point determination means of claim 3 sets a reference point in the optic nerve head region of the reference image, sets a region of interest in a specific range around the reference point, and searches the optic nerve head region of the other image for a region whose array of pixel data is most similar to that of the region of interest. When no array of pixel data sufficiently similar to the region of interest is found in the optic nerve head region of the other image, the optic nerve head region of the other image is enlarged or reduced in the horizontal direction and the search for the most similar array of pixel data is repeated; the corresponding points between the reference image and the other image are then determined based on the search result.
In this case, the region of the optic nerve head in another image is expanded or reduced in the horizontal direction to search for a region having an array of pixel data that is most similar to the pixel data of the region of interest, and based on the search result. It is characterized by determining corresponding points between the image used as a reference and another image.
[0018] 本発明の対応点決定手段は、視神経乳頭の領域で対応点を抽出する際に、基準と する画像の視神経乳頭の領域に関心領域を定め、他の画像の視神経乳頭の領域か ら、関心領域の画素データと類似する画素データの配列を検索する。そして、類似 する画素データの配列が得られな 、場合には、他の画像の視神経乳頭の領域の横 方向の拡大若しくは縮小を行って、類似する画素データの配列を有する領域を検索 することができる。視神経乳頭の領域の画像には、周辺部の領域外の画像とは異な る横方向の歪みが生じることがあるが、本発明の対応点決定手段は、この歪みにも対 応することができるため、より精度高く画像内に対応点を設定することができる。  [0018] The corresponding point determining means of the present invention determines a region of interest in the optic disc region of the reference image when extracting corresponding points in the optic disc region, and from the optic disc region of other images. Then, an array of pixel data similar to the pixel data of the region of interest is searched. If an array of similar pixel data cannot be obtained, a region having a similar array of pixel data may be searched by performing lateral enlargement or reduction of the optic nerve head region of another image. it can. The image of the optic nerve head region may have a lateral distortion different from the image outside the peripheral region, but the corresponding point determination means of the present invention can also cope with this distortion. Therefore, the corresponding points can be set in the image with higher accuracy.
[0019] 請求項 4の発明は、被験者の眼底を撮影したステレオ画像の画素データから、視神 経乳頭の 3次元形状を解析する画像解析プログラムに関する。本発明の画像解析プ ログラムは、ステレオ画像毎に視神経乳頭の領域と視神経乳頭以外の領域の各画素 データを識別し、ステレオ画像の視神経乳頭以外の領域の画素データを解析して、 視神経乳頭の領域を含む画像全体の位置合わせを行 ヽ、位置合わせされたステレ ォ画像の視神経乳頭の領域の画素データを解析して、?見神経乳頭の中の同一箇所 を撮影している対応点を決定し、対応点の各画像における位置情報から、視神経乳 頭の奥行き情報を計算する処理をコンピュータに実行させるものである。  [0019] The invention of claim 4 relates to an image analysis program for analyzing the three-dimensional shape of the optic nerve head from pixel data of a stereo image obtained by photographing the fundus of a subject. The image analysis program of the present invention discriminates each pixel data of the region of the optic disc and the region other than the optic disc for each stereo image, analyzes the pixel data of the region other than the optic disc of the stereo image, and analyzes the optic disc of the optic disc. Align the entire image including the area, analyze the pixel data of the optic nerve head area of the aligned stereo image, Corresponding points at which the same part of the optic nerve head is photographed are determined, and the computer is caused to execute a process of calculating depth information of the optic nerve head from position information in each image of the corresponding point.
発明の効果  The invention's effect
[0021] With the image analysis apparatus and image analysis program of the present invention, the three-dimensional shape of the optic nerve head can be analyzed accurately even when there are pixel-value changes caused by illumination conditions or the like between the stereo images.
[0021] 本発明の画像解析装置及び画像解析プログラムによって、ステレオ画像間に照明条 件などに起因する画素値の変化が存在する場合であっても、視神経乳頭の 3次元形 状を正確に解析することができる。 [0021] The image analysis apparatus and the image analysis program of the present invention accurately analyze the three-dimensional shape of the optic nerve head even when there is a change in pixel values due to illumination conditions between stereo images. can do.
発明を実施するための最良の形態  BEST MODE FOR CARRYING OUT THE INVENTION
[0022] 以下、本発明の画像解析装置の好ましい実施の形態の一例として、眼底を撮影した ステレオ画像から視神経乳頭の三次元形状を再現する画像解析装置につ ヽて、図 面を参照しつつ詳細に説明する。  Hereinafter, as an example of a preferred embodiment of the image analysis apparatus of the present invention, an image analysis apparatus that reproduces a three-dimensional shape of the optic nerve head from a stereo image obtained by photographing the fundus is referred to the drawings. This will be described in detail.
[0024] The general shape characteristics of the optic nerve head and the retina analyzed by the image analysis apparatus of this embodiment are as follows. The retina is the innermost membrane of the eyeball and is curved along the surface of the eyeball. The optic nerve head lies at the back of the eyeball, slightly away from the fovea, and consists of a central conical depression and the disc margin surrounding the depression; in other words, the apex of the conical depression is located on the farthest side. The size of the depression of the optic nerve head varies greatly between individuals and with the presence or absence of disease, but its diameter is generally a few percent of the diameter of the eyeball.
[0024] 本実施形態の画像解析装置によって解析される視神経乳頭と網膜について、その 形状の一般的な特徴を説明する。網膜は眼球の一番奥にある膜であり、眼球の表面 に沿って湾曲した形状となっている。視神経乳頭は眼球の奥の中心窩カ ゃや離れ た位置にあり、中央の円錐型の陥凹部と、陥凹部の周囲の乳頭辺縁部力もなる。す なわち、陥凹部の円錐の頂点は、最も奥側に位置している。視神経乳頭の陥凹部の 大きさは個人差が大きぐ又疾患の有無等によってもその大きさが変わるが、一般的 にその径は、眼球の径の数パーセントである。  [0024] General characteristics of the shape of the optic disc and retina analyzed by the image analysis apparatus of this embodiment will be described. The retina is the innermost film of the eyeball and has a curved shape along the surface of the eyeball. The optic nerve head is located in the center of the eyeball at a distance from the fovea. It also has a conical recess in the center and the nipple marginal force around the recess. In other words, the apex of the conical recess is located on the farthest side. The size of the depression of the optic nerve head varies greatly depending on the individual and the presence or absence of disease, but generally the diameter is a few percent of the diameter of the eyeball.
[0026] In this embodiment, the stereo images 2 and 4 of the fundus are taken so that the optic nerve head region 6 is located at approximately the center of each image. In the stereo images of Fig. 2, the disc margin 12 of the optic nerve head region 6 appears in bright orange, and the depressed portion 14 appears in an even brighter, paler orange than the disc margin 12. The retinal region 8, which appears in brown to dark brown tones in the stereo images of Fig. 2, is actually curved so as to be convex in the depth direction with respect to the image; however, because the radius of curvature is large and the photographed area is relatively narrow, the change in the depth direction within the photographed area is very small, and the region can be treated as a plane almost parallel to the image plane.
[0026] 眼底のステレオ画像 2, 4は、本実施形態では視神経乳頭領域 6が画像のほぼ中央 に配置されるように撮影される。図 2のステレオ画像の中で、視神経乳頭領域 6の乳 頭辺縁部 12は、明るいオレンジ色で撮影されており、陥凹部 14は乳頭辺縁部 12より も更に輝度の高 ヽ薄 、オレンジ色で撮影されて 、る。図 2のステレオ画像に褐色から 暗褐色の色調で撮影されて ヽる網膜領域 8は、実際には画像に対して奥行き方向に 凸となるように湾曲しているが、湾曲の曲率が大きい一方で撮影範囲が相対的に狭 いために、撮影範囲における奥行き方向の変化量は非常に少なぐ画像の面とほぼ 平行な平面として取り扱うことができる。  [0026] Stereo images 2 and 4 of the fundus are taken in this embodiment so that the optic disc region 6 is arranged at approximately the center of the image. In the stereo image of FIG. 2, the nipple edge 12 in the optic nerve head region 6 is photographed in a bright orange color, and the depressed portion 14 is brighter and lighter than the nipple edge 12 in orange. Taken in color. The retinal region 8 that is captured in the brown image to the dark brown color tone in the stereo image in Fig. 2 is actually curved to be convex in the depth direction with respect to the image, but the curvature of the curve is large. Since the shooting range is relatively narrow, the amount of change in the depth direction in the shooting range can be handled as a plane that is almost parallel to the image plane with very little.
[0027] このような 2枚の眼底のステレオ画像 2, 4から視神経乳頭領域 6の 3次元形状を解析 するために、画像解析装置が実施する解析のフローを図 3に示し、詳細な解析内容 の説明を行う。画像解析装置の画像入力手段は、まず最初にステップ s2において、 2枚のステレオ画像 2, 4の画素データを入力する。本実施形態におけるステレオ画 像の画素データは、画素毎に R (赤)、 G (緑)、 B (青)のデータを含んで 、る。  [0027] Fig. 3 shows the flow of analysis performed by the image analyzer to analyze the three-dimensional shape of the optic nerve head region 6 from the two stereo images 2, 4 of the fundus. Will be explained. The image input means of the image analyzer first inputs pixel data of two stereo images 2 and 4 in step s2. The pixel data of the stereo image in this embodiment includes R (red), G (green), and B (blue) data for each pixel.
[0028] 次に画像解析装置は、ステップ s4にお ヽて、領域識別手段を用いて、ステレオ画像 ごとに視神経乳頭領域 6の輪郭抽出を行う。ステレオ画像 2, 4において、視神経乳 頭領域 6の乳頭辺縁部 12は、網膜領域 8と比較すると、 R、 G、 B値に基づいて算出 される輝度値が高くなる。又、?見神経乳頭領域 6と網膜領域 8が隣りあう部分では輝 度値の変化量が大きぐその境界 (以下、エッジともいう)は明瞭である。そこで、輝度 値の変化が大き 、領域の画素毎の画素データを評価し、所定のしき 、値以上の輝 度を有する画素を選択して、この画素の閉じた列を連ねることにより、視神経乳頭領 域 6の輪郭を決定することができる。この輪郭の決定は、動的輪郭抽出法を適用する ことによって、精度良く行うことができる。  [0028] Next, in step s4, the image analysis device extracts an outline of the optic disc region 6 for each stereo image using the region identification means. In the stereo images 2 and 4, the luminance value calculated based on the R, G, and B values of the nipple edge region 12 of the optic nerve region 6 is higher than that of the retinal region 8. or,? In the part where the optic disc region 6 and the retinal region 8 are adjacent to each other, the boundary (hereinafter also referred to as an edge) where the change in luminance value is large is clear. Therefore, by evaluating the pixel data for each pixel in the region with a large change in luminance value, selecting a pixel having a predetermined threshold value or higher and connecting the closed columns of the pixels, the optic nerve head is connected. The contour of area 6 can be determined. The contour can be determined with high accuracy by applying the dynamic contour extraction method.
[0029] The way in which the region identification means extracts the contour of the optic nerve head region 6 by the active contour method is described in more detail below. First, as preprocessing, the plurality of blood vessels 10 captured in the stereo images 2 and 4 are temporarily erased from the images. This prevents the boundary between a blood vessel 10 and the retinal region 8, where the change in luminance (i.e., the edge strength) is high, from being mistakenly extracted as the boundary of the optic nerve head region 6. The processing exploits the fact that the pixel values of the blood vessels 10 have lower luminance than those of the surrounding retinal region 8: pixels whose values are darker than their surroundings are filled in with the surrounding pixel values by applying a morphology-based operation.
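As an illustrative sketch only (the embodiment does not specify an implementation), this vessel-erasure preprocessing can be realized with a grayscale morphological closing, assuming a single-channel fundus image and a structuring element wider than a typical vessel; the kernel size below is an assumed value.

```python
import cv2
import numpy as np

def erase_vessels(gray_fundus: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Fill dark vessel pixels with surrounding retinal values (assumed kernel size)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Grayscale closing (dilation followed by erosion) removes dark structures
    # narrower than the kernel and replaces them with the brighter surroundings.
    return cv2.morphologyEx(gray_fundus, cv2.MORPH_CLOSE, kernel)
```

A closing is one morphology-based operator with the stated effect; the embodiment may use a different one.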
[0030] The next processing performed by the region identification means, in order to extract the contour of the optic nerve head region 6 from the stereo images 2 and 4 in which the pixel values of the blood vessels 10 have been replaced with those of the retinal region 8, is to define an initial contour composed of a plurality of control points on the image, serving as a rough initial estimate of the contour of the optic nerve head region 6. The control points forming the initial contour can be determined by the following processing: a histogram of the pixel values of the stereo images 2 and 4 is created; thresholding is applied to the stereo images 2 and 4 with a predetermined luminance value as the threshold; the center point of the region corresponding to the optic nerve head region 6 among the binarized regions is obtained; and the control points are placed at equal intervals on a circle at a predetermined distance from that center point.
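A minimal sketch of this initial-contour construction is given below; the threshold, circle radius, and number of control points are assumed values, not taken from the embodiment.

```python
import numpy as np

def initial_contour(gray: np.ndarray, threshold: int = 200,
                    radius: float = 120.0, n_points: int = 32) -> np.ndarray:
    """Place control points on a circle around the bright (disc) region."""
    mask = gray >= threshold                          # binarize by luminance
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                     # center of the binarized disc region
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    # control points at equal intervals on a circle of the given radius
    return np.stack([cy + radius * np.sin(angles),
                     cx + radius * np.cos(angles)], axis=1)
```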
[0031] Once the initial contour of the optic nerve head region 6 has been defined by the initial control points, the region identification means applies a conditional expression whose terms are the edge strength around each control point, the distances between control points, and the slopes between control points, each term being weighted in consideration of the edge-strength characteristics of the stereo image, and computes the optimal positions of the control points that minimize the value of this expression. The control points converge as they move during the iterative computation, and the final arrangement of the control points becomes the contour extraction result. FIG. 4 shows a stereo fundus image in which the contour of the optic nerve head region 6 determined in this way has been applied, with the optic nerve head region 6 shown in white and the retinal region 8 and blood vessels 10 outside the optic nerve head region 6 shown in black.
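As one possible realization of this energy minimization (a sketch under assumed weights, not the embodiment's own weighted expression), scikit-image's active contour routine can refine the initial control points on the vessel-erased image:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_contour(vessel_free_gray: np.ndarray, init_snake: np.ndarray) -> np.ndarray:
    """Iteratively move the control points toward strong edges (assumed weights)."""
    smoothed = gaussian(vessel_free_gray, sigma=3.0, preserve_range=True)
    # alpha penalizes stretching, beta penalizes bending, w_edge attracts the
    # contour to edges; these stand in for the weighted distance, slope, and
    # edge-strength terms described above.
    return active_contour(smoothed, init_snake,
                          alpha=0.015, beta=10.0, w_line=0.0, w_edge=1.0)
```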
[0032] Next, in step s6, the image analysis apparatus uses the image correction means to analyze the pixel data of the retinal region 8 and the blood vessels 10 in the stereo images 2 and 4, and to align the stereo images 2 and 4 as a whole. As is clear from comparing the left image 2 and the right image 4 in FIG. 2, the position and shape of the optic nerve head region 6 differ considerably between the images of the stereo pair, whereas the retinal region 8 and the blood vessels 10 on the retina are captured at almost the same positions. By analyzing the pixel data of the retinal region 8 and the blood vessels 10, the image correction means of the image analysis apparatus can carry out the alignment correction with very high accuracy.
[0033] The image correction means superimposes the right image 4 on the left image 2 while translating it vertically, one pixel at a time, over a range of a dozen or so pixels, and computes a cross-correlation feature of the pixel values between the two images within the retinal region 8 and the blood vessels 10. Similarly, it translates the image horizontally, one pixel at a time, over a range of a dozen or so pixels, and computes the cross-correlation feature of the pixel values between the two images within the retinal region 8 and the blood vessels 10. Here, the cross-correlation feature is obtained by computing a cross-correlation function using the RGB values, color information such as luminance, saturation, or lightness calculated from the RGB values, or the edge strength. When the cross-correlation feature exceeds a predetermined reference value, the alignment of the two stereo images can be regarded as achieved. The image correction means then applies the translation using, as correction values, the horizontal and vertical displacements at which the alignment correction was achieved.
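A sketch of this exhaustive translation search follows; for brevity it evaluates vertical and horizontal shifts jointly rather than in the separate passes described above, and uses a normalized cross-correlation over the retina/vessel pixels indicated by a boolean mask (the ±15-pixel range is an assumed value).

```python
import numpy as np

def align_by_translation(left: np.ndarray, right: np.ndarray,
                         mask: np.ndarray, search: int = 15):
    """Return (best correlation, vertical shift, horizontal shift)."""
    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    best = (-2.0, 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(right, dy, axis=0), dx, axis=1)
            score = ncc(left[mask], shifted[mask])   # compare only retina/vessel pixels
            if score > best[0]:
                best = (score, dy, dx)
    return best
```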
[0034] After evaluating the cross-correlation feature obtained by translating the image, if the image correction means judges that a better alignment is needed, it can also enlarge or reduce the right image 4 of the stereo pair horizontally by a predetermined factor, translate it, and then compute the cross-correlation feature with the left image 2. This is because a higher cross-correlation value can sometimes be obtained by enlarging or reducing one of the images in the horizontal direction. The enlargement/reduction factor for the right image 4 can be guided by the size of the optic nerve head region 6 identified by the region identification means. When enlarging the right image 4 horizontally, the image correction means inserts new columns of pixels at predetermined intervals between the horizontal columns of pixels constituting the image, and defines the values of the inserted columns by linear interpolation of the adjacent existing pixel values. When reducing the right image 4, columns of pixels are deleted at predetermined intervals.
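The column insertion and deletion described above amounts to a horizontal-only resampling; the sketch below uses per-row linear interpolation, which is equivalent for small scale factors (the factor itself is supplied by the caller).

```python
import numpy as np

def rescale_horizontally(img: np.ndarray, factor: float) -> np.ndarray:
    """Stretch or shrink a grayscale image along x with linear interpolation."""
    h, w = img.shape[:2]
    new_w = max(1, int(round(w * factor)))
    x_old = np.arange(w, dtype=np.float64)
    x_new = np.linspace(0.0, w - 1.0, new_w)
    out = np.empty((h, new_w), dtype=np.float64)
    for y in range(h):
        out[y] = np.interp(x_new, x_old, img[y].astype(np.float64))
    return out
```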
[0035] In this way, the image correction means of the present embodiment can compute the cross-correlation feature of pixel values between the two images using the pixel values of the retinal region 8 and the blood vessels 10 in the stereo images 2 and 4, and perform the image alignment. Because the retinal region 8 and the blood vessels 10 occupy roughly one half to two thirds of the stereo images 2 and 4, the analysis proceeds far more efficiently than in the conventional case in which the entire image is analyzed to perform the correction. FIG. 5 shows the right image 16 after the alignment correction has been performed.
[0036] The cross-correlation feature computed by the cross-correlation function used in this embodiment does not change even when the luminance is linearly transformed between the two stereo images 2 and 4. Consequently, the alignment correction is possible even for images whose contrast differs, for example because of differences in illumination conditions.
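The embodiment does not give the formula, but one cross-correlation feature with the stated invariance is the normalized cross-correlation, written here for illustration:

```latex
r \;=\; \frac{\sum_{i}\bigl(I_L(i)-\bar{I}_L\bigr)\bigl(I_R(i)-\bar{I}_R\bigr)}
             {\sqrt{\sum_{i}\bigl(I_L(i)-\bar{I}_L\bigr)^{2}}\;
              \sqrt{\sum_{i}\bigl(I_R(i)-\bar{I}_R\bigr)^{2}}}
```

Replacing $I_R$ by $aI_R + b$ with $a > 0$ leaves $r$ unchanged, which is why a linear luminance change between the two images does not affect the score.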
[0037] Next, in step s8, the image analysis apparatus uses the corresponding point determination means to analyze the pixel data of the optic nerve head region 6 in each of the aligned stereo images 2 and 16, and to extract and determine corresponding points, that is, points imaging the same location within the optic nerve head region 6. The corresponding point determination means sets reference points 18 over the entire optic nerve head region 6 of the left image 2 and searches the right image for the point corresponding to each reference point 18. FIG. 6(a) shows an example of the reference points set in the optic nerve head region 6.
[0038] To search for the point corresponding to a reference point 18, the corresponding point determination means sets a rectangular region several tens of pixels on a side, centered on the reference point 18 in the left image 2, as a region of interest 19. Next, as shown in FIG. 6(b), a corresponding region 20 of the same size is set within the optic nerve head region 6 of the right image 16, in the vicinity of the position of the region of interest 19 in the left image 2. The similarity between the pixel-value pattern of the corresponding region 20 and that of the region of interest 19 is then evaluated. If the pixel-value pattern of the initially set corresponding region 20 is not sufficiently similar to that of the region of interest 19, the corresponding region 20 is moved on the image 16 and the similarity is evaluated again. A corresponding region 20 whose pixel-value pattern is sufficiently similar to that of the region of interest 19 is thus searched for, and the center of that corresponding region 20 is determined as the corresponding point 21. The corresponding point determination means can evaluate the similarity by computing a cross-correlation feature between the pixel values of the region of interest 19 and those of the corresponding region 20 using a least-squares matching method, so that valid matching is possible even when there is a linear change in color tone between the stereo images.
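A sketch of this corresponding-point search follows; it substitutes OpenCV's normalized correlation coefficient for the least-squares matching named above, the window sizes are assumed values, and the inputs are assumed to be uint8 or float32 grayscale images.

```python
import cv2
import numpy as np

def find_corresponding_point(left: np.ndarray, right: np.ndarray,
                             ref_y: int, ref_x: int,
                             half: int = 16, search: int = 24):
    """Match the region of interest around (ref_y, ref_x) against the right image."""
    roi = left[ref_y - half:ref_y + half + 1, ref_x - half:ref_x + half + 1]
    win = right[ref_y - half - search:ref_y + half + search + 1,
                ref_x - half - search:ref_x + half + search + 1]
    scores = cv2.matchTemplate(win, roi, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    # convert the best match back to right-image coordinates (center of the region)
    corr_y = ref_y - search + max_loc[1]
    corr_x = ref_x - search + max_loc[0]
    return (corr_y, corr_x), max_val
```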
[0039] In step s10, the corresponding point determination means moves the corresponding region 20 over the image 16 and evaluates the cross-correlation feature of the pixel values of the region of interest 19 and the corresponding region 20. If, even after such movement, no corresponding region 20 with a sufficiently large cross-correlation feature is obtained, the processing proceeds to step s12, in which the region of the optic nerve head 6 in the right image 16 is corrected further. The corresponding point determination means enlarges or reduces the optic nerve head 6 of the right image 16 in the horizontal direction and then searches again for a corresponding region 20 whose pixel-data pattern is similar to that of the region of interest 19. Lateral distortion can occur in the image of the optic nerve head region 6; to cope with such distortion, the corresponding point determination means can determine the corresponding points with higher accuracy by enlarging or reducing the optic nerve head region 6 of the right image 16 horizontally. When the optic nerve head region 6 is enlarged horizontally, new columns of pixels are inserted at predetermined intervals between the horizontal columns of pixels constituting the image, and the values of the inserted columns are defined by linear interpolation of the adjacent existing pixel values. When the optic nerve head region 6 is reduced, columns of pixels are deleted at predetermined intervals.
[0040] In this way, the corresponding point determination means corrects the optic nerve head region 6 of the right image 16 as needed, searches the right image 16 for a corresponding region 20 whose cross-correlation feature with the region of interest 19 around the reference point 18 set in the left image 2 is sufficiently high, and determines the corresponding point 21 (step s14). The corresponding point determination means determines and stores pairs of a reference point 18 in the left image 2 and a corresponding point 21 in the right image 16 over the entire optic nerve head region 6.
[0041] Next, in step s16, the image analysis apparatus uses the depth information determination means to calculate depth information of the optic nerve head from the position information, in each image, of the stored pairs of reference points and corresponding points. For the calculation of depth information, a measurement method applying the principle of triangulation is most commonly used.
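For the parallel-axis geometry of FIG. 1, the textbook triangulation relation (stated here for illustration; the embodiment does not spell it out) gives the depth $Z$ of a point from its horizontal disparity $d = x_L - x_R$ between a reference point and its corresponding point, the baseline $B$ between the two imaging positions, and the focal length $f$:

```latex
Z \;=\; \frac{f\,B}{d}
```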
[0042] FIG. 7 shows an example of the output of a three-dimensional image of the optic nerve head region 6 constructed by the image analysis apparatus on the basis of the depth information calculated by the depth information determination means. In the three-dimensional image of FIG. 7, the left image 2 is applied to the image of the retinal region 8 and the blood vessels 10 surrounding the optic nerve head region 6. Because no corresponding-point extraction or depth calculation is performed for the retinal region 8 and the blood vessels 10, the fundus image including the accurate shape of the optic nerve head region 6 is output very quickly. Moreover, in the display of the retinal region 8, conventional methods sometimes displayed a strongly uneven structure, owing to inaccurate correspondence, even where the surface is in fact a smooth plane without unevenness; by applying one of the images instead, an efficient and smooth display becomes possible.
[0043] As described above, according to the image analysis apparatus of this embodiment, analyzing the pixel data of the stereo images 2 and 4 of the fundus yields three-dimensional information that reproduces the three-dimensional shape of the optic nerve head region 6 accurately, and moreover efficiently and quickly. In addition, the image analysis apparatus of this embodiment can analyze the three-dimensional shape of the optic nerve head region 6 accurately even when the stereo images 2 and 4 to be analyzed contain pixel-value variations caused by illumination conditions and the like.
[0044] Furthermore, when aligning the stereo images, a sufficient level of the cross-correlation feature can also be obtained by rotating or deforming the entire image in addition to translating and scaling it. It is also possible to obtain the information representing the three-dimensional shape more accurately and reliably by using three or more stereo images.
[0045] Although one embodiment of the present invention has been described in detail above, this is merely an example and does not limit the scope of the claims. The technology set forth in the claims includes various modifications and changes of the specific examples illustrated above. For example, the image analysis apparatus in this embodiment can be replaced by an image analysis program that causes a computer to execute the same steps. Each means of the image analysis apparatus of this embodiment can also be configured as a modularized external device connected to a computer. Furthermore, the geometric image-analysis methods employed by the region identification means, the image correction means, the corresponding point determination means, and the depth information determination means of the image analysis apparatus can be selected and changed as appropriate in accordance with the content of the images, the characteristics of the measurement object, and so on.
Brief Description of Drawings
[0046] [FIG. 1] A diagram showing the basic imaging positions when a stereo image of the fundus of a subject is captured. [FIG. 2] A diagram showing an example of the stereo images 2 and 4 to be analyzed.
[FIG. 3] A flowchart showing the contents of the image processing performed by the image analysis apparatus of one embodiment of the present invention.
[FIG. 4] A stereo fundus image in which, as a result of the region identification processing, the optic nerve head region 6 is shown in white and the retinal region 8 and blood vessels 10 outside the optic nerve head region 6 are shown in black.
[FIG. 5] A diagram showing the right image 16 after the alignment correction has been performed. [FIG. 6] A diagram showing the left image 2 in which the region of interest 19 is placed and the right image 4 in which the corresponding region 20 is placed.
[FIG. 7] A diagram showing an example of the output of a three-dimensional image of the optic nerve head region 6 constructed by the image analysis apparatus.
Explanation of Reference Signs
2: left image; 4, 16: right image; 6: optic nerve head region; 8: retinal region; 10: blood vessel; 12: disc rim; 14: cup (depressed portion); 18: reference point; 19: region of interest; 20: corresponding region; 21: corresponding point; 22, 24: imaging positions; 26, 28: fundus images

Claims

The scope of the claims
[1] An image analysis apparatus for analyzing the three-dimensional shape of an optic nerve head from stereo images of a subject's fundus, comprising: image input means for inputting pixel data of the stereo images; region identification means for identifying, in each stereo image, the region of the optic nerve head and the region other than the optic nerve head; image correction means for analyzing the pixel data of the region other than the optic nerve head in the stereo images and aligning the entire images including the region of the optic nerve head; corresponding point determination means for analyzing the pixel data of the region of the optic nerve head in the stereo images aligned by the image correction means and determining corresponding points that image the same location within the optic nerve head; and depth information determination means for calculating depth information of the optic nerve head from the position information, in each image, of the corresponding points determined by the corresponding point determination means.
[2] The image analysis apparatus according to claim 1, wherein the image correction means corrects the image position by evaluating the sum of the differences of pixel values at the same positions between the stereo images and obtaining the vertical displacement, the horizontal displacement, and the horizontal enlargement/reduction factor between the images.
[3] The image analysis apparatus according to claim 1 or 2, wherein the corresponding point determination means sets a reference point in the region of the optic nerve head of a reference image, sets a region of interest in a specific range around the reference point, searches the region of the optic nerve head of another image for a region having the array of pixel data most similar to the pixel data of the region of interest, and, when no array of pixel data sufficiently similar to the region of interest is obtained in the region of the optic nerve head of the other image, enlarges or reduces the region of the optic nerve head of the other image in the horizontal direction and searches for a region having the array of pixel data most similar to the pixel data of the region of interest, and determines corresponding points between the reference image and the other image on the basis of the search result.
[4] An image analysis program for analyzing the three-dimensional shape of an optic nerve head from pixel data of stereo images of a subject's fundus, the program causing a computer to execute processing of: identifying, for each stereo image, the pixel data of the region of the optic nerve head and of the region other than the optic nerve head; analyzing the pixel data of the region other than the optic nerve head in the stereo images and aligning the entire images including the region of the optic nerve head; analyzing the pixel data of the region of the optic nerve head in the aligned stereo images and determining corresponding points that image the same location within the optic nerve head; and calculating depth information of the optic nerve head from the position information of the corresponding points in each image.
PCT/JP2006/318912 2006-03-24 2006-09-25 Image analyzer and program for stereo eye fundus image WO2007110982A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006082323A JP5278984B2 (en) 2006-03-24 2006-03-24 Image analysis apparatus and image analysis program
JP2006-082323 2006-03-24

Publications (1)

Publication Number Publication Date
WO2007110982A1 true WO2007110982A1 (en) 2007-10-04

Family

ID=38540921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/318912 WO2007110982A1 (en) 2006-03-24 2006-09-25 Image analyzer and program for stereo eye fundus image

Country Status (2)

Country Link
JP (1) JP5278984B2 (en)
WO (1) WO2007110982A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007252707A (en) * 2006-03-24 2007-10-04 Gifu Univ Image analysis apparatus and program
WO2008111550A1 (en) * 2007-03-13 2008-09-18 Gifu University Image analysis system and image analysis program
WO2022175736A1 (en) * 2021-02-22 2022-08-25 Alcon Inc. Tracking of retinal traction through digital image correlation

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5054579B2 (en) * 2008-03-07 2012-10-24 興和株式会社 Image processing method and image processing apparatus
JP5478103B2 (en) * 2009-04-15 2014-04-23 興和株式会社 Image processing method
JP5355220B2 (en) 2009-05-22 2013-11-27 キヤノン株式会社 Fundus photographing device
JP6711617B2 (en) * 2015-12-28 2020-06-17 キヤノン株式会社 Image forming apparatus, image forming method and program thereof
JP7078948B2 (en) 2017-06-27 2022-06-01 株式会社トプコン Ophthalmic information processing system, ophthalmic information processing method, program, and recording medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04166708A (en) * 1990-10-31 1992-06-12 Canon Inc Image processing apparatus
JPH04276232A (en) * 1991-03-04 1992-10-01 Canon Inc Image processing method and system therefor

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3798161B2 (en) * 1998-10-29 2006-07-19 株式会社ニデック Fundus measurement device and recording medium recording fundus measurement program
JP2000245700A (en) * 1999-03-01 2000-09-12 Nidek Co Ltd Instrument for measuring eyeground and recording medium for recording measurement program
JP4896311B2 (en) * 2001-07-18 2012-03-14 興和株式会社 Fundus image acquisition and display device
JP4699064B2 (en) * 2005-03-29 2011-06-08 株式会社ニデック Stereoscopic fundus image processing method and processing apparatus
JP5278984B2 (en) * 2006-03-24 2013-09-04 興和株式会社 Image analysis apparatus and image analysis program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04166708A (en) * 1990-10-31 1992-06-12 Canon Inc Image processing apparatus
JPH04276232A (en) * 1991-03-04 1992-10-01 Canon Inc Image processing method and system therefor

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007252707A (en) * 2006-03-24 2007-10-04 Gifu Univ Image analysis apparatus and program
WO2008111550A1 (en) * 2007-03-13 2008-09-18 Gifu University Image analysis system and image analysis program
JP2008220617A (en) * 2007-03-13 2008-09-25 Gifu Univ Image analysis system and program
US8265398B2 (en) 2007-03-13 2012-09-11 Kowa Company, Ltd. Image analysis system and image analysis program
WO2022175736A1 (en) * 2021-02-22 2022-08-25 Alcon Inc. Tracking of retinal traction through digital image correlation
US11950969B2 (en) 2021-02-22 2024-04-09 Alcon Inc. Tracking of retinal traction through digital image correlation

Also Published As

Publication number Publication date
JP2007252707A (en) 2007-10-04
JP5278984B2 (en) 2013-09-04

Similar Documents

Publication Publication Date Title
US11775056B2 (en) System and method using machine learning for iris tracking, measurement, and simulation
US6714672B1 (en) Automated stereo fundus evaluation
JP5278984B2 (en) Image analysis apparatus and image analysis program
KR0158038B1 (en) Apparatus for identifying person
US8194936B2 (en) Optimal registration of multiple deformed images using a physical model of the imaging distortion
EP2888718B1 (en) Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
JP4307496B2 (en) Facial part detection device and program
US8855386B2 (en) Registration method for multispectral retinal images
Crihalmeanu et al. Enhancement and registration schemes for matching conjunctival vasculature
CN102670168A (en) Ophthalmologic apparatus and control method of same
EP2188779A1 (en) Extraction method of tongue region using graph-based approach and geometric properties
US20080123906A1 (en) Image Processing Apparatus And Method, Image Sensing Apparatus, And Program
JP4915737B2 (en) Image analysis system and image analysis program
US20200401841A1 (en) Apparatus for diagnosing glaucoma
JP5583980B2 (en) Image processing method and image processing apparatus
KR102250688B1 (en) Method and device for automatic vessel extraction of fundus photography using registration of fluorescein angiography
CN109658393A (en) Eye fundus image joining method and system
CN112164043A (en) Method and system for splicing multiple fundus images
US10573007B2 (en) Image processing apparatus, image processing method, and image processing program
CN109993090B (en) Iris center positioning method based on cascade regression forest and image gray scale features
CN111340052A (en) Tongue tip red detection device and method for tongue diagnosis in traditional Chinese medicine and computer storage medium
KR20100002799A (en) Method for making three-dimentional model using color correction
JP5636550B2 (en) Lens image analyzer
CN109447948B (en) Optic disk segmentation method based on focus color retina fundus image
CN114081437A (en) Method for measuring iris rotation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06810480

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06810480

Country of ref document: EP

Kind code of ref document: A1