WO2015182632A1 - Image processing device, image processing method, and image processing program - Google Patents

Image processing device, image processing method, and image processing program

Info

Publication number
WO2015182632A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sampling
sampling interval
scanning
images
Prior art date
Application number
PCT/JP2015/065171
Other languages
English (en)
Japanese (ja)
Inventor
中川 俊明
Original Assignee
興和株式会社
Priority date
Filing date
Publication date
Application filed by 興和株式会社 filed Critical 興和株式会社
Priority to JP2016523521A (granted as JP6461936B2)
Publication of WO2015182632A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • The present invention relates to an image processing apparatus, an image processing method, and an image processing program for processing a tomographic image captured by a tomographic imaging apparatus or the like to generate an image suitable for use as a diagnostic image.
  • A tomographic imaging apparatus that uses optical interference, called OCT (Optical Coherence Tomography), is known for capturing a tomographic image of the fundus.
  • With the left-right direction of the fundus taken as the x direction, the vertical direction as the y direction, and the depth as the z direction, a tomographic image (B-scan image) in the xz plane can be acquired.
  • In general OCT imaging, for example, tomographic images are captured at a speed of 40 images/second, and a group of 100 or more retinal tomographic images can be acquired in one examination (imaging of a part of the retina).
  • Patent Document 1 discloses a technique for generating a tomographic image with less noise by adding and averaging entire captured two-dimensional tomographic images.
  • A tomographic image may also be acquired using a plurality of scan patterns in one imaging session.
  • Patent Document 2 discloses a technique for scanning a site of interest on the fundus, such as the optic disc, the macular region, a lesion site, or a treatment site, at a narrower interval than other sites.
  • Patent Document 3 and Patent Document 4 disclose techniques for performing an auxiliary scan in addition to a base scan in order to counter fixational eye movements.
  • Patent Documents: JP 2008-237238 A; Japanese Patent No. 4971864; JP-T-2013-525035; JP-T-2011-516235
  • Although the techniques of Patent Documents 1 to 4 are effective in reducing noise, in reducing misalignment caused by fixational eye movement, and in reducing image distortion, none of them increases the resolution of the interpretation image itself, so information on the fundus cannot be obtained with higher accuracy. A higher-resolution interpretation image might be obtained by improving hardware such as the tomographic imaging apparatus, but it would be technically significant if a higher-resolution image could be obtained by image processing alone, without hardware improvements.
  • An object of the present invention is to provide an image processing apparatus, an image processing method, and an image processing program capable of generating a high-resolution image exceeding hardware limitations.
  • To this end, the present invention provides an image processing apparatus for processing images obtained, in a tomographic imaging apparatus that captures a tomographic image of the fundus of an eye to be examined, by scanning substantially the same position of an object in the same direction at a predetermined sampling interval.
  • In this apparatus, an image obtained by scanning at a first sampling interval in the same direction is set as a first image; an image generation means generates, by image processing from the first image, a second image having a second sampling interval narrower than the first sampling interval; an image that is different from the first image and is obtained by scanning at the first sampling interval in the same direction but at different sampling positions is set as a third image; and an addition means searches, for each sampling image of the third image, the second image for an image corresponding to that sampling image and adds the sampling image to the searched image (Invention 1).
  • According to the above invention, based on the first image obtained by actually scanning the object at the first sampling interval, image processing can generate a second image as if it had been obtained by scanning at a second sampling interval narrower than the first, and a third image obtained by scanning at different sampling positions at the first sampling interval can then be added to it. That is, since the second image is generated as if it were an image having twice as many pixels as the first image, using the second image as the reference image in the subsequent addition processing allows sampling images acquired at shifted positions to be included in the addition target, so that the interpretation image obtained after the addition can have a higher resolution than an image obtained by actually scanning the object.
  • The first image may be one image selected from a plurality of images obtained by scanning in the same direction at the first sampling interval (Invention 2), or may be an image obtained by averaging a predetermined number of images among a plurality of images obtained by scanning in the same direction at the first sampling interval (Invention 3).
  • Preferably, the second sampling interval is one half of the first sampling interval (Invention 4).
  • Preferably, the first sampling interval is set so that the sampling image is an A-scan image (Invention 5).
  • The present invention also provides an image processing method for processing images obtained, in a tomographic imaging apparatus that captures a tomographic image of the fundus of an eye to be examined, by scanning substantially the same position of an object in the same direction at a predetermined sampling interval.
  • In this method, an image obtained by scanning at a first sampling interval in the same direction is set as a first image; an image generation step generates, by image processing from the first image, a second image whose sampling interval is narrower than the first sampling interval; an image that is different from the first image and is obtained by scanning at the first sampling interval in the same direction but at different sampling positions is set as a third image; and an adding step searches, for each sampling image of the third image, the second image for an image corresponding to that sampling image and adds the sampling image to the searched image (Invention 6).
  • According to the above invention, based on the first image obtained by actually scanning the object at the first sampling interval, image processing can generate a second image as if it had been obtained by scanning at a second sampling interval narrower than the first, and a third image obtained by scanning at different sampling positions at the first sampling interval can then be added to it. That is, since the second image is generated as if it were an image having twice as many pixels as the first image, using the second image as the reference image in the subsequent addition processing allows sampling images acquired at shifted positions to be included in the addition target, so that the interpretation image obtained after the addition can have a higher resolution than an image obtained by actually scanning the object.
  • The first image may be one image selected from a plurality of images obtained by scanning in the same direction at the first sampling interval (Invention 7), or may be an image obtained by averaging a predetermined number of images among a plurality of images obtained by scanning in the same direction at the first sampling interval (Invention 8).
  • Preferably, the second sampling interval is one half of the first sampling interval (Invention 9).
  • Preferably, the first sampling interval is set so that the sampling image is an A-scan image (Invention 10).
  • Furthermore, the present invention provides an image processing program that causes a computer to execute the image processing method according to the above inventions (Inventions 6 to 10) (Invention 11).
  • According to the image processing apparatus, the image processing method, and the image processing program of the present invention, a high-resolution image exceeding hardware limitations can be generated from a plurality of images obtained by scanning substantially the same position in the same direction, without improving the hardware.
  • FIG. 1 is a configuration diagram illustrating an image processing system according to an embodiment of the present invention.
  • FIG. 2 is an optical diagram showing a detailed configuration of a tomography unit according to the embodiment.
  • FIG. 3 is a flowchart showing the flow of image processing in the embodiment. FIG. 4 is an explanatory diagram showing how the fundus is scanned with the signal light in the embodiment. FIG. 5 is an explanatory diagram showing how a plurality of tomographic images are acquired in the same embodiment. FIG. 6 is an explanatory diagram showing the reference …
  • FIG. 1 is a configuration diagram showing the entire image processing system according to an embodiment of the present invention, that is, a system for acquiring and processing a tomographic image of the fundus of the eye to be examined.
  • The system includes a fundus photographing unit 1 for observing and photographing the fundus (retina) Ef of the eye E to be examined; the fundus photographing unit 1 has an illumination optical system 4, a photographing optical system 5, and an imaging device 100 constituted by a two-dimensional CCD or CMOS.
  • the illumination optical system 4 includes an observation light source such as a halogen lamp and a photographing light source such as a xenon lamp, and light from these light sources is guided to the fundus oculi Ef via the illumination optical system 4 to illuminate the fundus oculi Ef.
  • the photographing optical system 5 includes an optical system such as an objective lens, a photographing lens, and a focusing lens. The photographing optical system 5 guides photographing light reflected by the fundus oculi Ef to the imaging device 100 along the photographing optical path, and photographs an image of the fundus oculi Ef.
  • the scanning unit 6 guides the signal light reflected by the fundus oculi Ef described later to the tomography unit 2.
  • The scanning unit 6 is a mechanism equipped with a known galvanometer mirror 11 and a focusing optical system 12 for scanning the light from the low-coherence light source 20 of the tomography unit 2 in the x direction (horizontal direction) and the y direction (vertical direction) in FIG. 1.
  • the scanning unit 6 is optically connected to the tomographic imaging unit 2 that captures a tomographic image of the fundus oculi Ef via the connector 7 and the connection line 8.
  • The tomographic imaging unit 2 is a known unit that operates, for example, by the Fourier-domain method (spectral-domain method); its detailed configuration is shown in FIG. 2. It includes a low-coherence light source 20 that emits light having a wavelength of 700 nm to 1100 nm and a temporal coherence length of about several μm to several tens of μm.
  • the low coherence light LO generated by the low coherence light source 20 is guided to the optical coupler 22 by the optical fiber 22a, and is divided into the reference light LR and the signal light LS.
  • the reference light LR passes through the optical fiber 22b, the collimator lens 23, the glass block 24, and the density filter 25, and reaches the reference mirror 26 that can move in the optical axis direction for adjusting the optical path length.
  • The glass block 24 and the density filter 25 function as delay means for matching the optical path lengths (optical distances) of the reference light LR and the signal light LS, and as means for matching their dispersion characteristics.
  • The signal light LS is guided by the optical fiber 22c inserted through the connection line 8 and reaches the fundus oculi Ef via the scanning unit 6 of FIG. 1, scanning the fundus in the horizontal direction (x direction) and the vertical direction (y direction).
  • the signal light LS that has reached the fundus oculi Ef is reflected by the fundus oculi Ef and returns to the optical coupler 22 by following the above path in reverse.
  • the reference light LR reflected by the reference mirror 26 and the signal light LS reflected by the fundus oculi Ef are superimposed by the optical coupler 22 to become interference light LC.
  • the interference light LC is guided to the OCT signal detection device 21 by the optical fiber 22d.
  • the interference light LC is converted into a parallel light beam by the collimator lens 21a in the OCT signal detection device 21, and then is incident on the diffraction grating 21b and dispersed, and is imaged on the CCD 21d by the imaging lens 21c.
  • the OCT signal detection device 21 generates an OCT signal indicating information in the depth direction (z direction) of the fundus oculi based on the dispersed interference light.
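  • The Fourier-domain reconstruction described above, in which depth information is recovered from the dispersed interference light, can be sketched as follows: the A-scan (depth profile) is obtained by Fourier-transforming the interference spectrum detected on the CCD 21d. This is a minimal illustrative sketch, not the patent's implementation; the function name and the optional background subtraction are assumptions.

```python
import numpy as np

def a_scan_from_spectrum(spectrum, dc=None):
    """Reconstruct one A-scan (depth profile) from a dispersed
    interference spectrum, as in Fourier-domain (spectral-domain) OCT.

    `spectrum` is the interference intensity sampled over wavenumber k
    (one CCD line); `dc` is an optional reference/background spectrum
    subtracted before transforming (assumed preprocessing step)."""
    s = np.asarray(spectrum, dtype=float)
    if dc is not None:
        s = s - dc                             # remove the DC / reference term
    depth_profile = np.abs(np.fft.ifft(s))     # Fourier transform over k -> z
    return depth_profile[: s.size // 2]        # keep the non-mirrored half

# A B-scan is then a stack of A-scans acquired while the galvanometer
# sweeps the beam in x, e.g.:
# b_scan = np.column_stack([a_scan_from_spectrum(line) for line in ccd_lines])
```

A spectral fringe oscillating at frequency f over the CCD line maps to a reflector at depth bin f in the returned profile.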
  • an image processing apparatus 3 configured by a personal computer or the like connected to the tomography unit 2 is provided.
  • the image processing apparatus 3 is provided with a control unit 30 including a CPU, a RAM, a ROM, and the like, and the control unit 30 controls the entire image processing by executing an image processing program.
  • The display unit 31 is configured by a display device such as an LCD and displays images such as tomographic images and front images generated or processed by the image processing device 3, together with accompanying information such as information on the subject.
  • The input unit 32 accepts input operations on the image displayed on the display unit 31 via input means such as a mouse, a keyboard, or an input pen. The operator can also give instructions to the image processing apparatus 3 through the input unit 32.
  • the tomographic image forming unit 41 is provided in the image processing apparatus 3.
  • The tomographic image forming unit 41 is realized by a dedicated electronic circuit that executes a known analysis method such as the Fourier-domain method (spectral-domain method), or by the image processing program executed by the above-described CPU, and forms a tomographic image of the fundus oculi Ef based on the detected OCT signal.
  • the tomographic image formed by the tomographic image forming unit 41 is stored in a storage unit 42 configured by, for example, a semiconductor memory or a hard disk device.
  • the storage unit 42 further stores the above-described image processing program and the like.
  • the image processing apparatus 3 is provided with an image processing unit 50, and the image processing unit 50 includes an image generation unit 51 and an addition unit 52.
  • The image generation means 51 selects or generates a reference image (first image) from the tomographic images formed by the tomographic image forming unit 41, and combines a virtual image generated based on the reference image with the reference image to generate a composite image (second image).
  • the adding unit 52 adds a tomographic image (third image) different from the reference image to the synthesized image to generate an image for interpretation.
  • This image processing is performed by the control unit 30 reading and executing the image processing program stored in the storage unit 42.
  • the eye E and the fundus imaging unit 1 are aligned, and the fundus Ef is focused.
  • the low-coherence light source 20 is turned on, the signal light from the tomography unit 2 is swept in the x and y directions by the scanning unit 6, and the fundus oculi Ef is scanned.
  • This state is illustrated in FIG. 4: the region R where the macular portion of the retina is present is scanned by n scanning lines y1, y2, ..., yn in a direction parallel to the x axis.
  • The signal light LS reflected by the fundus oculi Ef is superimposed on the reference light LR reflected by the reference mirror 26 in the tomography unit 2, generating interference light LC, and an OCT signal is output from the OCT signal detector 21.
  • The tomographic image forming unit 41 forms a tomographic image of the fundus oculi Ef based on this OCT signal (step S2), and the formed tomographic image is stored in the storage unit 42.
  • In step S3, the reference image B1 is selected or created: any one of the 100 tomographic images Ti may be selected, or an image obtained by averaging a plurality of images arbitrarily selected from the 100 tomographic images Ti may be used. In the present embodiment, the tomographic image formed first among the 100 tomographic images Ti, that is, the tomographic image T1, is selected as the reference image B1 and stored in the storage unit 42.
  • The selected reference image B1 is shown in FIG. …
  • The reference image B1 (tomographic image T1) in the present embodiment is composed of 10 lines of sampling images, each sampling image being a region whose length is the defined number of pixels in the z direction. As shown in FIG. 6A, each of these regions is a line extending in the z direction (its width corresponds to one pixel), called an A-scan image. That is, in this embodiment, the sampling interval is set so that each sampling image is an A-scan image.
  • In step S4, a virtual image B1′ is generated based on the reference image B1 stored in the storage unit 42.
  • The virtual image B1′ is generated as if it were an image formed with the retinal layer L (fundus tissue), which is the object of the tomographic image in the reference image B1, shifted along the x axis by a predetermined amount d. Here, the predetermined amount d is half the sampling interval, that is, half the width of an A-scan image; in other words, the virtual image B1′ is generated as if the object had been shifted by half a pixel.
  • When the virtual image B1′ has been generated in this way, it is stored in the storage unit 42.
  • In step S5, the reference image B1 and the virtual image B1′ are aligned and superimposed so that the retinal layer L, which is the object, does not shift, and a composite image B2 is generated as shown in FIG. 6C.
  • When the composite image B2 has been generated in this way, it is stored in the storage unit 42.
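  • Steps S4 and S5 above (generating the virtual image B1′ shifted by half an A-scan width and superimposing it on B1) can be sketched as a column interleaving that doubles the A-scan density. The patent does not specify how the half-shifted columns of B1′ are computed; linear interpolation between neighbouring A-scans is used here purely as an illustrative assumption, and all names are hypothetical.

```python
import numpy as np

def make_composite(b1):
    """Sketch of steps S4-S5: build a virtual image B1' whose columns
    sample the object half an A-scan width to the side of B1's columns,
    then interleave the two so the composite B2 has twice as many
    A-scan columns as B1."""
    b1 = np.asarray(b1, dtype=float)            # shape (z_pixels, n_ascans)
    virtual = 0.5 * (b1[:, :-1] + b1[:, 1:])    # assumed: midpoints of adjacent A-scans
    virtual = np.hstack([virtual, b1[:, -1:]])  # pad the last half-step column
    b2 = np.empty((b1.shape[0], 2 * b1.shape[1]), dtype=float)
    b2[:, 0::2] = b1        # original A-scans at even columns
    b2[:, 1::2] = virtual   # half-shifted virtual A-scans at odd columns
    return b2
```

For a one-row image with columns 0, 2, 4 this yields columns 0, 1, 2, 3, 4, 4: the even columns are the measured A-scans and the odd columns the interpolated half-shift positions.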
  • In step S6, tomographic images Ti other than the reference image B1 (tomographic image T1) are added to the composite image B2 as addition target images B3 to generate an image for interpretation.
  • Specifically, all of the tomographic images Ti other than the tomographic image T1 selected as the reference image B1, or a portion of them selected arbitrarily, are called one by one from the storage unit 42 as an addition target image B3; for each sampling image (A-scan image) of the addition target image B3, an image corresponding to that sampling image is searched from the composite image B2, and the process of adding the sampling image to the searched image is repeated.
  • Steps S3 to S6 will now be described in detail with reference to an explanatory diagram (FIG. 7) based on a simplified model of the object and the image: the object is simplified to a substantially circular shape, and the image is simplified to one composed of four sampling images (A-scan images). FIG. 7 shows a model for explanation only.
  • In the reference image B1 selected in step S3, the object is located in the second A-scan image A2 from the left.
  • In step S4, a virtual image B1′ is generated as if the object had been shifted by half the A-scan width; in the virtual image B1′, the left half of the object is located in the leftmost A-scan image A1′, and the right half in the second A-scan image A2′ from the left.
  • In step S5, the reference image B1 and the virtual image B1′ are superimposed to generate the composite image B2; its eight A-scan images overlap by half their width, and the object is included in the second, third, and fourth A-scan images from the left.
  • In step S6, it is assumed that the addition target image B3 was acquired in a state where the object itself had moved to the right by half the A-scan width; in the addition target image B3, therefore, the left half of the object is located in the second A-scan image A2 from the left, and the right half in the third A-scan image A3 from the left.
  • When the composite image B2 is searched for the region corresponding to the A-scan image A2 of the addition target image B3 as the search-target sampling image, the second region from the left in the composite image B2, that is, the region indicated by the one-dot chain line, matches best, and the sampling image is added to this region.
  • If, as in the prior art, the composite image B2 were not generated and the search for the region corresponding to the search-target sampling image were instead performed on the reference image B1, no matching region could be found in an example such as FIG. 7; in that case, the sampling image would simply be ignored without being added.
  • In contrast, according to the present embodiment, the reference image B1 obtained by actually scanning the object is superimposed on the virtual image B1′ generated from the reference image B1 so that the sampling interval becomes half the A-scan width. Since the composite image B2 generated in this way is as if it had twice as many pixels as the reference image B1, using the composite image as the reference in the addition processing of the addition target image B3 allows such sampling images to be included in the addition target, so the interpretation image obtained after the addition processing can have a higher resolution than an image obtained by actually scanning the object.
  • In step S6, the search to determine which region of the composite image B2 corresponds to each sampling image of the addition target image B3 can be performed, for example, by calculating the correlation coefficient r shown below, where A_S denotes the search-target sampling image of the addition target image B3 and A_C denotes a region image of the composite image B2:
  • r = Σ_k (A_S(k) - Ā_S)(A_C(k) - Ā_C) / √[ Σ_k (A_S(k) - Ā_S)² · Σ_k (A_C(k) - Ā_C)² ]   (Equation 1)
  • In the above equation (Equation 1), A(k) is the set of pixel values (number of pixels n), and Ā (A with an overbar) is the average of those pixel values.
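  • The correlation coefficient r of Equation 1 is the standard Pearson correlation between the pixel values of the search-target sampling image A_S and those of a candidate region A_C. A minimal sketch (function name assumed):

```python
import numpy as np

def correlation_r(a_s, a_c):
    """Correlation coefficient r of Equation 1: a_s is the search-target
    A-scan of the addition target image B3, a_c a candidate region of
    the composite image B2 (same number of pixels n)."""
    a_s = np.asarray(a_s, dtype=float).ravel()
    a_c = np.asarray(a_c, dtype=float).ravel()
    ds, dc = a_s - a_s.mean(), a_c - a_c.mean()   # deviations from the means
    return float((ds * dc).sum() / np.sqrt((ds ** 2).sum() * (dc ** 2).sum()))
```

r ranges from -1 to 1, and the candidate region with the largest r is taken as the match.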
  • The search time can be shortened by performing matching using only a predetermined region of interest rather than the entire search-target sampling image A_S. For example, a region whose total luminance value is large, whose luminance contrast (maximum and minimum values) is large, or a retinal-layer or lesion region whose total edge strength is large may be set as the region of interest, and matching may be performed only on that region.
  • Likewise, for the region images A_C serving as the search area of the composite image B2, the search time can be shortened by matching only a predetermined search area. For example, with the position of the search-target sampling image A_S in the addition target image B3 as a reference position, only three regions of the composite image B2 (the region corresponding to the reference position and the regions on either side of it) may be used as the search area, and matching may be performed on those three regions only.
  • In step S6, the process of adding the search-target sampling image A_S to the corresponding region found in the composite image B2 is repeated for each sampling image of the addition target image B3. Step S6 is further repeated for the desired number of tomographic images Ti to be added; after the addition processing to the composite image B2 is completed, the average is taken to generate a high-resolution image for interpretation.
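  • Putting the above together, the search-and-add loop of step S6 can be sketched as below. The restriction of the search to a few candidate regions around the nominal position follows the description above; all names, the interleaving factor of two, and the use of Equation 1's correlation for matching are illustrative assumptions, not the patent's literal implementation.

```python
import numpy as np

def _r(a, b):
    """Correlation coefficient of Equation 1 between two pixel columns."""
    da, db = a - a.mean(), b - b.mean()
    denom = np.sqrt((da ** 2).sum() * (db ** 2).sum())
    return (da * db).sum() / denom if denom else -1.0

def add_and_average(b2, target_images, window=1):
    """Sketch of step S6: for every A-scan column of each addition
    target image B3, search the composite B2 (only `window` columns
    either side of the nominal position) for the best-correlated
    column, accumulate the A-scan there, and finally average."""
    acc = b2.astype(float).copy()
    count = np.ones(b2.shape[1])          # B2 itself counts once per column
    for b3 in target_images:
        for j in range(b3.shape[1]):
            col = b3[:, j].astype(float)
            centre = 2 * j                # nominal column of this A-scan in B2
            lo = max(0, centre - window)
            hi = min(b2.shape[1], centre + window + 1)
            scores = [_r(col, cand) for cand in b2[:, lo:hi].T]
            best = lo + int(np.argmax(scores))
            acc[:, best] += col           # add the sampling image where it matches
            count[best] += 1
    return acc / count                    # averaged image for interpretation
```

Columns of B3 that match a half-shifted (odd) column of B2 are thereby accumulated at the sub-sampling position rather than being discarded.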
  • As described above, according to the present embodiment, a high-resolution image exceeding hardware limits can be generated from a plurality of images obtained by scanning substantially the same position in the same direction, without improving the hardware.
  • the image processing system according to the present invention has been described with reference to the drawings.
  • the present invention is not limited to the above-described embodiment, and various modifications can be made.
  • the present invention is also applicable to an image processing system that processes images other than fundus tomographic images.

Abstract

The present invention relates to an image processing device (50) comprising an image generation means (51) and an addition means (52). By performing image processing on a first image (B1) obtained by scanning at a first sampling interval in a given direction, the image generation means (51) generates a second image (B2) having a second sampling interval that is shorter than the first sampling interval. Taking as third images (B3) images that are different from the first image (B1) and are obtained by scanning at the first sampling interval in the same direction at different sampling positions, the addition means (52) searches, for each sampling image of a third image (B3), the second image (B2) for an image corresponding to that sampling image and adds the sampling image to the image found.
PCT/JP2015/065171 2014-05-28 2015-05-27 Image processing device, image processing method, and image processing program WO2015182632A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016523521A JP6461936B2 (ja) 2014-05-28 2015-05-27 Image processing device, image processing method, and image processing program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014110051 2014-05-28
JP2014-110051 2014-05-28

Publications (1)

Publication Number Publication Date
WO2015182632A1 (fr) 2015-12-03

Family

Family ID: 54698956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/065171 WO2015182632A1 (fr) 2014-05-28 2015-05-27 Image processing device, image processing method, and image processing program

Country Status (2)

Country Link
JP (1) JP6461936B2 (fr)
WO (1) WO2015182632A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012075640A (ja) * 2010-09-30 2012-04-19 Nidek Co Ltd Ophthalmologic observation system
JP2013034658A (ja) * 2011-08-08 2013-02-21 Nidek Co Ltd Fundus photographing apparatus
JP2014083263A (ja) * 2012-10-24 2014-05-12 Nidek Co Ltd Ophthalmologic photographing apparatus and ophthalmologic photographing program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7044421B1 (ja) 2021-03-15 2022-03-30 株式会社吉田製作所 OCT apparatus, control method therefor, and OCT apparatus control program
JP2022141083A (ja) 2021-03-15 2022-09-29 株式会社吉田製作所 OCT apparatus, control method therefor, and OCT apparatus control program

Also Published As

Publication number Publication date
JPWO2015182632A1 (ja) 2017-04-20
JP6461936B2 (ja) 2019-01-30

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15799277; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2016523521; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 15799277; Country of ref document: EP; Kind code of ref document: A1)