WO2013167641A1 - Analysis and visualization of oct angiography data - Google Patents
- Publication number
- WO2013167641A1 (PCT/EP2013/059560)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vessel
- recited
- oct
- image
- capillary
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/102—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B9/00—Measuring instruments characterised by the use of optical techniques
- G01B9/02—Interferometers
- G01B9/02083—Interferometers characterised by particular signal processing and presentation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B9/00—Measuring instruments characterised by the use of optical techniques
- G01B9/02—Interferometers
- G01B9/0209—Low-coherence interferometers
- G01B9/02091—Tomographic interferometers, e.g. based on optical coherence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Definitions
- The preferred embodiment uses the vasculature en face image to derive clinically significant information, the most important of which is the automated detection of the foveal avascular zone (FAZ).
- The detection of the FAZ uses the output of process A (FIG. 1) and can be accomplished by creating an intensity histogram of the vasculature en face image.
- The intensity histogram (FIG. 7) is a standard image-processing representation showing the frequency distribution of the intensity values in the image: frequency is plotted along the ordinate and intensity along the abscissa.
- Many software systems contain standard, optimized libraries for generating the image histogram.
- The preferred embodiment starts the detection process by detecting the peaks of the histogram. As illustrated in FIG. 7, the histogram of the vascular en face image shows a characteristic bi-modal distribution, and the transition between the FAZ and the rest of the map is located between these peaks A and B.
- The en face image is then interrogated for pixels at an intensity level i lying between these peaks. Lines of similar intensity are called isophotes, and the trace of the isophote at intensity level i provides the preferred embodiment with a discriminating contour which maximally contains the FAZ on the inside and the vascular retina on the outside (FIG. 8).
- The isophotes are generally delineated by a contour operation, which entails thresholding the intensity levels of the image within a small neighborhood of the required isophote at level i, as indicated by the arrow in FIG. 8.
- The results of the isophote delineation are shown in FIG. 8. Because of the noisy nature of the signal, some small isolated contours can also be seen, but the algorithm employed in the preferred embodiment ensures that the isophote that is convex and has the largest perimeter will always enclose the FAZ.
- The preferred embodiment reports multiple matches (the small isolated contours in FIG. 8) as a list of the x- and y-coordinates of each contour. These coordinates can be used in a straightforward way to calculate the perimeter of each contour.
- The contour that best delineates the FAZ is the one with the largest perimeter (see the sketch at the end of this section).
- The region inside the FAZ contour can be isolated from the rest of the contours as illustrated in FIG. 9.
- Morphological parameters of the resulting FAZ region can then be calculated to quantify the FAZ on a case-by-case basis.
- The area is calculated by counting the number of "on" pixels, scaled by the area of each pixel, and is 5675 pixel units in this example.
- The delineated FAZ has an eccentricity of 0.71, with a major axis of 104.7 pixel units (523.5 microns) and a minor axis of 73.1 pixel units (548.5 microns).
- The remainder of the vascular map (exterior to the delineated FAZ) can be used to calculate capillary density, especially as a function of distance from the center of the fovea, and to generate a map of the density in radially distributed sectors around the fovea as illustrated in FIG. 10.
- Radial sectors can be analyzed, and the ratio of "on" to "off" pixels can be used to generate a coarse sectorial vessel density map.
- The definition of the "on" and "off" pixels can be established by a judicious threshold selection.
- An alternative embodiment (process C, FIG. 1) allows the use of prior knowledge of the fovea location from the magnitude image, using a fovea detection algorithm (see for example US Patent No. 8,079,711, hereby incorporated by reference), to create a localized histogram and hence confine the search for the vascular-avascular transition zone to a smaller area.
- If the imaged data does not contain a FAZ, the FAZ detection by the preferred embodiment on images generated by process A will most likely fail to satisfy the maximum-perimeter criterion.
- Such a case will be flagged by a null (zero) FAZ detection (i.e., a blank image in FIG. 9) after morphological cleaning steps.
- Other clinically significant quantifiable characteristics or metrics that can be derived from the en face vasculature image (from one or more plexus) or from a 3D OCT angiography data volume include total capillary volume (sum of all pixels above a given threshold intensity), as well as metrics that could be derived from the blood vessel patterns, such as tortuosity, regularity, segment length, total crossings, the number of bifurcations, vessel width parameters, ratio of small to large vessels, capillary density, capillary density ratio between arteriole and venous capillaries, capillary diameter, inter-capillary distance, area bounded by capillary loops, etc.
- A metric can be compared to a database of normal eyes or eyes with a known pathology to diagnose or to track the progression of a particular disease or condition.
- Another way to use this information is to use the foveal avascular zone as a region over which to evaluate other features such as layer thickness or other parameters derived from the OCT image, or other registered images.
- The thickness of the photoreceptor layer within the FAZ should be specifically related to cones rather than rods.
- Registration to images that contain other information about the photoreceptors, such as adaptive optics images, might allow quantification of multiple metrics that affect the foveal region.
- The preferred embodiment described above refers to visualization of retinal capillary vessels and metrics associated with these. Similar methods could be applied to the choroidal vasculature, including the choriocapillaris, Sattler's layer and Haller's layer. Although various applications and embodiments that incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise other varied embodiments that still incorporate these teachings. Although the description of the present invention is discussed herein with respect to the sample being a human eye, the applications of this invention are not limited to the eye and can be applied to any application using OCT.
- Leitgeb "Imaging of the parafoveal capillary network and its integrity analysis using fractal dimension,” Biomed. Opt. Express 2, 1159-1168 (2011).
- Leitgeb et al. "Real-time assessment of retinal blood flow with ultrafast acquisition by color Doppler FDOCT,” Optics Express, 11, 3116-3121 (2003).
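The FAZ delineation referenced above can be summarized, purely as an illustrative sketch and not as the claimed implementation, by the following Python code. It assumes scikit-image is available for contour tracing and polygon filling, uses a crude two-peak histogram split in place of a full peak-finding step, and omits the convexity test of the preferred embodiment.

```python
# Minimal sketch of automated FAZ delineation from an en face vasculature image:
# histogram -> isophote level between the two peaks -> largest-perimeter contour.
import numpy as np
from skimage import measure, draw

def detect_faz(en_face, n_bins=256):
    counts, edges = np.histogram(en_face, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Crude bimodal split: one peak in the lower half (FAZ/background), one in the upper half (vessels).
    lower_peak = np.argmax(counts[: n_bins // 2])
    upper_peak = n_bins // 2 + np.argmax(counts[n_bins // 2:])
    level = centers[(lower_peak + upper_peak) // 2]            # isophote level i between the peaks

    contours = measure.find_contours(en_face, level)            # traces all isophotes at level i
    if not contours:
        return np.zeros(en_face.shape, bool)                    # null (zero) FAZ detection
    perimeters = [np.sum(np.linalg.norm(np.diff(c, axis=0), axis=1)) for c in contours]
    faz_contour = contours[int(np.argmax(perimeters))]          # keep the largest-perimeter contour

    rr, cc = draw.polygon(faz_contour[:, 0], faz_contour[:, 1], en_face.shape)
    mask = np.zeros(en_face.shape, bool)
    mask[rr, cc] = True
    return mask

# Morphological parameters of the delineated FAZ (area, eccentricity, axis lengths) could then be
# read from, e.g., skimage.measure.regionprops(mask.astype(int))[0].
```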
Abstract
Methods for analyzing and visualizing OCT angiography data are presented. In one embodiment, an automated method for identifying the foveal avascular zone in a two-dimensional en face image generated from motion contrast data is presented. Several 3D visualization techniques are presented, including one in which a particular vessel is selected in a motion contrast image and all connected vessels are highlighted. A further embodiment includes a stereoscopic visualization method. In addition, a variety of metrics for characterizing OCT angiography image data are described.
Description
ANALYSIS AND VISUALIZATION OF OCT ANGIOGRAPHY DATA
PRIORITY
The following application claims priority to US Provisional Application Serial No. 61/645,513 filed May 10, 2012 and US Provisional Application Serial No. 61/691,219 filed August 20, 2012, the contents of both of which are hereby incorporated by reference.
TECHNICAL FIELD
The present invention relates to medical imaging, and in particular to analysis and visualization methods for data acquired through optical coherence tomography.
BACKGROUND
Optical coherence tomography (OCT) is a noninvasive, noncontact imaging modality that uses coherence gating to obtain high-resolution cross-sectional images of tissue microstructure. In Fourier domain OCT (FD-OCT), the interferometric signal between light from a reference and the back-scattered light from a sample point is recorded in the frequency domain rather than the time domain. After a wavelength calibration, a one-dimensional Fourier transform is taken to obtain an A-line spatial distribution of the object scattering potential. The spectral discrimination in FD-OCT can be accomplished by using a dispersive spectrometer in the detection arm in the case of spectral-domain OCT (SD-OCT) or by rapidly tuning a swept laser source in the case of swept-source OCT (SS-OCT).
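The following Python sketch (not part of the patent disclosure) illustrates the FD-OCT reconstruction step just described, assuming a background-subtracted spectral fringe that has already been resampled to be linear in wavenumber; the window choice and the synthetic single-reflector fringe are illustrative assumptions only.

```python
# Minimal sketch of FD-OCT A-line reconstruction from a wavenumber-linear spectral fringe.
import numpy as np

def reconstruct_a_line(fringe_k, window=None):
    """Return the complex depth profile (A-line) for one spectral fringe."""
    fringe_k = np.asarray(fringe_k, dtype=float)
    if window is None:
        window = np.hanning(fringe_k.size)       # apodization to suppress side lobes
    spectrum = fringe_k * window
    a_line = np.fft.fft(spectrum)                # one-dimensional Fourier transform along k
    return a_line[: fringe_k.size // 2]          # keep positive depths (discard the mirror image)

# Example with a synthetic fringe from a single reflector at "depth" bin 300:
k = np.linspace(0, 2 * np.pi, 2048)
fringe = 1 + 0.5 * np.cos(300 * k)
a = reconstruct_a_line(fringe)
intensity_db = 20 * np.log10(np.abs(a) + 1e-12)  # magnitude -> structure
phase = np.angle(a)                              # phase -> motion-sensitive component
```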
Recently there has been a lot of interest in using intensity based and phase-sensitive based OCT techniques, collectively named OCT Angiography, to map the retinal vasculature or identify regions with flow in the tissue (see for example An et al. "Optical microangiography provides correlation between microstructure and microvasculature of optic nerve head in human subjects," J. Biomed. Opt. 17, 116018 (2012), Zhao et al, "Doppler standard deviation imaging for clinical monitoring of in vivo human skin blood flow," Optics Letters 25, 1358-1360 (2000), Fingler et al. "Mobility and transverse flow visualization using phase variance contrast with spectral domain optical coherence tomography" Optics Express. Vol. 15, No. 20. pp 12637 - 12653 (2007), Makita et al, "Optical Coherence Angiography," Optics Express, 14(17), 7821-7840 (2006), Mariampillai et al, "Optimized speckle variance OCT imaging of microvasculature," Optics Letters 35, 1257-1259 (2010), and Wang et al, "Frequency domain phase-resolved optical Doppler and Doppler variance tomography"
Optics Communications 242, 345-350 (2004), hereby incorporated by reference). OCT Angiography provides a non-invasive technique to visualize and indirectly quantify the integrity of retinal circulation pathways. Anomalies in retinal circulation have a direct relation to ocular pathologies, especially within the macula, where compromised hemodynamics may not only be related to decreased visual acuity but could also be a surrogate biomarker for ocular pathologies like retinal vein occlusion (RVO), diabetic retinopathy (DR), and intraretinal microvascular abnormality (IRMA). Specifically, correlations between retinal vasculature and blood flow are attributes of interest in a number of ocular defects. DR and RVO are pathologies that could lead to early changes in vascular structure and function, and may, in turn, be etiologic to numerous complications like macular edema, retinal ischemia and optic neuropathy. For these cases, quantification and
visualization of vasculature, capillaries and flow can be a versatile diagnostic tool. For example, ischemic regions in the retina can be mapped to evaluate the extent of damage and to guide further management of the disease. In addition to the vascular-rich retina, there is a small area in the macula, at the fovea, which is devoid of any capillaries. This is called the Foveal Avascular Zone (FAZ), and abnormal changes in the size of this region are also indicative of pathologies like ischemic maculopathy and DR. Quantifying the FAZ and measuring changes in its size over time can therefore provide a clinically significant numerical score for disease presence and progression, especially for DR.
Conventional techniques to visualize retinal vasculature are invasive in nature, and use pharmacological techniques to modify contrast in the imaged retina. Contemporary clinical practice involves injection of a fluorescent dye (such as fluorescein or indocyanine green (ICG)) into the systemic circulation, and the eye is then scanned to generate an image, which selectively shows the path of the dye through the vascular network (FA, Fluorescein
Angiography). No information about the depth structure of the vasculature is captured by this method. In contrast, vascular images generated by examining the OCT intensity or phase signal are non-invasive, and provide comparable fidelity in capturing the existing vascular network with blood flow contrast, along with its depth encoding.
There have been a few descriptions detailing the detection of FAZ in contemporary literature, but all the methods discussed are either manual, performed by experts, or are semi- automated, requiring an informed bootstrapping of the downstream method with a manually selected starting point or region. A few relevant publications, and references therein, serve to
inform about the existing prior art (see for example Kim et al. "Noninvasive Imaging of the Foveal Avascular Zone with High-Speed, Phase- Variance Optical Coherence Tomography" Investigative Ophthalmology & Visual Science, 53 (1), 85 - 92 (2012), Zheng et al.
"Automated segmentation of foveal avascular zone in fundus fluorescein angiography" Retina. 51(7): 3653-3659 (2010), Yong et al. "Novel Noninvasive Detection of the Fovea Avascular Zone Using Confocal Red-Free Imaging in Diabetic Retinopathy and Retinal Vein Occlusion" Retina. 52: 2649 - 2655 (2011), and Wang et al. "Imaging Retinal Capillaries Using Ultrahigh-Resolution Optical Coherence Tomography and Adaptive Optics" Invest. Ophthalmol. Vis. Sci. 52. 6292-6299 (2011) hereby incorporated by reference).
Diagnostically, changes to both the vascular and the typically avascular retina are important indicators of developing retinal pathologies. Although visualization of the vascular structure alone boosts the diagnostic efficacy of this imaging technique, it can be further leveraged by augmenting the visualization with salient quantifications and metrics derived from the identified vascular and avascular sections of the retina. The primary quantities of interest are the global or structure-specific retinal blood flow kinetics, which can be challenging to quantify because of the low flow velocities relative to the temporal resolution of the technique and the almost perpendicular orientation of the capillaries with respect to the probing beam. In addition to visualization, quantifiers derived from the angiography data that help differentiate capillary networks in healthy and diseased eyes are also desirable.
Recently, a few research groups have explored quantitative methods for angiography data to construct meaningful numerical indicators of vascular pathology. Techniques such as fractal dimension analysis have been used to study vessel morphology, distribution and allied features. Avakian et al. demonstrated the use of fractal characterization of fluorescein angiography (FA) images of the human retina to distinguish between healthy and diseased retina (see for example Avakian, et al., "Fractal analysis of region-based vascular change in the normal and non-proliferative diabetic retina," Curr. Eye Res. 24, 274-280, 2002). Schmoll et al. applied a related fractal dimension algorithm to analyze the integrity of the parafoveal capillary network non-invasively using OCT angiography images (see for example Schmoll et al. "Imaging of the parafoveal capillary network and its integrity analysis using fractal dimension" Biomed. Opt. Express 2, 1159-1168, 2011). Also, Jia et al. and An et al. applied simpler vessel density measurements to quantitatively evaluate the capillary network within the human optic nerve head using OCT angiography methods (see for example Jia et al.,
"Quantitative OCT angiography of optic nerve head blood flow," Biomed. Opt. Express 3, 3127-3137, 2012 and An et al., "Optical microangiography provides correlation between microstructure and microvasculature of optic nerve head in human subjects," J. Biomed. Opt. 17, 116018, 2012).
One piece of important anatomical information that is captured by OCT angiography is the depth information, or the spatial distribution of the vessels in the retinal tissue. To visualize the complex capillary networks and to make use of the additional depth information gained by OCT angiography compared to traditional angiography methods such as FA, OCT angiography data is often displayed as 2D projections with color-encoded depth information (see Kim et al. "In vivo volumetric imaging of human retinal circulation with phase variance OCT," Biomedical Optics Express, 2(6), 1504-1513 (2011)). Such 2D projections at least allow capillary layers at different depths to be distinguished. However, they lack a true 3D impression and do not provide easily accessible information about which larger retinal vessels feed and drain different capillary network regions.
Retinal vessel connectivity measures are also known for fundus photography; however, they focus only on a few major retinal vessels in 2D fundus images, rather than visualizing the supply of dense, complex parafoveal capillary networks (see for example Al-Diri et al.
"Automated analysis of retinal vascular network connectivity," Computerized Medical Imaging and Graphics, 34, 462-470 (2010)). Ganesan et al. investigates the connectivity of vessels in mouse retinas from the largest vessels to the smallest capillaries in confocal microscopy images in order to develop a network model (see for example Ganesan et al. "Development of an Image-Based Network Model of Retinal Vasculature," Annals of Biomedical Engineering 38(4) 1566-1585 (2010)). They however don't describe using this as a way to interactively visualize human angiography acquisitions.
SUMMARY
The method described herein is a non-invasive, computational technique to generate images of retinal vasculature (or blood flow) that are then used to either extract various
diagnostically relevant metrics related to retinal micro-circulation, and/or can subsequently be used to visualize the vascular and capillary structure in relation to the structure of the retinal tissue. The anatomical location of the vasculature is defined as locations where there is an appreciable motion contrast, which is typically due to flow of blood. There are a variety of
OCT based methods to detect motion contrast, such as Doppler OCT, speckle or intensity variance, and phase-resolved methods. In most of these methods, the motion contrast is determined by obtaining at least two OCT measurements at approximately the same location, separated in time by a pre-determined interval, and then applying an algorithm to examine the changes in the complex OCT signal or its components, such as intensity or phase. The accuracy of these measurements can be improved by minimizing motion-related errors, including but not limited to removing signal due to bulk motion of the sample in the axial direction.
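As a rough illustration only, and not the specific algorithm claimed here, the sketch below computes two common motion-contrast quantities, a phase-difference variance and a normalized intensity (speckle) variance, from N repeated complex B-scans acquired at the same location. The median-based bulk-motion correction is a simplified stand-in for a proper bulk-motion estimator.

```python
# Illustrative motion-contrast computation from repeated complex B-scans, shape (N, depth, x).
import numpy as np

def motion_contrast(bscans, eps=1e-12):
    bscans = np.asarray(bscans)                            # complex array, (N, z, x)
    intensity = np.abs(bscans) ** 2

    # Phase differences between consecutive repeats.
    dphi = np.angle(bscans[1:] * np.conj(bscans[:-1]))     # (N-1, z, x)

    # Crude bulk-motion correction: remove the median axial phase shift of each A-line
    # (an assumed, simplified correction, not the method described in the disclosure).
    bulk = np.median(dphi, axis=1, keepdims=True)          # (N-1, 1, x)
    dphi = np.angle(np.exp(1j * (dphi - bulk)))            # re-wrap to (-pi, pi]

    phase_variance = np.var(dphi, axis=0)                  # high where scatterers move (flow)
    speckle_variance = np.var(intensity, axis=0) / (np.mean(intensity, axis=0) + eps)
    return phase_variance, speckle_variance
```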
In a preferred embodiment of the present invention, the extracted vascular structure is post-processed to remove outliers and smooth the vessel structure. This derived structure is then depth coded and displayed over the rendered retinal anatomy (magnitude image). Further, the post-processed vascular structure within a specific depth range can be summed or integrated along the axial direction to generate a projection map that shows the vasculature as an en face view, devoid of any depth information. By examining the order or intensity statistics of this image, the regions devoid of any vasculature (such as the FAZ) can be delineated
automatically, and their shape and size can be quantified. Alternatively, previously acquired fundus images, along with a fovea detector, can be registered to this synthetic en face image of the vasculature to assist in the detection of the fovea, around which there is a high chance of finding the FAZ. The proposed invention deviates from the known prior art by detailing a completely automated (no manual intervention) method to accurately determine the capillary-devoid regions of the retina, by examination of the statistical properties of the intensity content of the retinal image, which preferentially contrasts vascular regions.
In addition to the avascular zone, other metrics, like the vasculature density, capillary density, vessel geometry, capillary diameter, inter-capillary distance, area bounded by capillary loops, etc. can be determined by standard mathematical models and tools. These metrics can be identified in the vicinity of the fovea, or in other areas of interest, such as the perifoveal or peripapillary regions, the papillomacular bundle, or within the optic nerve head. This technique could be further extended to automatically identify regions of retinal ischemia in pathologies such as branch retinal vein occlusion (BRVO) and central retinal vein occlusion (CRVO). Furthermore, these techniques could also assist in identifying intraretinal microvascular abnormalities (IRMA). IRMA is typically a DR-related condition that results in areas of capillary dilatation and intraretinal formation of new capillary beds. Often the
IRMA-related new vessel formation occurs in retinal tissues to act as shunts through areas of nonperfusion or ischemia. Changes in the metrics defined above could be used as a criterion to monitor whether there has been new growth of vasculature or a change in the regions of nonperfusion or ischemia.
In a further embodiment of the invention, a novel method for effectively visualizing OCT angiography acquisitions in a meaningful way and quantitatively characterizing vasculature networks is presented. The examiner could select a vessel within an OCT angiography acquisition and the program would show all connecting vessels down to the capillary network. The information about the connectivity of different retinal vessels may also be used to quantitatively evaluate OCT angiography acquisitions and compare them to a normative database.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 shows a flow chart of the various steps involved with processing motion contrast OCT data according to the present invention.
FIG. 2 illustrates a generalized ophthalmic OCT imaging system that could be used for collection of motion contrast data.
FIG. 3 shows an en face vasculature image generated from OCT data using normalized vector difference variance.
FIG. 4 shows an OCT image of the retina illustrating three different plexus and the layers that they include.
FIG. 5a shows a 3D visualization of OCT angiography data and FIG. 5b shows a corresponding 2D projection image.
FIG. 6 illustrates how a stereoscopic image pair could be generated from motion contrast data to enable a type of 3D visualization.
FIG. 7 shows an intensity histogram of an en face vasculature image that can be used to identify the foveal avascular zone (FAZ).
FIG. 8 shows the results of an isophote delineation of an en face vasculature image according to one aspect of the present invention.
FIG. 9 illustrates how the FAZ can be isolated from the rest of the image data after the isophote delineation in FIG. 8.
FIG. 10 shows a map of the density of vessels in radially distributed sectors around the fovea.
DETAILED DESCRIPTION
Preferred and alternative embodiments for the processing of vasculature enhanced OCT data are illustrated in the schematic of FIG. 1. This figure illustrates three possible ways to generate and use vasculature data from complex OCT data 102 acquired and reconstructed in an OCT system 101. The text below refers to each possible combination as a "Process", A, B or C, depending on which process path has been taken to generate the input data for the embodiment under discussion. In each process, different portions of the complex data (intensity only, phase only, or both intensity and phase) undergo different processing steps (layer segmentation, motion contrast, integration/summation for en face image generation) to generate different vasculature related information.
OCT data can be collected with any type of OCT system employing a variety of scan patterns, for example, a spectral domain OCT system, or a swept source OCT system, employing laser sources of different wavelengths, such as 840 nm or 1060 nm. A diagram of a generalized OCT system is shown in FIG. 2. Light from source 201 is routed, typically by optical fiber 205, to illuminate the sample 210, a typical sample being tissues in the human eye. The source 201 can be either a broadband light source with short temporal coherence length in the case of SD-OCT or a wavelength tunable laser source in the case of SS-OCT. The light is scanned, typically with a scanner 207 between the output of the fiber and the sample, so that the beam of light (dashed line 208) is scanned laterally (in x and y) over the area or volume to be imaged. Light scattered from the sample is collected, typically into the same fiber 205 used to route the light for sample illumination. Reference light derived from the same source 201 travels a separate path, in this case involving fiber 203 and retro-reflector 204 with an adjustable optical delay. Those skilled in the art recognize that a transmissive reference path can also be used and that the adjustable delay could be placed in the sample or reference arm of the interferometer. Collected sample light is combined with reference light, typically in a fiber coupler 202, to form light interference in a detector 220. Although a single fiber port is shown going to the detector, those skilled in the art recognize that various designs of interferometers can be used for balanced or unbalanced detection of the interference signal. The output from the detector is supplied to a processor 221. The results can be stored in the processor 221 or displayed on display 222. The processing and storing functions may be localized within the OCT instrument or functions may be performed
on an external processing unit to which the collected data is transferred. This unit could be dedicated to data processing or perform other tasks which are quite general and not dedicated to the OCT device.
The sample and reference arms in the interferometer could consist of bulk-optics, fiber-optics or hybrid bulk-optic systems and could have different architectures such as Michelson, Mach-Zehnder or common-path based designs as would be known by those skilled in the art. The term "light beam" as used herein should be interpreted as any carefully directed light path. In time-domain systems, the reference arm needs to have a tunable optical delay to generate interference. Balanced detection systems are typically used in TD-OCT and SS-OCT systems, while spectrometers are used at the detection port for SD-OCT systems. The invention described herein could be applied to any type of OCT system capable of generating data for functional analysis.
The interference causes the intensity of the interfered light to vary across the spectrum. The Fourier transform of the interference light reveals the profile of scattering intensities at different path lengths, and therefore scattering as a function of depth (z-direction) in the sample (see for example Leitgeb et al. "Ultrahigh resolution Fourier domain optical coherence tomography," Optics Express 12(10):2156 (2004)). Typically, the Fourier transform results in complex data, and the absolute values of the complex data are tabulated to construct the intensity image. The complex OCT signal also encodes information related to the phase shifts arising from local sample motion, and can be used to deduce quantities related to physical motion of dominant scatterers in the sample with high sensitivity. The profile of scattering as a function of depth is called an axial scan (A-scan). A set of A-scans measured at neighboring locations in the sample produces a cross-sectional image (tomogram or B-scan) of the sample. A collection of B-scans collected at different transverse locations on the sample makes up a data volume or cube. For a particular volume of data, the term fast axis refers to the scan direction along a single B-scan whereas slow axis refers to the axis along which multiple B-scans are collected.
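For concreteness, the short sketch below shows one possible in-memory layout for such a data cube and how A-scans, B-scans and depth slabs can be indexed; the axis ordering, dimensions and dtype are illustrative assumptions rather than anything prescribed by the disclosure.

```python
# Illustrative layout of an OCT data cube:
# axis 0 = slow axis (B-scan index), axis 1 = fast axis (A-scan index), axis 2 = depth (z).
import numpy as np

n_slow, n_fast, n_depth = 300, 300, 1024          # hypothetical scan pattern
volume = np.zeros((n_slow, n_fast, n_depth), dtype=np.complex64)

b_scan = volume[150]          # one tomogram: all A-scans of B-scan 150 (fast axis x depth)
a_scan = volume[150, 200]     # one axial scattering profile at a single transverse location
slab = np.abs(volume[:, :, 400:450]) ** 2          # intensity of a depth slab, e.g. for en face views
```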
Ideally the data will be collected while monitoring and correcting for any motion as described in US Patent Publication No. 2012/0249956 hereby incorporated by reference. Any one of a number of OCT angiography techniques (phase variance, speckle variance, Doppler, ultrahigh sensitive optical microangiography (UHS-OMAG), etc.) can be applied to the
resulting complex OCT data set to examine the motion contrast. The result of this analysis can provide an image containing the volumetric definition of the location of the blood vessels as hyperintense signals (see for example Kim et al. "In vivo volumetric imaging of human retinal circulation with phase variance OCT," Biomedical Optics Express, 2(6), 1504-1513 (2011)). Intensity-based local searches or global approaches can be used on the magnitude images to extract prominent layers, which are then used as boundaries for the summation of intensities. The boundaries extracted serve to include in the summation only that tissue extent which is known a priori to contain blood vessels. The result of this summing procedure is a flat view (projection) of the volume looking along (and into) the imaging axis, and the features in this projection (en face vasculature image) capture the vascular distribution (Process A in FIG. 1). Since the summation integrates out the depth along the axial direction, this view only captures the overall morphology of the vasculature, and not its position in the depth direction.
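A minimal sketch of such a selective summation is given below, assuming a motion-contrast volume and two pre-segmented boundary surfaces; the axial pixel size and slab thickness mentioned in the usage comment are assumptions for illustration, not values from the disclosure.

```python
# "Process A"-style selective summation between two segmented boundary surfaces.
import numpy as np

def en_face_projection(contrast, top, bottom, reduce="sum"):
    """contrast: (slow, fast, depth); top/bottom: (slow, fast) boundary indices in pixels."""
    n_slow, n_fast, n_depth = contrast.shape
    z = np.arange(n_depth)[None, None, :]
    # Mask of voxels between the two boundaries (inclusive of top, exclusive of bottom).
    slab = (z >= top[..., None]) & (z < bottom[..., None])
    masked = np.where(slab, contrast, 0.0)
    if reduce == "sum":                        # integration along depth (loses depth information)
        return masked.sum(axis=2)
    if reduce == "max":                        # an alternative single representative value
        return masked.max(axis=2)
    if reduce == "mean":
        return masked.sum(axis=2) / np.maximum(slab.sum(axis=2), 1)
    raise ValueError(reduce)

# Hypothetical usage for a 50-micron slab below the ILM, assuming ~2 microns per axial pixel:
# vasc_en_face = en_face_projection(contrast, ilm_index, ilm_index + 25, reduce="sum")
```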
FIG. 3 shows the en face vasculature image generated by selective summation (FIG. 1, Process A) through the motion contrasted volume generated by performing normalized vector difference variance as described in US Patent Publication No. 2012/0277579, hereby incorporated by reference. The bright vasculature indicated by the arrows 301 and 302 stands out against the dark background. The absence of vessels in the central region indicates the foveal avascular zone (FAZ). It is clear from FIG. 3 that the technique generates a good representation of the blood circulatory circuit of the retina. In this example, the bounding layers used for the selective summation were the inner limiting membrane (ILM) and a layer positioned 50 microns below it. Anatomically the vasculature is distributed in the 3 major sections of the retina (FIG. 4): the superficial capillary plexus (SCP), intermediate capillary plexus (ICP) and the deep capillary plexus (DCP) (see for example Kim et al. "Noninvasive Imaging of the Foveal Avascular Zone with High-Speed, Phase- Variance Optical Coherence Tomography" Investigative Ophthalmology & Visual Science, 53 (1), 85 - 92 (2012) hereby incorporated by reference). Each of these plexus is made up of a finite number of retinal layers, which, in an alternate embodiment, can be used as bounding layers for selective summation to generate vasculature en face views showing the three specific types of vasculature networks. Selective summation is not required to generate an en face image and other examples of selective summations or other ways to represent a plurality of intensity values as a single representative value (e.g. integration, summing, minimum, maximum, median value, etc.) can be envisioned by those skilled in the art (see for example US Patent
No. 7,301,644, US Patent Publication No. 2011/0034803 and US Patent Publication No. 2008/0100612 hereby incorporated by reference).
As mentioned previously, the projection views are devoid of any depth information, because the hyperintense pixels, which signal the presence of vasculature, are summed along the axial direction. Anatomically, the retinal vasculature is distributed in the 3-dimensional space of the retinal tissue, with the distribution and characteristics of the vasculature also varying within each specific plexus (FIG. 4). In one preferred embodiment of the present invention, the depth information in the view presented in FIG. 3 can be preserved by explicitly using the volumetric definition of the vasculature (after motion contrasting the phase data) and rendering the volume in three dimensions (3D, process B in FIG. 1). Typically, this process, an extension of process B, would entail preserving the high-intensity locations in the acquired volume using a thresholding or other selection criterion, and performing post-processing to enforce the anatomical connectivity of the vasculature. A schematic of the expected form of the 3D vasculature map is shown in FIG. 5a, where the black lines trace out the vessel path in 3D space. The corresponding 2D projection view is illustrated in FIG. 5b. The 3D
visualization mode will provide a visual representation of the vascular architecture and distribution in the space of the imaged retina. The data included in the 3D representation could be limited to a particular plexus, using the methods described above, so that only data from a particular location in the retina is displayed. The location in the retina can be defined by adjacency to a particular position, such as the centroid or a boundary identified by segmentation, or the location may be limited to lie between two such boundaries. In this way the 3D vessel model of each plexus could be independently reviewed.
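One possible way to obtain the thresholded, connectivity-enforced vessel volume described above (prior to 3D rendering) is sketched below; the 95th-percentile threshold, the minimum component size and the SciPy morphological operations are assumptions chosen for illustration, not parameters taken from the patent.

```python
import numpy as np
from scipy import ndimage

def vessel_volume_mask(motion_contrast, threshold=None, min_voxels=20):
    """Binary 3D vessel map: threshold the motion-contrast volume, bridge small gaps,
    and keep only sufficiently large connected components."""
    if threshold is None:
        threshold = np.percentile(motion_contrast, 95)   # assumed heuristic
    mask = motion_contrast > threshold
    mask = ndimage.binary_closing(mask)                  # bridge small gaps along vessels
    labels, n = ndimage.label(mask)                      # 3D connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_voxels) + 1
    return np.isin(labels, keep_labels)                  # volume ready for 3D rendering
```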
In a further embodiment of the present invention, the examiner could select a vessel within a volume using a data input device such as a mouse or touch screen interface, and the program would then highlight the connecting vessels down to the capillary level. Such visualization may improve the identification of blockages or leakages. Visualizing connected vasculature could be done, e.g., by displaying only the specific connected vasculature or by highlighting it within the volume in order to contrast it with the other vasculature.
Visualization of connected vasculature could also involve an image series or a movie, where the movie starts with only the initially selected vessel, which then grows until all the connected vessels are shown. The speed at which connecting vessels are added could be
normalized by their vessel diameter in order to mimic the propagation speed of the blood within the network.
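The growing visualization of connected vessels could, for example, be approximated by repeated geodesic dilation from the selected seed voxel, as in the sketch below; the fixed number of frames and the use of SciPy's masked dilation are illustrative assumptions, and modulating the number of dilation steps per frame by local vessel diameter would approximate the diameter-normalized propagation speed described above.

```python
import numpy as np
from scipy import ndimage

def grow_from_seed(vessel_mask, seed_index, n_frames=30):
    """Frames of a 'growing' vessel animation: starting from a user-selected voxel,
    dilate repeatedly but only within the binary vessel mask, so that connected
    vessels appear frame by frame while unconnected ones never do."""
    grown = np.zeros_like(vessel_mask, dtype=bool)
    grown[seed_index] = vessel_mask[seed_index]          # seed must lie on a vessel
    frames = []
    for _ in range(n_frames):
        grown = ndimage.binary_dilation(grown, mask=vessel_mask)
        frames.append(grown.copy())
    return frames
```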
The information created by the vessel segmentation algorithms may serve as additional quantitative parameters, which could be used for comparing data sets with a normative database or for tracking changes in a particular patient over time. Such parameters may include total vessel length, number of bifurcations, capillary density of arteriole vs. venous capillaries, the ratio of vessel diameters of parent and daughter vessels, bifurcation angles, vascular tortuosity, and capillary network volume vs. static tissue volume.
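As one concrete example of such a parameter, vascular tortuosity is often computed per vessel segment as path length divided by end-to-end (chord) distance; the sketch below assumes an ordered centerline from a prior skeletonization step and is not taken from the patent.

```python
import numpy as np

def tortuosity(centerline_points):
    """Tortuosity of one vessel segment: centerline path length / end-to-end distance.

    `centerline_points` is an (N, 3) array of ordered (z, y, x) coordinates assumed
    to come from a vessel segmentation / skeletonization step."""
    steps = np.diff(centerline_points, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(centerline_points[-1] - centerline_points[0])
    return path_length / max(chord, 1e-9)   # 1.0 for a straight segment, larger if tortuous
```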
In an alternative embodiment of the present invention, the actual depths of the vascular locations in the retina can be used as disparity maps to generate a stereoscopic image pair of the vascular network that can be viewed by the clinician to get a better idea of the vascular distribution in space. With the knowledge of the actual physical depth of the vascular features in the retina, a preferred scale is selected to map the range of possible distances (for example, from 0 to 2 mm), after which a left and right volume pair can be generated and fused into a stereoscopic pair as illustrated in FIG. 6. This can be rendered either through a special pair of 3D glasses or via a 3D display technology. Extensions to this alternative embodiment also allow augmenting the stereoscopic view with the retinal anatomy, depth-encoded from the magnitude image, to generate vascular maps in relation to the various layers of the retina. The vessels derived from the three plexuses can be preferentially color coded to generate a more informative image fusion approach to visualization.
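A minimal sketch of generating a left/right pair from the en face vessel image and a per-pixel depth map is shown below; the linear depth-to-disparity mapping, the maximum disparity of 8 pixels and the pixel-shifting scheme are assumptions for illustration only.

```python
import numpy as np

def stereo_pair(en_face, depth_map_mm, max_disparity_px=8, depth_range_mm=2.0):
    """Shift each vessel pixel horizontally by a disparity proportional to its depth
    (assumed linear over 0..depth_range_mm) to form left and right views."""
    h, w = en_face.shape
    disparity = np.round(depth_map_mm / depth_range_mm * max_disparity_px).astype(int)
    left = np.zeros_like(en_face)
    right = np.zeros_like(en_face)
    for y, x in zip(*np.nonzero(en_face)):
        d = disparity[y, x]
        if 0 <= x - d < w:
            left[y, x - d] = en_face[y, x]
        if 0 <= x + d < w:
            right[y, x + d] = en_face[y, x]
    return left, right   # fuse as an anaglyph or feed a stereoscopic display
```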
The preferred embodiment employs the vasculature en face image to derive clinically significant information, the most important of which is the automated detection of the foveal avascular zone (FAZ). The detection of the FAZ uses the output of process A (FIG. 1) and can be accomplished by creating an intensity histogram of the vasculature en face image. The intensity histogram (FIG. 7) is a widely used image-processing tool that represents the frequency distribution (plotted along the ordinate) of the intensity values in the image (plotted along the abscissa). Many software systems contain standard, optimized libraries for generating the image histogram. The preferred embodiment starts the detection process by detecting the peaks of the histogram. As illustrated in FIG. 7, the histogram of the vascular en face image shows a characteristic bi-modal distribution, and the transition between the FAZ and the rest of the map is located between these peaks, A and B. This is an empirical observation that holds true for a large set of vascular en face images. The preferred embodiment estimates this transition zone as the intensity level i = i1 + (i2 - i1)/3, where i1 and i2 are the intensities at peaks A and B. After calculating this characteristic intensity value, the en face image is interrogated for pixels having this specific intensity value. Lines of equal intensity are called isophotes, and the trace of the isophote at intensity level i provides the preferred embodiment with a discriminating contour that maximally contains the FAZ on the inside and the vascular retina on the outside (FIG. 8). The isophotes are generally delineated by a contour operation, which entails thresholding the intensity levels of the image within a small neighborhood of the required isophote at level i, as indicated by the arrow in FIG. 8.
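The histogram-based estimate of the transition level and the extraction of the corresponding isophotes might be implemented roughly as follows; the choice of 256 bins, the peak-separation heuristic and the use of SciPy/scikit-image routines are assumptions made here for illustration.

```python
import numpy as np
from scipy.signal import find_peaks
from skimage import measure

def faz_isophote_level(en_face, bins=256):
    """Locate the two dominant histogram peaks, place the FAZ transition level at
    i = i1 + (i2 - i1)/3, then trace the isophotes of the en face image at that level."""
    counts, edges = np.histogram(en_face, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks, _ = find_peaks(counts, distance=bins // 8)    # assumed peak-separation heuristic
    if len(peaks) < 2:
        raise ValueError("histogram is not bi-modal")
    top2 = peaks[np.argsort(counts[peaks])[-2:]]         # two most populated peaks
    i1, i2 = np.sort(centers[top2])
    level = i1 + (i2 - i1) / 3.0
    contours = measure.find_contours(en_face, level)     # isophotes at the chosen level
    return level, contours
```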
The results of the isophote delineation are shown in FIG. 8. Because of the noisy nature of the signal, some small isolated contours can also be seen, but the algorithm employed in the preferred embodiment ensures that the isophote that is convex and has the largest perimeter will always enclose the FAZ. During the contour generation process to select the desired isophote, the preferred embodiment reports multiple matches (the small isolated contours in FIG. 8) as a list of x- and y-coordinates for each contour. These coordinates can be used in a straightforward way to calculate the perimeter of each contour. The contour that best delineates the FAZ is the one with the largest perimeter.
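Selecting the FAZ contour from the reported matches could then reduce to a perimeter comparison, as in the sketch below (a convexity test, omitted here for brevity, could be added to match the criterion described above); the contour format follows scikit-image's find_contours output.

```python
import numpy as np

def largest_perimeter_contour(contours):
    """Return the candidate FAZ contour with the largest perimeter.

    Each contour is an (N, 2) array of (row, col) points, as returned by
    skimage.measure.find_contours."""
    def perimeter(contour):
        closed = np.vstack([contour, contour[:1]])                  # close the polygon
        return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    return max(contours, key=perimeter)
```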
The region inside the FAZ contour can be isolated from the rest of the contours as illustrated in FIG. 9. Once the isophote distribution has been isolated as described in the previous paragraph, and cleaned by basic morphological operations such as erosion and dilation, or any combination thereof, morphological parameters of the resulting FAZ region can be calculated to quantify the FAZ on a case-by-case basis. As an example of a possible quantification, for the case of FIG. 9, the area is calculated by counting the number of "on" pixels, scaled by the area of each pixel, and is 5675 pixel units. In this case the delineated FAZ has an eccentricity of 0.71 with a major axis of 104.7 pixel units (523.5 microns) and a minor axis of 73.1 pixel units (365.5 microns).
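The morphological cleaning and the quantification of area, eccentricity and axis lengths could be carried out with standard region-property routines, as sketched below; the 5 micron-per-pixel scale matches the worked example above, and the specific opening/closing sequence and scikit-image calls are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def quantify_faz(faz_mask, microns_per_pixel=5.0):
    """Clean a binary FAZ region and report its area, eccentricity and axis lengths."""
    cleaned = ndimage.binary_opening(faz_mask)    # erosion followed by dilation
    cleaned = ndimage.binary_closing(cleaned)     # dilation followed by erosion
    regions = measure.regionprops(measure.label(cleaned))
    if not regions:                               # null detection (cf. the blank-image case below)
        return None
    faz = max(regions, key=lambda r: r.area)
    return {
        "area_px": faz.area,
        "area_um2": faz.area * microns_per_pixel ** 2,
        "eccentricity": faz.eccentricity,
        "major_axis_um": faz.major_axis_length * microns_per_pixel,
        "minor_axis_um": faz.minor_axis_length * microns_per_pixel,
    }
```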
In an alternative embodiment, the remainder of the vascular map (exterior to the delineated FAZ) can be used to calculate capillary density, especially as a function of distance from the center of the fovea, and to generate a map of the density in radially distributed sectors around the fovea as illustrated in FIG. 10. With knowledge of the position of the FAZ in the vascular en face image, radial sectors can be analyzed, and the ratio of "on" to "off" pixels can be used to generate a coarse sectorial vessel density map. The definition of the "on" and "off" pixels can be established by a judicious threshold selection.
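The sectorial density computation might look like the sketch below, where the vessel mask is the thresholded en face image and the sector geometry (eight sectors centered on the FAZ centroid) is an assumption chosen for illustration.

```python
import numpy as np

def sector_density(vessel_mask, center, n_sectors=8, r_inner=0.0, r_outer=None):
    """Fraction of 'on' (vessel) pixels in radial sectors around the fovea.

    `vessel_mask` is a thresholded (boolean) en face image and `center` the
    (row, col) position of the FAZ centroid."""
    h, w = vessel_mask.shape
    rows, cols = np.mgrid[0:h, 0:w]
    dy, dx = rows - center[0], cols - center[1]
    radius = np.hypot(dy, dx)
    angle = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    if r_outer is None:
        r_outer = radius.max()
    densities = []
    for k in range(n_sectors):
        in_sector = ((angle >= 2 * np.pi * k / n_sectors) &
                     (angle < 2 * np.pi * (k + 1) / n_sectors) &
                     (radius >= r_inner) & (radius < r_outer))
        densities.append(vessel_mask[in_sector].mean() if in_sector.any() else 0.0)
    return np.array(densities)   # one "on"/total ratio per sector
```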
For cases in which there is significant encroachment of the capillary network into the FAZ, an alternative embodiment (process C, FIG. 1) allows the use of prior knowledge of the fovea location from the magnitude image, obtained with a fovea detection algorithm (see for example US Patent No. 8,079,711 hereby incorporated by reference), to create a localized histogram and hence confine the search for the vascular-avascular transition zone to a smaller area. In such cases, the FAZ detection by the preferred embodiment on images generated by process A will most likely fail to satisfy the maximum-perimeter criterion. Such a case will be flagged by a null (zero) FAZ detection (i.e., a blank image in FIG. 9) after the morphological cleaning steps. Prior knowledge of the location of the fovea from the fovea detection algorithm helps bootstrap the detection process very close to the expected location of the FAZ. Those skilled in the art will realize that it is possible to modify the control flow to take alternative routes to detecting the FAZ in cases where the vascular pathology at the fovea is severe.
Other clinically significant quantifiable characteristics or metrics that can be derived from the en face vasculature image (from one or more plexuses) or from a 3D OCT angiography data volume include total capillary volume (the sum of all pixels above a given threshold intensity), as well as metrics derived from the blood vessel patterns, such as tortuosity, regularity, segment length, total crossings, number of bifurcations, vessel width parameters, ratio of small to large vessels, capillary density, capillary density ratio between arteriole and venous capillaries, capillary diameter, inter-capillary distance, area bounded by capillary loops, etc. It is also possible to include relationships between metrics derived in different plexuses, such as ratios or differences, or relationships between metrics derived in different regions of the eye, such as the peri-fovea and extra-fovea, the fovea, the papillomacular bundle, and the optic nerve head area. Once a metric is determined, it can be compared to a database of normal eyes or eyes with a known pathology to diagnose or to track progression of a particular disease or condition.
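Comparison against a normative database could be as simple as expressing the measured metric as a z-score, as in the hypothetical sketch below; the database array and the assumption of an approximately normal distribution are illustrative only.

```python
import numpy as np

def metric_z_score(measured_value, normative_values):
    """Express a measured metric (e.g. FAZ area) as a z-score relative to a
    normative database of the same metric from normal eyes."""
    mu = np.mean(normative_values)
    sigma = np.std(normative_values)
    return (measured_value - mu) / sigma if sigma > 0 else 0.0

# Example with hypothetical numbers:
# z = metric_z_score(0.42, np.array([0.27, 0.31, 0.29, 0.33]))
```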
Another way to use this information is to use the foveal avascular zone as a region over which to evaluate other features such as layer thickness or other parameters derived from the OCT image, or other registered images. For instance, the thickness of the photoreceptor layer
within the FAZ should be specifically related to cones rather than rods. Registration to images that contain other information about the photoreceptors, such as adaptive optics images, might allow quantification of multiple metrics that affect the foveal region.
Presentation of the vessel image compared to a simultaneously acquired image using a different modality (such as fundus imaging) would assist in using the vessel data to guide review of other modalities and vice versa. Such a common display is described in US Patent Publication No. 2008/0100612 hereby incorporated by reference.
Visual comparison of the capillary network on different visits (in one or more plexuses) might reveal changes associated with progressive diseases such as diabetic retinopathy (DR).
Longitudinal analysis of any of the metrics discussed above, especially the area of the FAZ, could also reveal progressive damage. The longitudinal analysis could also measure modifications in the retinal vasculature, such as changes in areas of non-perfusion due to ischemia or development of DR-related conditions such as intraretinal microvascular abnormalities (IRMA). After selection of a vessel in one image, the extent of the connecting vessels down to the capillary network could potentially be evaluated serially, in the initial image and in subsequently acquired and registered datasets, to determine if the extent of the vessel network for that particular vessel is expanding with treatment or contracting with worsening pathology.
The preferred embodiment described above refers to the visualization of retinal capillary vessels and metrics associated with these. Similar methods could be applied to the choroidal vasculature, including the choriocapillaris, Sattler's layer and Haller's layer. Although various applications and embodiments that incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise other varied embodiments that still incorporate these teachings. Although the description of the present invention is discussed herein with respect to the sample being a human eye, the applications of this invention are not limited to the eye and can be applied to any sample imaged with OCT.
The following references are hereby incorporated by reference:
Patent References
US Patent No. 6,549,801, Chen et al., "Phase-resolved optical coherence tomography and optical Doppler tomography for imaging fluid flow in tissue with fast scanning speed and high velocity sensitivity"
US Patent No. 7,301,644, Knighton et al., "Enhanced optical coherence tomography for anatomical mapping"
US Patent No. 7,359,062, Chen et al., "High speed spectral domain functional optical coherence tomography and optical Doppler tomography for in vivo blood flow dynamics and tissue structure"
US Patent No. 8,079,711, Stetson et al., "Method for finding the lateral position of the fovea in an SD-OCT image volume"
US Patent Publication No. 2008/0025570, Fingler et al., "Dynamic motion contrast and transverse flow estimation using OCT"
US Patent Publication No. 2008/0100612, Dastmalchi et al., "User interface for efficiently displaying relevant OCT imaging data"
US Patent Publication No. 2011/0034803, Stetson et al., "Non-linear projections of 3-D medical imaging data"
US Patent Publication No. 2010/0027857, Wang, "In vivo structural and flow imaging"
US Patent Publication No. 2012/0249956, Iyer et al., "Systems and methods for efficiently obtaining measurements of the human eye using tracking"
US Patent Publication No. 2012/0277579, Sharma et al., "Inter-frame complex OCT data analysis techniques"
WO 2010/129494, Wang et al., "Method and apparatus for quantitative imaging of blood perfusion in living tissue"
WO 2011/097631, Wang, "Method and apparatus for ultrahigh sensitive optical microangiography"
Non-Patent Literature
Al-Diri et al. "Automated analysis of retinal vascular network connectivity," Computerized Medical Imaging and Graphics, 34, 462-470 (2010).
An et al. "Optical microangiography provides correlation between microstructure and microvasculature of optic nerve head in human subjects," J. Biomed. Opt. 17, 116018 (2012).
Avakian, et al., "Fractal analysis of region-based vascular change in the normal and nonproliferative diabetic retina," Curr. Eye Res. 24, 274-280 (2002).
Fingler et al. "Mobility and transverse flow visualization using phase variance contrast with spectral domain optical coherence tomography" Optics Express. Vol. 15, No. 20. pp 12637 - 12653 (2007).
Fingler et al. "Volumetric microvascular imaging of human retina using optical coherence tomography with a novel motion contrast technique" Optics Express. Vol. 17, No. 24, pp 22190 - 22200 (2009).
Ganesan et al. "Development of an Image-Based Network Model of Retinal Vasculature," Annals of Biomedical Engineering 38(4) 1566-1585 (2010).
Jia et al., "Quantitative OCT angiography of optic nerve head blood flow," Biomed. Opt. Express 3, 3127-3137 (2012).
Kim et al. "In vivo volumetric imaging of human retinal circulation with phase variance OCT," Biomedical Optics Express, 2(6), 1504-1513 (2011).
Kim et al. "Noninvasive Imaging of the Foveal Avascular Zone with High-Speed, Phase-Variance Optical Coherence Tomography," Investigative Ophthalmology & Visual Science, 53(1), 85-92 (2012).
Makita et al., "Optical Coherence Angiography," Optics Express, 14(17), 7821-7840 (2006).
Makita et al., "Comprehensive in vivo micro-vascular imaging of the human eye by dual- beam-scan Doppler optical coherence angiography" Optics Express 19(2) 1271-1283 (2011).
Mariampillai et al., "Optimized speckle variance OCT imaging of microvasculature," Optics Letters 35, 1257-1259 (2010).
Leitgeb, "Imaging of the parafoveal capillary network and its integrity analysis using fractal dimension," Biomed. Opt. Express 2, 1159-1168 (2011).
Leitgeb et al., "Real-time assessment of retinal blood flow with ultrafast acquisition by color Doppler FDOCT," Optics Express, 11, 3116-3121 (2003).
Leitgeb et al. "Ultrahigh resolution Fourier domain optical coherence tomography," Optics Express 12(10):2156 (2004)
Liu et al., "Intensity-based modified Doppler variance algorithm: application to phase instable and phase stable optical coherence tomography systems" Optics Express 19(12), 11429-11440 (2011).
Schmoll et al. "Imaging of the parafoveal capillary network and its integrity analysis using fractal dimension" Biomed. Opt. Express 2, 1159-1168 (2011).
Wang et al., "Frequency domain phase-resolved optical Doppler and Doppler variance tomography" Optics Communications 242 345-350 (2004).
Wang et al, "Three dimensional optical angiography," Optics Express 15, 4083-4097 (2007).
Wang et al., "Depth-resolved imaging of capillary networks in retina and choroid using ultrahigh sensitive optical microangiography," Optics Letters, 35(9), 1467-1469 (2010).
Wang et al. "Imaging Retinal Capillaries Using Ultrahigh-Resolution Optical Coherence Tomography and Adaptive Optics" Invest. Ophthalmol. Vis. Sci. 52. 6292-6299 2011
White et al, "In vivo dynamic human retinal blood flow imaging using ultra-high-speed spectral domain optical Doppler tomography," Optics Express, 11(25), 3490-3497 (2003).
Yazdanfar "Imaging and velocimetry of the human retinal circulation with color Doppler OCT," Optics Letters 25, 1448-1450 (2000).
Yong et al. "Novel Noninvasive Detection of the Fovea Avascular Zone Using Confocal Red- Free Imaging in Diabetic Retinopathy and Retinal Vein Occlusion" Retina. 52: 2649 - 2655 2011
Zhao et al., "Doppler standard deviation imaging for clinical monitoring of in vivo human skin blood flow," Optics Letters 25, 1358-1360 (2000).
Zheng et al., "Automated segmentation of foveal avascular zone in fundus fluorescein angiography," Retina 51(7), 3653-3659 (2010).
Claims
1. An automated method for identifying areas of interest in optical coherence tomography (OCT) image data of an eye, said method comprising:
collecting OCT image data over a plurality of transverse locations of the eye of a patient;
processing the data to identify motion contrast;
generating a two-dimensional en face vasculature image from the processed data;
analyzing the intensity of the en face vasculature image to identify a particular area;
displaying or storing the identified area.
2. A method as recited in claim 1, wherein the identified area is the foveal avascular zone (FAZ).
3. A method as recited in claim 1, wherein the identified area is a region of retinal ischemia.
4. A method as recited in claim 1, further comprising identifying layers in the retina and using these layers as boundaries in generating the en face vasculature image.
5. A method as recited in claim 1, wherein the analysis of the en face image involves generating an intensity histogram from the en face image, identifying the intensities corresponding to peaks in the histogram, determining the transition between the identified area and the rest of the vasculature as a combination of these intensities, and locating, in the en face vasculature image, regions corresponding to these intensity values.
6. A method as recited in claim 1, further comprising dividing the en face vasculature image into sectors and using intensity to determine a proportion of a vessel in a sector in relation to the background.
7. A method as recited in claim 1, further comprising generating a three dimensional representation of the retinal vasculature.
8. A method as recited in claim 1, further comprising using an additional property of the en face vasculature image in addition to intensity to identify the particular area.
9. A method for analyzing disease in optical coherence tomography (OCT) image data, said method comprising:
collecting OCT image data over a plurality of transverse locations of the eye of a patient;
processing the data to identify motion contrast;
determining at least one metric from the processed data that characterizes the vasculature; and
comparing the determined metric to a database of normal eyes or eyes with known pathology to diagnose disease.
10. A method as recited in claim 9, wherein the metric is selected from the group consisting of: foveal avascular zone (FAZ) area, capillary density, capillary volume, vessel tortuosity, vessel regularity, vessel segment length, vessel total crossings, the number of bifurcations, bifurcation angles, vessel width parameters, ratio of small to large vessels, capillary density, capillary diameter, capillary density ratio between arteriole and venous capillaries, ratio of vessel diameters of parent and daughter vessels and inter-capillary distance.
11. A method for visualizing optical coherence tomography (OCT) image data, said method comprising:
collecting OCT image data over a plurality of transverse locations of the eye of a patient;
processing the data to resolve motion contrast; and
generating a 3D visualization of the motion contrast information.
12. A method as recited in claim 11, further comprising selecting a particular vessel and highlighting all the vessels connected to that vessel.
13. A method as recited in claim 12, wherein the particular vessel is selected by the user.
14. A method as recited in claim 12, wherein the connecting vessels are highlighted propagating from the point the operator chooses.
15. A method as recited in claim 13, wherein the speed of highlighting propagation is determined by the vessel diameter.
16. A method as recited in claim 11, further comprising using the axial depth information in the volume to generate a stereoscopic image pair from the 3D vessel map, and presenting the stereoscopic image pair to the user for visualization.
17. A method as recited in claim 11, further comprising registering two 3D visualizations of the same patient taken at different times and comparing them to determine regions of increased or decreased capillary extent.
18. A method for analyzing disease in optical coherence tomography (OCT) image data, said method comprising:
collecting OCT image data over a plurality of transverse locations of the eye of a patient;
processing the data to identify motion contrast;
determining at least one metric from the processed data that characterizes the vasculature; and
comparing the determined metric to a metric determined from OCT image data collected on the same patient at a different time.
19. A method as recited in claim 18, wherein the metric is selected from the group consisting of: foveal avascular zone (FAZ) area, capillary density, capillary volume, vessel tortuosity, vessel regularity, vessel segment length, vessel total crossings, the number of bifurcations, bifurcation angles, vessel width parameters, ratio of small to large vessels, capillary density, capillary diameter, capillary density ratio between arteriole and venous capillaries, ratio of vessel diameters of parent and daughter vessels and inter-capillary distance.