WO2020106792A1 - System and method for retina template matching in teleophthalmology - Google Patents

System and method for retina template matching in teleophthalmology

Info

Publication number
WO2020106792A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
template
retina
baseline
Application number
PCT/US2019/062327
Other languages
French (fr)
Inventor
Eric J. Seibel
Chen Gong
Steven L. BRUNTON
Nils Benjamin ERICHSON
Laura TRUTOIU
Brian T. Schowengerdt
Original Assignee
University Of Washington
Magic Leap, Inc.
Application filed by University Of Washington, Magic Leap, Inc. filed Critical University Of Washington
Priority to CN201980076416.2A priority Critical patent/CN113164041A/en
Priority to JP2021527904A priority patent/JP2022507811A/en
Priority to EP19886194.0A priority patent/EP3883455A4/en
Priority to US17/295,586 priority patent/US20220015629A1/en
Publication of WO2020106792A1 publication Critical patent/WO2020106792A1/en

Classifications

    • A61B 3/12: Objective instruments for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/1208: Multiple lens hand-held instruments
    • A61B 3/0025: Operational features characterised by electronic signal processing, e.g. eye models
    • A61B 3/0058: Display arrangements for multiple images
    • A61B 3/102: Instruments for optical coherence tomography [OCT]
    • A61B 3/14: Arrangements specially adapted for eye photography
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 7/0016: Biomedical image inspection using an image reference approach involving temporal comparison
    • G06V 10/76: Organisation of the matching processes based on eigen-space representations, e.g. from pose or different illumination conditions; shape manifolds
    • G06V 40/193: Eye characteristics; preprocessing; feature extraction
    • G06V 40/197: Eye characteristics; matching; classification
    • A61B 2576/02: Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • G06T 2207/10101: Optical tomography; optical coherence tomography [OCT]
    • G06T 2207/30041: Eye; retina; ophthalmic

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

A retina image template matching method is based on the registration and comparison between images captured with portable, low-cost fundus cameras (e.g., a consumer grade camera of the type typically incorporated into a smartphone or tablet computer) and a baseline image. The method solves the challenges posed by registering the small, low-quality retinal template images captured with such cameras. Our method combines dimension reduction methods with a mutual information (MI) based image registration technique. In particular, principal component analysis (PCA), and optionally block PCA, is used as a dimension reduction method to localize the template image coarsely on the baseline image; the resulting displacement parameters are then used to initialize the MI metric optimization for registration of the template image with the closest region of the baseline image.

Description

System and Method for Retina Template Matching in Teleophthalmology
Priority
This application claims the priority benefit of U.S. Provisional Application Serial No. 62/770,612, filed November 21, 2018, the content of which is incorporated by reference herein.
Field
This disclosure relates to systems and methods for matching and registering a small field of view image of the fundus to a previously obtained baseline, wide field of view image. The baseline image can take the form of a wide field of view fundus image obtained by a fundus camera, e.g., of the type found in an eye clinic. Alternatively, the baseline image could be a composite or mosaic image of some or all of the fundus assembled from previously obtained stitched or registered images. The term “baseline image” is intended to be interpreted to cover either situation.
In this document, the term “template,” or alternatively “template image,” is used to refer to the small field of view image to be matched and registered with the baseline image. “Teleophthalmology” is used to refer to the practice of monitoring the retinal health and visual performance of a patient remotely, i.e., with the patient not physically present in the traditional eye clinic. The monitoring could be done by the patient with the aid of one of the portable fundus cameras of this disclosure and without direct input from their eye doctor, or by the remotely-located patient and their eye doctor using computer networks to share information and fundus images.
Background
Recently, teleophthalmology has been facilitated by the ability of consumer grade cameras, such as those found in smartphones, to obtain images of the fundus. Images of the fundus captured by a person in the home setting can be sent over computer networks and studied by the person’s eye doctor, thereby allowing the monitoring of the health of the eye without the person physically making a trip to the eye clinic. In addition, the emerging virtual and mixed reality sector may enable new teleophthalmology scenarios for head-worn eye imaging and monitoring. By combining these emerging technologies with advanced image processing algorithms, long-term or longitudinal monitoring of health can be provided for the first time by imaging the retina in the everyday life of the person. Examples of these new portable fundus camera (PFC) systems are commercially available from many sources or described in the literature, including devices which are designed to be combined with a smartphone. See, e.g., US patent no. 8,836,778; a digital fundus camera product incorporating a smartphone, known as “D-Eye,” see https://www.d-eyecare.com; and RN Maamari, et al., “A mobile phone-based retinal camera for portable wide field imaging,” British Journal of Ophthalmology 98(4):438 (2014). Other portable laser-based imaging systems are forthcoming as they become the mainstay of clinical ophthalmic imaging, including portable scanning laser ophthalmoscopes (PSLO), see US patent no. 6,758,564, and portable optical coherence tomography (POCT) systems, see US patent no. 7,648,242. Unlike fundus cameras that use visible light imaging, these laser-based retinal imaging systems typically use infrared wavelengths of light. Additionally, augmented reality headsets can be adapted with cameras and ancillary optical components to image the fundus.
The term “portable fundus camera” is intended to refer to any portable, e.g., handheld or head-worn, device designed or adapted to capture images of the fundus, and is interpreted to encompass the devices described above and in the above patent and scientific literature.
Retinal template matching and registration is an important challenge in teleophthalmology with these low-cost and portable imaging devices. It allows the regular screening and comparison of retina changes by matching the template images captured with low-quality imaging devices onto a previously obtained large field of view (FOV) baseline image. Changes between the current and prior images can indicate disease progression or improvement, or the onset of disease. Typically, the images from such low-cost devices are low in quality and have a much smaller FOV, because the pupil is not dilated as it would be at the eye clinic. Furthermore, with lower-cost detectors, lower power light sources, and untrained users, the acquired images suffer many different quality degradations. These attributes of new portable retinal imaging devices present major challenges to matching the small FOV images (i.e., “templates”) to the large FOV or panoramic baseline image of the retina for determining changes in the health status of the user.
Retina image registration approaches can be classified into area-based and feature-based methods. Feature-based methods optimize the correspondence between extracted salient objects in retina images. See, e.g., C. V. Stewart, et al., “The dual-bootstrap iterative closest point algorithm with application to retinal image registration,” IEEE Transactions on Medical Imaging, vol. 22, no. 11, pp. 1379-1394, 2003. Typically, bifurcations, the fovea, and the optic disc are common features used for retinal image registration. A small FOV template has little probability of containing specific landmarks on the retina, so the fovea and optic disc are not applicable. Vascular bifurcations are more common, but similarly, the small number of bifurcations in a template cannot form the basis of a robust registration. Moreover, the extraction of the vascular network in poor quality images is difficult, which can cause ambiguous vascular directions when labelling the bifurcations. General feature point based approaches have also been applied to retina registration, such as SIFT-based methods (see Y. Wang, et al., “Automatic fundus images mosaic based on SIFT feature,” in Image and Signal Processing (CISP), 2010 3rd International Congress on, vol. 6, IEEE, 2010, pp. 2747-2751; C.-L. Tsai, et al., “The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence,” IEEE Transactions on Medical Imaging, vol. 29, no. 3, pp. 636-649, 2010) and SURF-based methods (G. Wang, et al., “Robust point matching method for multimodal retinal image registration,” Biomedical Signal Processing and Control, vol. 19, pp. 68-76, 2015; C. Hernandez-Matas, et al., “Retinal image registration based on keypoint correspondences, spherical eye modeling and camera pose estimation,” in Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, IEEE, 2015, pp. 5650-5654).
These approaches can register the images in complex scenarios and are computationally efficient. They assume the feature point pairs can be reliably detected and matched to estimate the transformation. Although feasible in most cases, the process can fail on low-quality retina images without enough distinct features.
Area-based approaches match the intensity differences of an image pair under a similarity measure, such as SSD (sum of squared differences) (K. J. Friston, et al., “Spatial registration and normalization of images,” Human Brain Mapping, vol. 3, no. 3, pp. 165-189, 1995), CC (cross-correlation) (see A. V. Cideciyan, “Registration of ocular fundus images: an algorithm using cross-correlation of triple invariant image descriptors,” IEEE Engineering in Medicine and Biology Magazine, vol. 14, no. 1, pp. 52-58, 1995), and MI (mutual information) (see Y.-M. Zhu, “Mutual information-based registration of temporal and stereo retinal images using constrained optimization,” Computer Methods and Programs in Biomedicine, vol. 86, no. 3, pp. 210-215, 2007), and then optimize the similarity measure by searching in the transformation space. By avoiding pixel-level feature detection, such approaches are more robust to poor quality images than feature-based approaches. However, retina images with sparse features and similar backgrounds are likely to lead the optimization into local extrema. Additionally, when the size difference between the template and the full image is too large, registration with mutual information (MI, described below) can be computationally very expensive.
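By way of illustration only, the MI similarity measure referred to above can be estimated from the joint intensity histogram of an image pair. The following is a minimal numpy sketch of such an estimator; it is not taken from the patent, and the bin count is an illustrative choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in estimate of mutual information between two equally sized
    grayscale images, computed from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability p(x, y)
    px = pxy.sum(axis=1)               # marginal p(x)
    py = pxy.sum(axis=0)               # marginal p(y)
    denom = px[:, None] * py[None, :]  # product of marginals
    nz = pxy > 0                       # skip zero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / denom[nz])))
```

In an area-based registration loop, an optimizer would search the transformation space for the displacement that maximizes this quantity, since MI peaks when the two images are aligned.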
Small, low-quality retina template images consist largely of homogeneous nonvascular surfaces that resemble the homogeneous nonvascular surfaces present in other areas of the retina, which makes current retina image registration methods inapplicable. To overcome these challenges in retina template matching, a method is disclosed in this document for matching template images from low-cost imaging devices onto a baseline image. This approach is an improvement over the area-based method with an MI metric, since it more reliably achieves accurate and robust template matching near the alignment position. This is also the first time that a retina template matching method has been proposed in teleophthalmology for remote retina health monitoring.
Summary
A retina image template matching method, referred to herein as “RetinaMatch,” which is particularly suitable for remote retina health monitoring, is disclosed. The methods of this disclosure can be especially useful in rural areas where access to clinics and regular eye care is limited by distance and the difficulty and cost of travel. The retina monitoring is based on the registration and comparison between the images remotely captured with portable low-cost fundus cameras (PLFCs, e.g., a consumer grade camera such as a camera incorporated into a smartphone or tablet computer) and a baseline image. RetinaMatch solves the challenges posed by registering small and low-quality retinal template images captured with such cameras.
Our method combines dimension reduction methods with a mutual information (MI) based image registration technique. In particular, principal component analysis (PCA) and optionally block PCA are used as a dimension reduction method to localize the template image coarsely on the baseline image; then the resulting displacement parameters are used to initialize the MI metric optimization for registration of the template image with the closest region of the baseline image. With the initialization near the optimal position, the transformation search space for optimization is narrowed significantly.
We also disclose methods of constructing a panorama or mosaic image from a set of individual template images. Dimension reduction can also be implemented in the process of template image mosaicking, which accelerates the matching of overlapped image patches. Additionally, a new image mosaicking method is presented using the coarse alignment methodology discussed herein: PCA is used to determine the adjacent images to be stitched, and MI-based registration is applied to adjacent image pairs.
In one specific embodiment, a method is disclosed for monitoring a retina of a subject. The method includes the steps of (a) obtaining a set of small field of view (FOV) (“template”) images of the retina captured with a portable fundus camera; (b) matching the template images to a previously captured wide FOV baseline image of the retina, using dimension reduction for the baseline image and template images and a mutual information registration method for registering the template images to portions of the baseline image; and (c) comparing the registered set of template images to the baseline image to detect any differences between the registered set of template images and the baseline image, wherein any differences indicate occurrence or change of a condition of the retina or the subject. For example, the differences can indicate progression (e.g., a worsening or improvement) of a disease or condition, subject response to treatment or therapy, onset of an eye disease such as glaucoma, or the onset or progression of disease generally in the subject, such as diabetes. The change can be part of a progression, or alternatively independent of any detected trend or progression. Example applications of the method in the context of teleophthalmology are explained in detail later in this document.
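Purely as an illustration of the comparison in step (c), a registered template could be compared per-pixel against the matching baseline region after intensity normalization. The function below and its threshold value are hypothetical sketches, not taken from the disclosure:

```python
import numpy as np

def changed_regions(template, baseline_region, thresh=0.2):
    """Flag pixels whose normalized intensity difference between a
    registered template and the matching baseline region exceeds a
    threshold (an illustrative value, not from the patent)."""
    # Zero-mean, unit-variance normalization reduces sensitivity to
    # illumination differences between capture sessions.
    t = (template - template.mean()) / (template.std() + 1e-8)
    b = (baseline_region - baseline_region.mean()) / (baseline_region.std() + 1e-8)
    return np.abs(t - b) > thresh  # boolean mask of candidate changes
```

In practice a clinical system would use a more robust change measure, but this conveys the idea: once templates are registered onto the baseline, differences become a simple pixelwise comparison.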
In another aspect, a computer-implemented method of registering a narrow field of view template image to a wide field of view, previously obtained baseline image is disclosed. As explained previously, the baseline image could be a single image, for example obtained from a conventional fundus camera in an eye clinic, or a mosaic of previously obtained images. The method includes the steps of:
(1) cropping the baseline image into a multitude of smaller offset target images;
(2) applying a dimension reduction method to map the offset target images to a representation in a lower dimensional space;
(3) mapping the template image into the lower dimensional space using the dimension reduction method;
(4) finding the corresponding nearest target image for the template image in the lower dimensional space;
(5) registering the template image to the nearest target image;
(6) identifying the location of the template image on the baseline image based on the position of the nearest target image; and
(7) registering the template image to the baseline image at the location identified in step (6).
In still another aspect, a novel portable fundus camera is contemplated, which includes a camera, an optical device coupled to the camera facilitating collection of images of the interior of the eye, a processing unit, and a memory storing instructions for the processing unit, the instructions in the form of code for performing the procedure recited in the previous paragraph. In this embodiment the portable fundus camera includes the software for matching the template to the baseline image. In one configuration, the camera is incorporated in a smartphone or tablet computer. In another configuration, the camera is incorporated into a head-mounted virtual or augmented reality unit.
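The cropping, dimension reduction, and coarse localization steps of the registration method above can be sketched compactly in numpy. This is an illustrative implementation under simplifying assumptions (an exact SVD rather than the randomized PCA mentioned later in the disclosure, and a pure nearest-neighbor search); the function names are our own, and the final MI-based refinements of steps (5) and (7) are omitted:

```python
import numpy as np

def crop_targets(baseline, size, offset):
    """Step (1): crop the baseline image into overlapping target patches
    laid out on a regular grid with the given pixel offset."""
    H, W = baseline.shape
    patches, origins = [], []
    for y in range(0, H - size + 1, offset):
        for x in range(0, W - size + 1, offset):
            patches.append(baseline[y:y + size, x:x + size].ravel())
            origins.append((y, x))
    return np.stack(patches), origins

def fit_pca(patches, k):
    """Step (2): learn a k-dimensional PCA mapping from the target patches."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:k]  # mean vector and principal axes

def coarse_localize(template, patches, origins, mean, axes):
    """Steps (3), (4), and (6): project the template into the PCA space,
    find the nearest target patch, and return its baseline location."""
    Z = (patches - mean) @ axes.T            # targets in the low-dim space
    z = (template.ravel() - mean) @ axes.T   # template in the same space
    nearest = int(np.argmin(np.linalg.norm(Z - z, axis=1)))
    return origins[nearest]
```

The returned (y, x) origin would then seed the MI metric optimization that registers the template first to the nearest target image and then to the baseline image.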
In still another aspect, a method for assembling a wide field of view mosaic image from a multitude of small field of view images is disclosed. The method includes steps of:
(a) mapping the small field of view images X = X1, X2, ..., Xn to a lower dimensional space using Principal Component Analysis (PCA);
(b) for each of the small field of view images Xi:
(1) finding the nearest-neighbor small field of view image(s) by minimizing the feature distance Δ(Zi, Zj), where Zi and Zj represent the principal components of the ith and jth images Xi and Xj; and
(2) computing the Mutual Information (MI) between each Xi and the nearest neighbor(s) found in step (1) and designating as the adjacent image the image with the highest MI; and
(c) aligning at least some of the adjacent images determined from step (b)(2) using an MI-based registration method.
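Steps (a) and (b) above can be sketched as follows, assuming numpy and a small histogram-based MI estimate; the shortlist size m is an illustrative parameter, and the final MI-based alignment of step (c) is omitted:

```python
import numpy as np

def pca_embed(images, k):
    """Step (a): map the flattened images into a k-dimensional PCA space."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # one k-dimensional row per image

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def adjacent_image(i, images, Z, m=3):
    """Step (b): shortlist the m nearest neighbors of image i in PCA space
    (b)(1), then designate as adjacent the shortlisted image with the
    highest MI (b)(2)."""
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf  # exclude the image itself
    shortlist = np.argsort(d)[:m]
    return int(max(shortlist, key=lambda j: mutual_information(images[i], images[j])))
```

The PCA shortlist keeps the number of expensive MI evaluations small: MI is computed only for a handful of candidate neighbors per image rather than for all image pairs.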
Brief Description of the Drawings
The appended drawing figures are offered by way of example and not limitation of currently preferred embodiments of this disclosure.
Figure 1 is an illustration of one example of a teleophthalmology environment in which the features of this disclosure can be practiced.
Figure 2 illustrates the overview of our template matching method, including four panels or processing steps:
(a) create a multitude of offset target images from a full/baseline image;
(b) create low-dimensional representations of each of the target images using PCA;
(c) perform coarse localization of the template image: find the nearest target image in the low-dimensional space; and
(d) MI-based registration of the template and the nearest target image, and locating the template image on the baseline image.
Figure 3 is a flow chart of the sequence of processing instructions representing panels (a) and (b) of Figure 2. These processing steps can be done in a pre-computation or “offline” manner, i.e., in advance of the template matching and registration steps of panels (c) and (d) of Figure 2.
Figure 4 is a flow chart of a sequence of processing instructions representing panels (c) and (d) of Figure 2. These processing steps can be done in an “on-line” manner, i.e., at the time the template images are acquired. The processing steps can be executed in the device acquiring the template images (e.g., a portable fundus camera, smartphone, etc.) or in a remote processing unit, such as a computer workstation in an eye clinic that receives the template images from the device the patient uses to acquire them.
Figure 5 is a flow chart of the block PCA step 308 in Figure 4.
Figure 6 is a flow chart of a sequence of processing instructions for constructing a mosaic or panorama of template images.
Figure 7 is a low-dimensional representation of block PCA showing the mapping of template patches onto target image patches using the procedure of Figure 5. The T dictionary for each target image saves the information of the open-circled dots in the figure.
Detailed Description
This document discloses an efficient and accurate retinal matching system and method combining dimension reduction and mutual information (MI); we refer to the technique herein as RetinaMatch. By way of overview, the dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The disclosed system and method outperforms existing template matching solutions. In addition, we disclose a system and method for image mosaicking with area-based registration, providing a robust approach which may be used when feature-based methods fail. To the best of our knowledge, this is the first template matching technique for retina images using small template images from unconstrained retinal areas.
Our approach is an improvement over area-based matching methods with the MI metric, since it more reliably achieves accurate and robust template matching near the alignment position. One unique aspect of our approach is that we combine dimension reduction methods with the MI-based registration to reduce the interference of local extrema and improve the matching efficiency.
An example of the practical use of our method in monitoring the retina in a teleophthalmology setting is shown in Figure 1. A subject 10 has a portable fundus camera, in this example in the form of a smartphone 12 with ancillary apparatus 14 adapted for imaging the eye (for example, the D-Eye device). The subject’s eye need not be chemically dilated. The subject 10 holds the camera 12 and apparatus 14 up to their eye and captures a series of small field of view images, perhaps 30 to 50 in all. In one configuration, the smartphone 12 includes a processing unit and memory storing instructions for implementing the template matching procedure of Figure 2 (discussed below). In particular, the smartphone captures a series of template images, registers them to a baseline image stored in the phone, and conducts a comparison between the two to determine if differences exist, where the differences can indicate an occurrence or change of a condition of the retina in the interim. The changes can be reported to the subject 10, e.g., via a template matching app that performs the comparison. In the event that some condition has developed or worsened based on the comparison, the subject can alert their eye doctor and/or send the mosaicked template images via the cellular network 16 and internet 18 to the eye clinic. The eye clinic includes a workstation 24 where the eye doctor can view the currently obtained mosaicked image 26 as well as stored prior mosaicked or baseline images 28, make comparisons, make measurements of particular retinal features, etc., and coordinate treatment or further evaluation of the subject. While the above description indicates that the subject 10 could be a patient, that is not necessarily the case; the subject could be any user, e.g., a user of a virtual reality (VR) headset, see the discussion in the applications section below.
It is also contemplated that the template images captured by the smartphone 12 could be sent over the network 16, 18 to a computing system 22 in the eye clinic and the processing steps of Figure 2 could be performed in the eye clinic computing system 22, including the off-line processing steps (see discussion below), template matching and registration of template images to the baseline image. This configuration may be suitable when the portable fundus camera used by the patient 10 has limited processing power. The portable fundus camera may have the capability to connect to a computer network to share the images with the eye clinic, either directly (e.g., using WIFI) or the images can be otherwise downloaded to another device such as a personal computer or smartphone and then transmitted to the eye clinic for processing. Specific examples of the applications of the retinal template matching in a teleophthalmology setting will be discussed at length later in this document.
With the above description in mind, one of the principal aspects of this disclosure is the application of dimension reduction methods with the MI-based registration to reduce the interference of local extrema and improve the matching efficiency.
Figure 2 illustrates an overview and schematic of the retinal template matching method in four panels, (a) to (d). In panel (a), a wide-FOV full or baseline image 100 is sampled, or cropped, into many overlapping or offset “target” images 102, 104, etc. The arrows 106 and 108 indicate that the cropping or creation of target images occurs in the X and Y directions, such that the full image 100 shown in the dashed lines is cropped into offset, smaller target images similar to images 102 and 104. In panel (b), each target image (shown as images 102A, 102B, 102C, 102D, etc.) is mapped into a low-dimensional space according to its positional relationship using PCA. The dots P1, P2, P3, P4 show the low-dimensional representations of the images 102A, 102B, 102C, 102D in the low-dimensional space. (It will be understood that panel (b) shows only a two-dimensional space, but in practice the representations may be made in a low-dimensional space of, say, 20 dimensions, depending on the implementation of PCA.) In panel (c), a small field of view template image 110 is also mapped into this space using PCA (represented by the dot 112) and its nearest target image (image 102B, represented by the dot P2) is found. In panel (d), the template image 110 is registered to its nearest target image 102B with mutual information (MI). Specifically, in panel (c) principal component analysis (PCA) and block PCA are used to localize the template image 110 coarsely; then the resulting displacement parameters are used to initialize the MI metric optimization for the registration procedure of panel (d). The initial parameters provided by the coarse localization lie within the convergence domain of the MI metric. With the initialization near the optimal position, the transformation search space for optimization is narrowed significantly. The PCA computations shown in panels (b) and (c) are accelerated with randomized methods, which improve the coarse localization efficiency.
As shown in panel (d), the template 110 is located on, or matched to, the full or baseline image 100 using the location of its nearest target image 102B.
The process of Figure 2 panels (c) and (d) repeats for all the template images 110 that are captured; they are all registered to their nearest target image and then located onto the baseline image. After completion of the panel (d) process for all template images 110, a comparison between the registered template images and the baseline image can be performed. Additionally, the template images can be mosaicked into a new baseline image such that when a subsequent set of template images is obtained they can be compared to the updated baseline image created from a previous iteration of the procedure of Figure 2.
The procedures shown in panels (a) and (b) of Figure 2 can be pre-processed when the full (or baseline) image is obtained, while panels (c) and (d) can be considered an "online" stage in which the template images are acquired in real time, or alternatively come into the clinic from the patient in a remote location. The procedures of panels (a)-(d) could all be executed in the processing unit of the portable fundus camera, smartphone, etc. that is used to acquire the template images. Alternatively, some or all of the processing steps of panels (a)-(d) could be done on a remote computing platform, such as a workstation in an eye clinic as shown in Figure 1. The schematic of Figure 2 describes the method without the block PCA improvement for finding the nearest target image. Figure 5 shows the details of block PCA and will be discussed in detail below.
With the above explanation in mind, attention will now be directed to Figures 3 and 4, in which the steps of panels (a)-(d) are described in further detail. Figure 3 illustrates the detailed steps 200 of the pre-processed part, referring to panels (a) and (b) of Figure 2. The input is the large-FOV full image F (100), which at step 202 is split or cropped into image patches (target images) Ii with a certain offset, e.g., 10 pixels. At step 204, a dimension reduction process (e.g., PCA) is performed on the target images, mapping them into the low-dimensional space (Figure 2 panel (b)); the output is the set of low-dimensional representations Z of F's patches. At step 206 this low-dimensional representation is saved in computer memory.
Figure 4 illustrates the sequence of processing steps 300 for a template image, referring to panels (c) and (d) of Figure 2. Together, they form a matching step of dimension reduction for the template image and registration using mutual information. The process has two parts, coarse localization (steps 302 and 304, Figure 2 panel (c)) and accurate registration (steps 308, 310 and 312, Figure 2 panel (d)). The input is the template image S to be matched (110), and the output is the mapped template on the full image F, after step 312 is performed. In step 302, S is mapped to the low-dimensional space. At step 304, we find the nearest target representation in the low-dimensional space, 306. At step 308 we use block PCA to update the target image region to I**. At step 310 we perform an accurate registration between S and I** with an MI metric. At step 312 we determine the location of the template image on the full image F based on the position of the updated target image region I**. A nearest target image I* of S on the full image is obtained in the coarse localization, and image I* and S have a large overlap. Block PCA is used to update I* to I** on the full image F, gaining more overlap with S. The template S can then be matched on F based on the location of I**.
A specific embodiment of the procedure of Figure 2 is set forth below.
1. Figure 2, panel (a): create target images from the full (baseline) image
We define the full image and template as F and S, respectively. The full image F is split into target images I1, I2, ..., IN:

Ii = φ(bi, F).

The function φ crops the target image Ii from F at bi, with bi = [xi, yi, h, w], where (xi, yi) denotes the center position and (h, w) denotes the height and width of the cropped image. There is a certain displacement f of neighboring target images along the x and y axes. As shown in Fig. 2 panel (a), each target image has a large overlap with its neighbors. The overlap forms the redundancy of the data which can indicate the location distribution between each image and its neighbors. By applying dimension reduction techniques on such data, as explained below, we can obtain the low-dimensional distribution map of all target images.
The target images are resized to vectors and form the matrix X ∈ R^(n×d).
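The cropping function φ and the offset sampling can be sketched in a few lines of NumPy. The function name `crop_targets`, the 32×32 patch size, and the stand-in image are illustrative assumptions, not taken from the patent:

```python
# Sketch of Figure 2 panel (a): crop a full image F into overlapping target
# images with a fixed offset, then stack their vectorized forms into X.
import numpy as np

def crop_targets(F, h=32, w=32, offset=10):
    """Crop h x w target images from F every `offset` pixels in x and y,
    returning the center positions b_i and the matrix X of vectorized patches."""
    H, W = F.shape
    centers, patches = [], []
    for y in range(0, H - h + 1, offset):
        for x in range(0, W - w + 1, offset):
            centers.append((x + w // 2, y + h // 2))         # center (x_i, y_i)
            patches.append(F[y:y + h, x:x + w].reshape(-1))  # vectorize patch
    return centers, np.stack(patches)                        # X is n x d

F = np.arange(100 * 100, dtype=float).reshape(100, 100)      # stand-in full image
centers, X = crop_targets(F)
```

With a 10-pixel offset and 32×32 patches, neighboring target images share most of their area, which is the redundancy the dimension reduction exploits.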
2. Figure 2 panel (b): create low dimensional representations of the target images with PCA
Dimension reduction methods allow the construction of low-dimensional summaries while eliminating redundancies and noise in the data. To estimate the template location in the 2D space, the full image dimension is redundant; thus we apply dimension reduction methods for the coarse localization of the template.
Generally, we can distinguish between linear and nonlinear dimension reduction techniques. The most prominent linear technique is PCA. PCA is selected as the dimension reduction method in RetinaMatch since it is simple and versatile. Specifically, PCA forms a set of new variables as a weighted linear combination of the input variables. Consider a matrix X = [x1, x2, ..., xn] of dimension n × d, where n denotes the number of observations and d is the number of variables. Further, we assume that the matrix X is column-wise mean centered. The idea of PCA is to form a set of uncorrelated new variables (referred to as principal components) as a linear combination of the input variables:
zj = X wj,   (1)

where zj is the j-th principal component (PC) and wj is the weight vector. The first PC explains most of the variation in the data; the subsequent PCs then account for the remaining variation in descending order. Thereby, PCA imposes the constraint that the weight vectors are orthogonal. This problem can be expressed compactly as the following minimization:

minimize ||X − ZW^T||_F^2   subject to W^T W = I,

where ||·||_F is the Frobenius norm. The weight matrix W that maps the input data to a subspace turns out to be the matrix of right singular vectors of the input matrix X. Often a low-rank approximation is desirable, e.g., we compute the k dominant PCs using a truncated weight matrix Wk = [w1, w2, ..., wk], where k is some integer, such as 20.
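The minimization above is solved by the singular value decomposition. A minimal sketch on synthetic data (variable names illustrative):

```python
# PCA via the SVD: the truncated weight matrix Wk holds the k dominant right
# singular vectors, and Z = X @ Wk gives the principal components.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
X = X - X.mean(axis=0)        # column-wise mean centering, as assumed above

k = 20
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Wk = Vt[:k].T                 # d x k truncated weight matrix [w1, ..., wk]
Z = X @ Wk                    # n x k matrix of principal components
```

Because the singular values are sorted in descending order, the variance explained by each column of Z is also descending, matching the ordering of the PCs described above.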
PCA is generally computed by the singular value decomposition (SVD). Many algorithms have been developed to streamline the computation of the SVD or PCA for high-dimensional data that exhibits low-dimensional patterns, see J. N. Kutz, et al., Dynamic mode decomposition: data-driven modeling of complex systems. SIAM, 2016, vol. 149. In particular, tremendous strides have been made accelerating the SVD and related computations using randomized methods for linear algebra. See the references 24-31 cited in the manuscript portion of the priority U.S. provisional application. Since we have demonstrated high performance with fewer than 20 principal components, the randomized SVD is used to compute the principal components, improving the efficiency in this retinal mapping application for mobile device platforms (e.g., smartphone, tablet). The randomized algorithm proceeds by forming a sketch Y of the input matrix
Y = XΩ,

where Ω is a d × l random test matrix, say with independent and identically distributed standard normal entries. Thus, the l columns of Y are formed as a randomly weighted linear combination of the columns of the input matrix, providing a basis for the column space of X. Note that l is chosen to be slightly larger than the desired number of principal components. Next, we form an orthonormal basis Q using the QR decomposition Y = QR. Now, we use this basis matrix to project the input data matrix to a low-dimensional space
B = Q^T X.
This smaller matrix B of dimension l × d can then be used to efficiently compute the low-rank SVD and subsequently the dominant principal components. Given the SVD of B = UΣV^T, we obtain the approximate principal components as

Z = QUΣ = XV.

Here, U and V are the left and right singular vectors and the diagonal elements of Σ are the corresponding singular values. The approximation accuracy can be controlled via additional oversampling and power iterations.
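The randomized procedure above (sketch, QR, small SVD) can be sketched as follows; the synthetic near-rank-k data, the symbol names, and the specific sizes are illustrative assumptions:

```python
# Randomized SVD sketch: Y = X @ Omega, orthonormalize with QR, project to
# B = Q.T @ X, take the small SVD of B, and recover Z = Q U Sigma = X V.
import numpy as np

rng = np.random.default_rng(1)
n, d, k, l = 300, 100, 10, 15          # l slightly larger than k (oversampling)
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))  # ~rank-k data
X = X - X.mean(axis=0)                 # column-wise mean centering

Omega = rng.standard_normal((d, l))    # random test matrix
Y = X @ Omega                          # sketch of X, n x l
Q, _ = np.linalg.qr(Y)                 # orthonormal basis for Y's column space
B = Q.T @ X                            # small l x d matrix
U, s, Vt = np.linalg.svd(B, full_matrices=False)
Z = (Q @ U[:, :k]) * s[:k]             # approximate PCs: Z = Q U Sigma
```

Because the toy data is essentially rank k < l, the sketch captures the column space almost exactly and the leading singular values of B match those of X to high accuracy.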
Referring again to panel (b) of Figure 2, in our particular implementation, we obtain the low-dimensional representation of the target image distribution by implementing PCA on X:

Z = XW,

where Z = [z1, z2, ..., zN]^T ∈ R^(n×l), W ∈ R^(d×l) and l ≪ d. The image space W1 is mapped to a low-dimensional space W2 with the mapping W. W and Z are saved in memory, in what we have called a "dictionary", D.
It is important to note that PCA is sensitive to outliers, occlusions, and corruption in the data. In ophthalmological imaging applications, there are several potential sources of corruption and outliers when imaging the full image, including blur, uncorrected astigmatism, inhomogeneous illumination, glare from crystalline lens opacity, internal reflections (e.g., from the vitreoretinal interface and lens), transient floaters in the vitreous, and shot noise in the camera. Further, there is often a trade-off between illumination and image quality, and there is strong motivation to introduce as little light as necessary for patient comfort and health. The robust principal component analysis (RPCA) was introduced specifically to address this issue, decomposing a data matrix into the sum of a matrix containing low-rank coherent structure and a sparse matrix of outliers and corrupt entries. In general, RPCA is more expensive than PCA, requiring an iterative optimization to decompose the original matrix into sparse and low-rank components. Each step of the iteration is as expensive as regular PCA, and typically on the order of tens of iterations are required; however, PCA may be viewed as an offline step in our procedure, so this additional computational cost is manageable. RPCA has been applied with success in retinal imaging applications to improve image quality. In the examples presented in this work, the data appears to have few enough outliers that RPCA is not necessary, although it is important to keep RPCA as an option for data with outliers and corruption. Further details on RPCA are contained in the references cited in the manuscript portion of our prior provisional application.
3. Figure 2 panel (c): coarse localization - find the nearest target image in the low-dimensional space
Given a template S, its coarse position can be estimated by recognizing its nearest target image. The nearest target image in the image space W1 should also give the nearest representation of S in the lower-dimensional space W2. Accordingly, we obtain the low-dimensional feature zs of the template in W2:

zs = sW,

where s ∈ R^d is the reshaped (vectorized) template S. Let Δ(zs, z) be the Euclidean distance between zs and a feature z in Z. Then z* is the nearest target feature of the source image S in W2:

z* = arg min_z Δ(zs, z).
The corresponding target image location is used as the coarse location of S. Ideally, the difference between the coarse location and the ground truth in the x and y axes should be less than f/2 pixels.
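The nearest-feature search can be sketched as follows; `coarse_localize` and the toy data are illustrative, not the patent's implementation:

```python
# Coarse localization: map the vectorized template into the low-dimensional
# space with the saved mapping W, then pick the target image whose feature z
# is nearest in Euclidean distance.
import numpy as np

def coarse_localize(s, W, Z, centers):
    """s: vectorized template; W: d x l mapping; Z: n x l target features;
    centers: target image center positions on the full image."""
    zs = s @ W                               # map template into space W2
    dists = np.linalg.norm(Z - zs, axis=1)   # Euclidean distance to each feature
    i_star = int(np.argmin(dists))           # index of nearest target feature z*
    return centers[i_star], i_star

# toy example: 5 "target images" of dimension 100, 3 principal components
rng = np.random.default_rng(2)
X = rng.standard_normal((5, 100))
W = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2][:3].T
Z = X @ W
center, idx = coarse_localize(X[3], W, Z, [(10 * i, 10 * i) for i in range(5)])
```

Feeding one of the target vectors back in as the template maps it exactly onto its own feature, so the search returns that target's stored center.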
In one of the experiments we performed, PCA outperformed other nonlinear dimension reduction methods, while the error was still larger than f/2. The main reason is that image degradation creates spurious features that contribute to the final classification. To reduce the influence of local features, we implement block PCA to further improve the accuracy of the coarse localization. By computing the PCA of different local patches in the template, the effect of local features, which cannot be located correctly, is reduced. This procedure is shown in Figure 5. The inputs are the template S (110) and the nearest target image I from the coarse localization (102B). To reduce the effect of local deformation in the coarse localization, S and I are each split into small patches (steps 402A, 402B) and PCA is applied on the small patches (steps 404, 406). Similar to the coarse localization, the nearest target patch is determined for each template patch (step 408). The average position of all chosen target patches is computed as the new center position of I (step 410) and the position of I is updated (step 412).
Having obtained the nearest target image, we crop a larger image at the same position from the full image as the new target image I. In this way, the template can have more overlap with the new target image when there is a large offset between the two images. We segment I and the template S into small patches with the cropping function φ, where the patch size is smaller than the source image, with an axial displacement f′ between neighboring patches. Similarly, all image patches from I are mapped into the low-dimensional space W3 with a mapping W′. Let Z′ denote the low-dimensional representation of the target image patch distribution. Each template patch is then mapped to the space with W′. The nearest target patch for each template patch is determined with the Euclidean distance as described before. We use the same weight for each region of the template for localization. Let bm be the mean of the coordinates of the selected nearest target patches, which then represents the center of the template on I. Accordingly, the template location on the full image can be estimated and the region is cropped as the image S̃. We store the representation of each of the target image patches in the lower-dimensional space in memory, referred to as "dictionary" T. The accurate registration is then applied to the template S and the image S̃. In this way the coarse localization provides a good initial point for the accurate registration (panel (d) of Figure 2).
In the implementation of the proposed coarse localization, the full (baseline) image is assumed to exist, so the dictionary D and the dictionary T for each target image can be built in advance. This is the pre-computed part shown in Fig. 2 panels (a) and (b).
Figure 7 is a low-dimensional representation of block PCA showing the mapping of template patches (represented by solid dots) onto target image patches (represented by open circled dots) using the procedure of Figure 5. The T dictionary for each target image saves the information of the open circled dots in the figure.
Example processing instructions for coarse localization:
1 Map template S into space W2: zs = sW.
2 Determine closest target image I with corresponding z*: z* = arg min_z Δ(zs, z), z* ∈ Z.
3 Segment S into [Sp^1, Sp^2, ..., Sp^n]: Sp^i = φ(b_i, S); segment I into [Ip^1, Ip^2, ..., Ip^n]: Ip^i = φ(b_i, I).
4 Map target patches Ip^i into space W3: Z′ = Ip W′, where Ip is formed with the vectorized Ip^i.
5 For each template patch Sp^i:
6 (i) Map Sp^i into space W3: z′s^i = sp^i W′.
7 (ii) Determine its closest target patch and record its index: j* = arg min_j Δ(z′s^i, z′^j).
8 b_m = mean of the coordinates of the selected target patches.
9 Return localization region S̃ = φ(b_m, F), cropping the region centered at b_m from F.
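The block PCA refinement of Figure 5 may be sketched roughly as follows, with hypothetical helper names (`split_patches`, `block_pca_center`) and toy patch sizes chosen for illustration:

```python
# Block PCA refinement: split the template S and the (enlarged) target image I
# into small patches, find each template patch's nearest target patch in the
# low-dimensional space W3, and average the chosen patch coordinates to get
# the updated center b_m of the template on I.
import numpy as np

def split_patches(img, p, step):
    """Return center coordinates and vectorized p x p patches every `step` px."""
    coords, vecs = [], []
    H, W = img.shape
    for y in range(0, H - p + 1, step):
        for x in range(0, W - p + 1, step):
            coords.append((x + p // 2, y + p // 2))
            vecs.append(img[y:y + p, x:x + p].reshape(-1))
    return np.array(coords), np.stack(vecs)

def block_pca_center(S, I, p=8, step=4, k=5):
    tc, Sv = split_patches(S, p, step)          # template patches
    ic, Iv = split_patches(I, p, step)          # target patches
    W = np.linalg.svd(Iv - Iv.mean(axis=0), full_matrices=False)[2][:k].T
    Zp = Iv @ W                                  # target patches in W3
    Zs = Sv @ W                                  # template patches in W3
    nearest = [int(np.argmin(np.linalg.norm(Zp - z, axis=1))) for z in Zs]
    return ic[nearest].mean(axis=0)              # mean coordinate b_m

rng = np.random.default_rng(3)
I = rng.standard_normal((32, 32))
S = I[8:24, 8:24]                                # template cut from the target
bm = block_pca_center(S, I)
```

Since the toy template is cut directly from the target, each template patch has an exact counterpart, and the averaged coordinate recovers the template's true center on I.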
4. Figure 2 panel (d): accurate registration and location of template onto baseline
Panel (d) of Figure 2 includes two sub-steps: (1) image registration between the template image and the nearest target image found in the procedure of Figure 2 panel (c), and (2) locating the template onto the full (or baseline) image.
(1) Image registration between template and nearest target image using Mutual Information (MI) (Figure 4 step 310)
In this section, we describe the maximization of MI for multimodal image registration. We define images S and S̃ as the template and target images, respectively. A transform u is defined to map pixel locations x ∈ S̃ to pixel locations in S.
The main idea of the registration is to find a deformation u at each pixel location x that maximizes the MI between the deformed template image S(u(x)) and the target image S̃(x). Accordingly,

û = arg max_u MI( S(u(x)), S̃(x) ),

where

MI( S(u(x)), S̃(x) ) = Σ_{i1,i2} p(i1, i2) log [ p(i1, i2) / ( p(i1) p(i2) ) ].
Here, i1 and i2 are the image intensity values in S(u(x)) and S̃(x), respectively, p(i1) and p(i2) are their marginal probability distributions, and p(i1, i2) is their joint probability distribution. The joint distribution p(i1, i2) reflects the degree to which the greyscale (image intensity) values of each pixel in S(u(x)) and S̃(x) are similar; p(i1, i2) has a high value (closer to 1) if the pixel values are similar, and a low value (closer to 0) if the pixel values are dissimilar. In more detail, in terms of mutual information based on discrete data like images, each pixel has a grayscale value from 0 to 255. (Although examples herein may describe the use of grayscale images for the fundus image work, embodiments are not so limited and may also employ color images as appropriate.) We first compute the joint histogram of the two images: the joint histogram is 256×256, and counts the number of corresponding pixel grayscale pairs from the two images. For example, if at the first pixel one image has a grayscale of 100 and the other 120, then the joint histogram entry (100, 120) is incremented by one. After the joint histogram is complete, the joint probability p(i1, i2) can be obtained by normalizing the joint histogram. Then the marginal probabilities are computed according to:
p(i1) = Σ_{i2} p(i1, i2),   p(i2) = Σ_{i1} p(i1, i2).
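The joint-histogram MI computation described above can be sketched directly; the function name and test images are illustrative:

```python
# MI between two uint8 images: build the 256 x 256 joint histogram of
# corresponding pixel grayscales, normalize to p(i1, i2), sum out each axis
# for the marginals, and accumulate p12 * log(p12 / (p1 * p2)).
import numpy as np

def mutual_information(A, B):
    """A, B: uint8 images of identical shape."""
    joint = np.zeros((256, 256))
    np.add.at(joint, (A.reshape(-1), B.reshape(-1)), 1)   # count grayscale pairs
    p12 = joint / joint.sum()              # joint probability p(i1, i2)
    p1 = p12.sum(axis=1)                   # marginal p(i1) = sum over i2
    p2 = p12.sum(axis=0)                   # marginal p(i2) = sum over i1
    nz = p12 > 0                           # skip empty bins (0 log 0 = 0)
    return float(np.sum(p12[nz] * np.log(p12[nz] / (p1[:, None] * p2[None, :])[nz])))

a = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)
mi_self = mutual_information(a, a)         # MI of an image with itself (= entropy)
mi_rand = mutual_information(a, a[::-1])   # MI with a scrambled counterpart
```

An image's MI with itself equals its entropy (the joint distribution collapses onto the diagonal), which upper-bounds the MI with any mismatched image; this is the property the registration exploits when maximizing over the deformation u.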
(2) locate the template onto the full image (Figure 4 step 312)
In this step images S and S̃ are accurately registered by maximization of mutual information, as per sub-step (d)(1) above. The location of image S̃ on the full image F becomes the estimated displacement of the template S. In our work, the transform u for alignment is given as an affine transformation, expressed in homogeneous coordinates as:

u(x) = [ a11  a12  tx ]
       [ a21  a22  ty ] x.
       [  0    0    1 ]
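The affine transform in homogeneous coordinates can be illustrated with made-up parameters (a 10° rotation plus a translation of (5, 3) — purely illustrative values, not from the patent):

```python
# Affine alignment in homogeneous coordinates: a 3x3 matrix of the form
# [[a11, a12, tx], [a21, a22, ty], [0, 0, 1]] applied to pixel columns (x, y, 1).
import numpy as np

theta = np.deg2rad(10.0)                              # hypothetical rotation
A = np.array([[np.cos(theta), -np.sin(theta), 5.0],   # [a11 a12 tx]
              [np.sin(theta),  np.cos(theta), 3.0],   # [a21 a22 ty]
              [0.0,            0.0,           1.0]])  # homogeneous row

pts = np.array([[0.0,  0.0, 1.0],                     # pixel (0, 0)
                [10.0, 0.0, 1.0]]).T                  # pixel (10, 0); columns are (x, y, 1)
u = A @ pts                                           # transformed pixel locations
```

The bottom homogeneous row stays fixed at (0, 0, 1), so transformed points remain valid homogeneous coordinates and transforms compose by matrix multiplication.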
It will be appreciated that the processing to create the target images and map them into the lower-dimensional space (panels (a) and (b) of Figure 2) can be done off-line, e.g., in advance of receiving a set of template images from a patient. The processing of Figure 2 panels (c) and (d) described above could be said to be "online", performed at the time of collecting the images in the portable fundus camera, or of receiving the images from the patient's device at an eye clinic. Once the procedure of Figure 2 has been performed and the template images matched to the baseline image, a mosaic of the entire retina can be created from the template images, and the differences between the current retinal image mosaic and the baseline image ascertained from a comparison, e.g., in order to monitor the subject's health or check for onset, progression or improvement in eye disease (see the Applications section below).
Image mosaicking
Figure 6 illustrates an overview of a new image mosaicking method based on the dimension reduction idea. Given a series of images to be stitched (110), PCA can map them into a low-dimensional space (step 502), where it is easy to find nearby images with overlap. The image registration method with the MI metric is then applied to adjacent image pairs iteratively to stitch all the images.
As pointed out previously, the full retina image can be stitched into a panorama from many small templates. Users must capture a series of images in naturally unconstrained eye positions to explore different regions of the retina. With area-based registration approaches it is problematic to determine adjacent images before registration, because at that stage there may be no effective descriptors for matching.
Related to the dimension reduction in the proposed template matching method, here we present the procedure shown in the table below to learn the positional relationship of images to be stitched. In this way, the adjacent images can be recognized and registered efficiently.
For a series of small images xi, we form the matrix X. PCA is applied to X and returns the low-dimensional features for each image in W2. The distance between features in W2 indicates the distance between images. We find the nearest N (e.g., N = 3) target neighbors in the low-dimensional space. The nearest neighbor xj of image xi is the one with the largest overlap; the image pair is then registered with the MI-based approach. To improve the algorithm's robustness, the first N nearest neighbors of each image are first selected to compute MI with, and we keep the one with the largest metric value. The above procedure can be represented in the following pseudocode.
Processing instructions: Image stitching (with reference to Figure 6)
1 Map images into space W2: Z = XW. (step 502)
2 For each image x_i:
3 (i) Find the nearest N (e.g., N = 3) neighbors x_j minimizing the feature distance Δ(z_i, z_j). (step 504)
4 (ii) Compute the mutual information between each x_j and x_i, and take the adjacent image with the highest MI. (steps 506, 508)
5 Panorama R mosaicking: align all the adjacent images with the mutual information based registration method. (step 510)
6 Panorama blending. (step 512)
7 Return mosaicked panorama R. (step 514)
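The neighbor-selection core of the stitching pseudocode can be sketched as follows. This illustrative version uses a 32-bin MI estimate on already-aligned images and a near-duplicate toy example rather than true overlapping retina crops; all names are assumptions:

```python
# Stitching neighbor selection: PCA features pick the N nearest candidates
# cheaply, and MI breaks the tie by keeping the candidate with the largest
# metric value.
import numpy as np

def mi(a, b, bins=32):
    """Simple binned MI estimate between two same-shape float images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p12 = h / h.sum()
    p1, p2 = p12.sum(axis=1), p12.sum(axis=0)
    nz = p12 > 0
    return float(np.sum(p12[nz] * np.log(p12[nz] / np.outer(p1, p2)[nz])))

def best_neighbor(imgs, i, N=3, k=5):
    X = np.stack([im.ravel() for im in imgs]).astype(float)
    W = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2][:k].T
    Z = X @ W                                    # low-dimensional features
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                                # exclude the image itself
    cand = np.argsort(d)[:min(N, len(imgs) - 1)] # N nearest in feature space
    return int(max(cand, key=lambda j: mi(imgs[i], imgs[j])))

rng = np.random.default_rng(5)
a0 = rng.standard_normal((32, 32))
imgs = [a0,
        a0 + 0.01 * rng.standard_normal((32, 32)),  # near-duplicate of image 0
        rng.standard_normal((32, 32))]              # unrelated image
j = best_neighbor(imgs, 0)
```

The near-duplicate image wins on both criteria: it is closest in the PCA feature space and yields the highest MI, so it is selected as the adjacent image for registration.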
Applications
Our method of template matching with baseline images and image mosaicking allows for longitudinal comparisons with previously obtained fundus images of the patient. Such longitudinal comparisons have several applications in the field of ophthalmology as will be described below. Such applications are examples of how the methods of this disclosure can be practiced in a teleophthalmology setting. Other suitable applications are also supported by the embodiments described herein, including options outside of the field of retinal template matching.
Hypertension
In the retinal manifestation of hypertension, the larger arteries constrict and the venous vessels enlarge in diameter. Ophthalmologists can select several detection points on the vessels. With the captured images coming from the patient as per Figure 1, we construct a mosaic image of the fundus and can detect those images which cover the selected detection points. The vessel width at the selected points can then be compared with the previous state by making measurements of vessel width and comparing them with previously stored fundus images of the patient. For more precise vessel width measurement, our method of Figure 2 can be combined with vessel segmentation. The vessel width corresponding to each selected point is obtained by segmentation around the mapped location. The vessel segmentation here is applied on very small retina patches around the point, which is more robust and accurate than segmentation of wide-FOV retina images.
Abusive Head Trauma

Biomarkers of abusive head trauma (AHT) are another example. The most common retinal manifestation of AHT is multiple retinal hemorrhages in multiple layers of the retina. By matching the captured images onto the full retina image, the hemorrhagic spots can be easily segmented after subtraction of the current retina regions from the previous status. AHT can then be recognized automatically when such spots are detected. This method permits identification of AHT from images obtained with portable fundus cameras.
Diabetic Retinopathy
The obvious symptoms of diabetic retinopathy (DR) are retinal hemorrhages and the presence of exudate. They can be monitored following a process similar to that of AHT screening.
Glaucoma
Glaucoma can cause the optic nerve cup to enlarge. Our matching method can automatically select the images that cover the optic nerve. Segmentation can then be easily implemented and a computation of the optic cup diameter performed. Enlargement of the optic nerve cup over time can be ascertained by comparing the computations from a current image with an image from a previous point in time.
Use RetinaMatch as a General Image Template Matching Method
Besides retina images, the technique of RetinaMatch can be used in other types of image template matching tasks. Note that the method of Figure 2 does not use features specific to the retina. Rather, it is a combination of coarse localization and accurate localization based on MI. The accurate localization can be replaced by any other existing image registration method, and the coarse localization can always reduce the error caused by small template size and sparse image features. Thus, the procedure of Figure 2 is generally applicable to the problem of matching small field of view images to a previously obtained wide field of view image.
Use RetinaMatch for camera localization
Having the image of the full view, the method of Figure 2 can be used for camera localization by matching the captured field of view onto the full or baseline image. In the case of endoscopic guidance of therapy by a surgical robot, the current limited-size FOV can be matched onto the panorama for endoscope localization. Thus, this image template matching technique can be used to create a more reliable closed-loop control for the robot arm and surgical tool guidance. For example, after registering the template images the resulting mosaicked image can be inspected, e.g., to locate a surgical tool in the eye.
Augmented Reality (AR), Eye Glasses, etc. and monitoring changes over time
A retinal imaging system (e.g., a consumer-grade camera with an ancillary imaging device, e.g., D-eye) can be portable and, further, can be worn as integrated into, for example, glasses, or an Augmented Reality (AR), Virtual Reality (VR) and/or Mixed Reality (MR) headset, allowing a series of images to be taken and analyzed, either daily, weekly, monthly, or when the user or ophthalmologist requests. These measurements can be discrete or continual, but form a time series and can be analyzed longitudinally over the increasing time period. Change in a retina can be detected by registering and comparing the captured small-FOV images to a full baseline retina image using our template matching method.
AR, VR and/or MR devices can be used to optically scan the retina to form images and thereby acquire the template images. Even more pragmatically, spectacles or sunglasses can be used because of the smaller size, lower costs, and increasing utility to the user. A scanned light beam entering the pupil of the eye and striking the retina to form video rate images perceived by the user’s brain can also be used to acquire images of high contrast structures, such as the vasculature containing blood.
A device can operate without major changes in performance during its lifetime and can be used as a monitor of the condition of a user's eye. By comparing retinal images from such a device over time, the changes in the user's optical system (such as cornea, intraocular lens, retina, and liquid environments) can be monitored to alert the user to possible health changes. For example, these changes can be gradual, like increasing light scattering from the crystalline or intraocular lens due to cataract formation, or the appearance and structural changes in the retina due to diabetic retinopathy. In addition, chronic diseases such as hypertension, which may produce variations over time in blood vessel size and shape, are another example. Acute changes such as bleeding within the retina can indicate brain trauma. Relative and repeatable changes in the number, size, and shape of structures in the retinal images may indicate that the measured change is due to a particular disease type and not that the AR, VR, MR, glasses, or other type of monitoring device has slowly or suddenly changed its imaging performance or has become unstable.
However, in many healthy users the optical system will be unchanging over time. In this case, the vasculature of the retina can be used as a test target for detecting optical misalignments, focus errors, light scanning errors and distortions, non-uniform and color-imbalanced illumination, and aberrations in the imaging system. This situation can occur if the monitoring device, such as an AR, VR, or MR device, is degraded due to mechanical impact, breakage, applied stresses, applied vibration, thermal changes, and opto-electrical disruption or interference. These changes can be observed as a measurable change in the current retinal images compared to before these changes happened to the AR, VR or MR device. Retinal vasculature images can be used to measure the level of image distortion within an imaging system by resolving a specific pattern of high contrast lines. By processing the retinal images or their panoramic mosaic into binary (black and white) high contrast by intensity thresholding and/or segmentation, the vascular network can be made into a RetinaTest Target.
By measuring the change in the images of the RetinaTest Target before and after a change in performance of the AR, VR or MR device, a calibration measurement of imaging performance can be made dynamically. This calibration measurement can be transmitted to a local computing device or to a remote location for analysis and diagnosis of the change of performance of the AR, VR or MR device. Furthermore, the calibration measurement can be updated when corrective actions are implemented within the AR, VR or MR device, and can be used in a feedback loop as an error signal for the purpose of regaining optimal performance of the AR, VR or MR device. Since blood has a distinct optical absorption spectrum in the arteries and veins and scattering differences can be determined, the calibration of imaging performance should be performed across the spectral range of visible to near-infrared wavelengths used by the AR, VR or MR device.
Gaze tracking
The acquisition of template images and registration onto a baseline image as described above can be further used to determine the gaze position of the user. In particular, as the user's gaze changes position, the angle between the optical axis of the camera and the fovea or other structures at the back of the eye will change accordingly, and by measuring the shift in this angle the gaze position can be determined.
While the above discussion has been directed primarily to detecting changes in the retina and monitoring for change, progression, occurrence etc. of eye disease, more generally the present methods can be used to monitor for other conditions (e.g., diabetes, etc.) that are not retinal conditions per se, but that may be measured in the retina. Furthermore, our methods can be also used to monitor improvement in a condition of the retina, for example, monitor effectiveness of a treatment or therapy, in addition to detecting onset or worsening of disease.
Other applications are of course possible as would be apparent to one skilled in the art.
The manuscript portion of our priority US provisional application includes data regarding experiments we conducted using our template matching method, including validation on a set of simulated images from the STARE dataset, and in-vivo template images captured from the D-eye smartphone device matched to full fundus images and mosaicked full images. The interested reader is directed to that portion of the provisional application for further details.
As used in the claims, the term“head-worn retinal imaging device” is intended to refer broadly to any device worn or supported by the head which includes a detector or camera and associated optical components designed for imaging the retina, including but not limited to glasses, and augmented, mixed or virtual reality headsets. As another example, devices which include scanned light (from laser or LED) display using a near-infrared (NIR) wavelength can also be a camera with the addition of a fast NIR detector, and such a device could be adapted as a head-worn retinal imaging device.
The particulars shown herein are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of various embodiments of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for the fundamental understanding of the invention, the description taken with the drawings and/or examples making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
As used herein and unless otherwise indicated, the terms "a" and "an" are taken to mean "one", "at least one" or "one or more". Unless otherwise required by context, singular terms used herein shall include pluralities and plural terms shall include the singular. Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to". Words using the singular or plural number also include the plural and singular number, respectively. Additionally, the words "herein," "above," and "below" and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of the application.

Claims

What is claimed is:
1. A method for monitoring a retina of a subject, comprising the steps of:
(a) obtaining a set of small field of view (FOV) (“template”) images of the retina captured with a portable fundus camera,
(b) matching the template images to a previously captured wide FOV baseline image of the retina using dimension reduction for the baseline image and template images and a mutual information registration method for registering the template images to portions of the baseline image; and
(c) comparing the registered set of template images to the baseline image to detect any differences between the registered set of template images and the baseline image, wherein any differences indicate occurrence or change of a condition of the retina or the subject.
2. The method of claim 1, wherein in step (b) the baseline image is cropped into a multitude of smaller offset target images, each of the template images and the target images are converted to a representation in a lower dimensional space by Principal Component Analysis, and wherein in the matching step (b) a coarse localization step is performed to find the nearest target image in the lower dimensional space to each of the template images, followed by registration of the template images and their nearest target image using the mutual information registration method and location of the template images onto the baseline image.
3. The method of claim 1 or claim 2, wherein the portable fundus camera is selected from the group of devices consisting of: a head-worn retinal imaging device; a fundus camera and flying spot(s) scanner design which further includes an optical coherence tomography (OCT) feature; a portable scanning laser ophthalmoscope; a special purpose hand-held device including a camera adapted for imaging the retina; and a camera embodied in a smartphone or tablet computer configured with apparatus to assist in taking a photograph of the eye.
4. The method of any of claims 1-3, further comprising the step of updating the baseline image with a mosaic composed of the template images registered to the baseline image.
5. The method of any of claims 1-4, wherein the condition is selected from the group of medical conditions consisting of hypertension, abusive head trauma, diabetic retinopathy, and glaucoma.
6. A computer-implemented method of registering a narrow field of view template image to a wide field of view, previously obtained, baseline image, comprising the steps of:
(1) cropping the baseline image into a multitude of smaller offset target images;
(2) applying a dimension reduction method to map the offset target images to a representation in a lower dimensional space;
(3) mapping the template image into the lower dimensional space using the dimension reduction method;
(4) finding the corresponding nearest target image for the template image in the lower dimensional space;
(5) registering the template image to the nearest target image;
(6) identifying the location of the template image on the baseline image based on the position of the nearest target image; and
(7) registering the template image to the baseline image at the location identified in step (6).
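By way of a non-limiting illustration (and not as a definition of the claimed method), steps (1)-(4) and (6) of claim 6 can be sketched in Python; the patch size, stride, and synthetic baseline below are assumptions of the example only, and the MI refinement of steps (5) and (7) is omitted:

```python
import numpy as np

def crop_targets(baseline, th, tw, stride):
    """Step (1): crop the baseline into a grid of offset target patches."""
    H, W = baseline.shape
    targets, positions = [], []
    for y in range(0, H - th + 1, stride):
        for x in range(0, W - tw + 1, stride):
            targets.append(baseline[y:y + th, x:x + tw].ravel())
            positions.append((y, x))
    return np.array(targets), positions

def fit_pca(X, k):
    """Step (2): PCA via SVD; returns the mean and top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def coarse_localize(template, targets, positions, mu, components):
    """Steps (3), (4), (6): project the template and the targets into the
    lower dimensional space and return the nearest target's position."""
    Z = (targets - mu) @ components.T
    z = (template.ravel() - mu) @ components.T
    i = int(np.argmin(np.linalg.norm(Z - z, axis=1)))
    return positions[i]

# usage: plant a known patch in a synthetic baseline and recover its location
rng = np.random.default_rng(0)
baseline = rng.random((64, 64))
template = baseline[16:32, 24:40]            # ground-truth offset (16, 24)
targets, positions = crop_targets(baseline, 16, 16, stride=8)
mu, comps = fit_pca(targets, k=8)
print(coarse_localize(template, targets, positions, mu, comps))  # → (16, 24)
```

In practice the target patches would be cropped from the wide-FOV baseline fundus image and the template would be a small-FOV image from the portable camera; searching in the k-dimensional PCA space is far cheaper than comparing full-resolution patches, which is the purpose of the coarse localization step.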
7. The method of claim 6, wherein the baseline image comprises a fundus image.
8. The method of claim 7, wherein the template image comprises an image captured by a portable fundus camera.
9. The method of claim 8, wherein the portable fundus camera comprises a camera embodied in a smartphone or tablet computer configured with apparatus to assist in taking a photograph of the eye.
10. The method of claim 9, wherein the processing steps of claim 6 are performed in a processing unit in the smartphone or tablet computer.
11. The method of claim 6, wherein the registration in step (5) employs a mutual information procedure.
12. The method of claim 6, wherein the dimension reduction method of steps (2) and (3) comprises Principal Component Analysis.
13. The method of claim 6, wherein in step (4) the finding is performed using block Principal Component Analysis.
14. The method of claim 7, wherein the fundus image is obtained without chemical dilation of the pupil of the subject.
15. The method of claim 6, further comprising the step of determining the gaze position of the subject.
16. The method of claim 6, further comprising the step of locating a surgical tool in the eye from the registered template images.
17. A portable fundus camera, comprising:
a camera;
an optical device coupled to the camera facilitating collection of images of the interior of the eye;
a processing unit; and
a memory storing instructions for the processing unit, the instructions in the form of code for performing the procedure recited in any of claims 6-15.
18. The portable fundus camera of claim 17, wherein the camera is incorporated in a smartphone or tablet computer.
19. The portable fundus camera of claim 17, wherein the camera is incorporated into a head-mounted virtual or augmented reality unit.
20. A method for assembling a wide field of view mosaic image from a multitude of small field of view images, comprising the steps of:
(a) mapping the small field of view images X = X1, X2, ..., Xn to a lower dimensional space using PCA;
(b) for each of the small field of view images Xi: (1) finding the nearest neighbor(s) among the small field of view images by minimizing the feature distance Δ(Zi, Zj), where Zi, Zj represent the principal components of the i-th and j-th images Xi and Xj; and
(2) computing the Mutual Information (MI) between each Xi and the nearest neighbor(s) found in step (1) and designating as the adjacent image that image with the highest MI; and
(c) aligning at least some of the adjacent images determined from step (b)(2) using an MI-based registration method.
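As a non-limiting illustration of steps (b)(2) and (c) of claim 20, a histogram-based MI estimate and the highest-MI adjacency selection can be sketched as follows; the bin count and the toy images are assumptions of the example:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the Mutual Information (MI) between two
    equally sized image patches (step (b)(2) of the method)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal distribution of a
    py = p.sum(axis=0, keepdims=True)      # marginal distribution of b
    nz = p > 0                             # avoid log(0) on empty bins
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def most_adjacent(xi, candidates):
    """Among nearest-neighbor candidates, designate as the adjacent image
    the one with the highest MI relative to xi."""
    return max(range(len(candidates)),
               key=lambda j: mutual_information(xi, candidates[j]))

# usage: a noisy copy of xi shares far more information with xi than an
# unrelated random image does, so it is selected as the adjacent image
rng = np.random.default_rng(0)
xi = rng.random((32, 32))
noisy = xi + 0.05 * rng.standard_normal((32, 32))
unrelated = rng.random((32, 32))
print(most_adjacent(xi, [unrelated, noisy]))  # → 1
```

The same MI metric drives both the adjacency decision sketched here and the subsequent fine registration of step (c), where it would be maximized over candidate geometric transforms rather than simply compared across candidate images.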
21. The method of claim 20, wherein the small field of view images comprise fundus images.
22. A portable retinal monitoring system configured to monitor the retina over time and to detect a change in the retina by registering and comparing captured small field of view (FOV) images to a previously captured wide FOV baseline image of the retina.
23. The system of Claim 22 comprising a portable fundus camera (PFC).
24. The system of Claim 22 comprising a head-worn retinal imaging device.
25. The system of Claim 24 wherein the head-worn retinal imaging device comprises a fundus camera and flying spot(s) scanner design which further includes optical coherence tomography (OCT).
26. The system of Claim 22 wherein template matching includes the registering and comparing captured small FOV images to a previously captured wide FOV baseline image of the retina and further comprises a coarse localization with dimension reduction and an accurate registration using a Mutual Information (MI) metric.
27. The system of Claim 26 wherein the template matching is configured to match varying quality of images.
28. The system of Claim 26 further comprising a dimension reduction capability, which comprises a randomized singular value decomposition to increase the efficiency of the Principal Component Analysis (PCA) method.
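As a non-limiting illustration of the randomized singular value decomposition referenced in claim 28, the standard randomized range-finder construction can be sketched as follows; the matrix sizes, rank, and oversampling parameter are assumptions of the example:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate top-k SVD via a randomized range finder: sketch A against
    a small random matrix, orthonormalize the result, then take the exact SVD
    of the much smaller projected matrix."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A @ Omega)
    U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_hat)[:, :k], s[:k], Vt[:k]

# usage: a rank-5 matrix is recovered almost exactly from the sketch
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 120))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(err < 1e-10)
```

The speedup over a full SVD comes from never decomposing the full matrix: only the small (k + oversample)-column sketch is factored, which is what makes PCA over many cropped target patches tractable on portable hardware.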
29. The system of Claim 22 wherein the retinal image is acquired by a portable fundus camera (PFC).
30. The system of Claim 22 wherein the retinal image is acquired by an optical scanning device, such as a portable scanning laser ophthalmoscope (PSLO).
31. The system of Claim 22 wherein the retinal image is acquired by an optical coherence measuring device, such as portable optical coherence tomography (POCT).
32. The system of Claim 22 wherein the full baseline retina image is updated by the captured small FOV retina images.
33. An image mosaicking method utilizing dimension reduction in matching adjacent images and a Mutual Information (MI)-based registration method registering adjacent image pairs for mosaicking.
34. A template matching method configured to perform template matching of general images.
35. A system comprising one or more components as described and/or illustrated herein.
36. A device comprising one or more elements as described and/or illustrated herein.
37. A method comprising one or more steps as described and/or illustrated herein.
38. A non-transitory computer readable medium having computer executable instructions stored thereon that, if executed by one or more processors of a computing device, cause the computing device to perform one or more steps as described and/or illustrated herein.

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980076416.2A CN113164041A (en) 2018-11-21 2019-11-20 System and method for retinal template matching in remote ophthalmology
JP2021527904A JP2022507811A (en) 2018-11-21 2019-11-20 Systems and methods for retinal template matching in remote ophthalmology
EP19886194.0A EP3883455A4 (en) 2018-11-21 2019-11-20 System and method for retina template matching in teleophthalmology
US17/295,586 US20220015629A1 (en) 2018-11-21 2019-11-20 System and method for retina template matching in teleophthalmology

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862770612P 2018-11-21 2018-11-21
US62/770,612 2018-11-21

Publications (1)

Publication Number Publication Date
WO2020106792A1 true WO2020106792A1 (en) 2020-05-28

Family

ID=70774267

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/062327 WO2020106792A1 (en) 2018-11-21 2019-11-20 System and method for retina template matching in teleophthalmology

Country Status (5)

Country Link
US (1) US20220015629A1 (en)
EP (1) EP3883455A4 (en)
JP (1) JP2022507811A (en)
CN (1) CN113164041A (en)
WO (1) WO2020106792A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022129591A1 (en) * 2020-12-17 2022-06-23 Delphinium Clinic Ltd. System for determining one or more characteristics of a user based on an image of their eye using an ar/vr headset
CN115409689A (en) * 2021-05-28 2022-11-29 南京博视医疗科技有限公司 Multi-mode retina fundus image registration method and device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3760967A3 (en) * 2019-07-02 2021-04-07 Topcon Corporation Method of processing optical coherence tomography (oct) data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020036750A1 (en) * 2000-09-23 2002-03-28 Eberl Heinrich A. System and method for recording the retinal reflex image
US6758564B2 (en) 2002-06-14 2004-07-06 Physical Sciences, Inc. Line-scan laser ophthalmoscope
US20070252951A1 (en) * 2006-04-24 2007-11-01 Hammer Daniel X Stabilized retinal imaging with adaptive optics
US7648242B2 (en) 2006-05-01 2010-01-19 Physical Sciences, Inc. Hybrid spectral domain optical coherence tomography line scanning laser ophthalmoscope
US20120229764A1 (en) 2011-03-10 2012-09-13 Canon Kabushiki Kaisha Ophthalmologic apparatus and control method of the same
US20130208241A1 (en) * 2012-02-13 2013-08-15 Matthew Everett Lawson Methods and Apparatus for Retinal Imaging
US20140198300A1 (en) 2013-01-16 2014-07-17 Canon Kabushiki Kaisha Ophthalmic Apparatus and Ophthalmic Method
US8836778B2 (en) 2009-12-04 2014-09-16 Lumetrics, Inc. Portable fundus camera
US20150110348A1 (en) 2013-10-22 2015-04-23 Eyenuk, Inc. Systems and methods for automated detection of regions of interest in retinal images
US20170076136A1 (en) * 2014-05-27 2017-03-16 Samsung Electronics Co., Ltd. Image processing method and apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6009212A (en) * 1996-07-10 1999-12-28 Washington University Method and apparatus for image registration
US8081808B2 (en) * 2007-11-08 2011-12-20 Topcon Medical Systems, Inc. Retinal thickness measurement by combined fundus image and three-dimensional optical coherence tomography
US20100080425A1 (en) * 2008-09-29 2010-04-01 Board of regents of the Nevada System of Higher Education, on Behalf of the University of Minutiae-based template synthesis and matching
EP2563206B1 (en) * 2010-04-29 2018-08-29 Massachusetts Institute of Technology Method and apparatus for motion correction and image enhancement for optical coherence tomography
AU2012219026B2 (en) * 2011-02-18 2017-08-03 Iomniscient Pty Ltd Image quality assessment
JP5930450B2 (en) * 2013-09-06 2016-06-08 Necソリューションイノベータ株式会社 Annotation device and annotation system
CN105934193A (en) * 2013-12-23 2016-09-07 Rsbv有限责任公司 Wide field retinal image capture system and method
NZ773822A (en) * 2015-03-16 2022-07-29 Magic Leap Inc Methods and systems for diagnosing and treating health ailments
JP2017104343A (en) * 2015-12-10 2017-06-15 キヤノン株式会社 Image processing apparatus, image processing method and program
WO2018125812A1 (en) * 2017-01-02 2018-07-05 Gauss Surgical, Inc. Tracking surgical items with prediction of duplicate imaging of items


Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
A. V. CIDECIYAN: "Registration of ocular fundus images: an algorithm using cross-correlation of triple invariant image descriptors", IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE, vol. 14, no. 1, 1995, pages 52 - 58
C. HERNANDEZ-MATAS ET AL.: "Retinal image registration based on keypoint correspondences, spherical eye modeling and camera pose estimation", ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2015 37TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE, IEEE, 2015, pages 5650 - 5654, XP032811446, DOI: 10.1109/EMBC.2015.7319674
C. V. STEWART ET AL.: "The dual-bootstrap iterative closest point algorithm with application to retinal image registration", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 22, no. 11, 2003, pages 1379 - 1394, XP001230753, DOI: 10.1109/TMI.2003.819276
C.-L. TSAI ET AL.: "The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 29, no. 3, 2010, pages 636 - 649
G. WANG ET AL.: "Robust point matching method for multimodal retinal image registration", BIOMEDICAL SIGNAL PROCESSING AND CONTROL, vol. 19, 2015, pages 68 - 76
J. N. KUTZ ET AL.: "Dynamic mode decomposition: data-driven modeling of complex systems", SIAM, vol. 149, 2016
K. J. FRISTON ET AL.: "Spatial registration and normalization of images", HUMAN BRAIN MAPPING, vol. 3, no. 3, 1995, pages 165 - 189, XP055607978, DOI: 10.1002/hbm.460030303
RN MAAMARI ET AL.: "A mobile phone-based retinal camera for portable wide field imaging", BRITISH JOURNAL OF OPHTHALMOLOGY, vol. 98, no. 4, 2014, pages 438, XP009182475, DOI: 10.1136/bjophthalmol-2013-303797
See also references of EP3883455A4
Y. WANG ET AL.: "Automatic fundus images mosaic based on sift feature", IMAGE AND SIGNAL PROCESSING (CISP), 2010 3RD INTERNATIONAL CONGRESS, vol. 6, 2010, pages 2747 - 2751, XP031809914
Y.-M. ZHU: "Mutual information-based registration of temporal and stereo retinal images using constrained optimization", COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, vol. 86, no. 3, 2007, pages 210 - 215, XP022054449, DOI: 10.1016/j.cmpb.2007.02.007


Also Published As

Publication number Publication date
EP3883455A1 (en) 2021-09-29
CN113164041A (en) 2021-07-23
EP3883455A4 (en) 2022-01-05
JP2022507811A (en) 2022-01-18
US20220015629A1 (en) 2022-01-20


Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 19886194; country of ref document: EP; kind code: A1)
ENP: entry into the national phase (ref document number: 2021527904; country of ref document: JP; kind code: A)
NENP: non-entry into the national phase (ref country code: DE)
ENP: entry into the national phase (ref document number: 2019886194; country of ref document: EP; effective date: 20210621)