US9454817B2 - Relating to image processing - Google Patents

Relating to image processing

Info

Publication number
US9454817B2
US9454817B2
Authority
US
United States
Prior art keywords
image data
clusters
predetermined value
magnitude
intensity gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/654,249
Other versions
US20150324966A1 (en)
Inventor
David Clifton
Ralph Allen PINNOCK
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Optos PLC
Original Assignee
Optos PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optos PLC
Assigned to OPTOS PLC: assignment of assignors interest (see document for details). Assignors: CLIFTON, David; PINNOCK, Ralph Allen
Publication of US20150324966A1
Application granted
Publication of US9454817B2
Legal status: Active

Classifications

    • G06T7/003
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • G06T7/0026
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32 Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B3/1225 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes using coherent radiation
    • G06T2207/20148
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Definitions

  • the present invention relates to improvements in or relating to image processing, particularly, but not exclusively, to a method and apparatus for registering pairs or sequences of vasculature images, such as retinal images.
  • Imaging systems such as scanning laser ophthalmoscopes (SLOs) are known to capture retinal image data using one or more digital image sensors.
  • Digital image sensors for SLOs are commonly a single sensor in which the light intensity signal is synchronised with the scanning position signal in order to produce a single stream of data that can be synchronised into a 2D image.
  • Digital image sensors may alternatively include an array of light sensitive picture elements (pixels). Retinal images produced by SLOs or other retinal imaging apparatuses such as fundus cameras are typically two-dimensional pixel arrays and are termed digital retinal images.
  • the set of intensity values derived from the pixel array is known as image data.
  • the “raw” image data output by the pixel array may be subjected to various post-processing techniques in order to reproduce an image either for viewing by a human or for processing by a machine.
  • Post-processing techniques of retinal images include various statistical methods for image analysis and registration of pairs or sequences of retinal images.
  • Registering pairs or sequences of retinal images generally concerns the scaling, rotation and translation of one or more images with respect to a base image in order to align (“register”) the image with the base image.
  • the registered retinal images are typically superimposed with the base retinal image to facilitate comparisons between the images.
  • Algorithms which enable affine registration of pairs or sequences of retinal images are known. Such algorithms may involve “vasculature tracking”, which involves iterative searches and decision trees to map and extract the vasculature. In particular, such approaches commonly search for specific characteristic features such as vasculature branching junctions. While such algorithms provide a reasonable degree of registration accuracy, they are computationally expensive. Furthermore, such known algorithms only allow images obtained from common imaging modes to be registered. That is, such known algorithms do not allow images obtained from different imaging modes, such as reflectance or auto-fluorescence, to be registered.
  • EP 2 064 988 A proposes a device and method for creating retinal fundus “maps” by superimposing two or more fundus images on the basis of a matching probability score. Matching is performed on the basis of corner image data identified in a blood vessel extraction image.
  • the technique proposed in EP'988 will not find sufficient corner features in the vasculature in a typical retinal image to enable reliable matching and registration of images, especially between different imaging modes.
  • Retinal images are subject to very variable lighting, and in high-resolution retinal images produced by modern SLOs, the vascular features are relatively smooth-sided features. Therefore corner extraction will not yield a great number of candidate points for matching, or else will be heavily influenced by noise of various types.
  • a method of processing digital vascular images comprising the steps of:
  • the digital vascular images may be retinal images.
  • the digital vascular images may include detail of the vasculature of the retina.
  • the vasculature of the retina includes the blood vessels, arteries and veins in the retina.
  • the vasculature of the retina includes the circulatory system of the retina.
  • the digital vascular images may be vascular images of an organ or body part of a human or an animal.
  • the digital vascular images may include detail of the vasculature of the organ or the body part.
  • the vasculature of the organ or the body part includes the blood vessels, arteries and veins therein.
  • the vasculature of the organ or the body part includes the circulatory system thereof.
  • the first and second digital vascular image data may include the intensity of the illumination incident on the one or more pixels used to produce the image data.
  • the first and second images may be obtained by different imaging modes.
  • the filter may be a matched filter.
  • the filter may have a form or shape which is matched to the form or shape of vascular features in the vascular image data.
  • the filter may be a Gaussian filter.
  • the filter may be a Gabor filter.
  • the one or more filters may be the same filter or different filters. Using different two-dimensional filter kernels at different orientations may be useful in cases where the vasculature has some shape sensitivity with direction.
  • the kernel may be a matched kernel.
  • the kernel may have a form or shape which is matched to the form or shape of the vascular image data.
  • the clusters of orthogonally adjacent image data points may include any number or configuration of orthogonally adjacent image data in which the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value.
  • the step of identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include use of a corner detection algorithm.
  • the corner detection algorithm may be used to identify clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value.
  • the corner detection algorithm may be used to identify clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters in two orthogonal directions is simultaneously above a predetermined value. That is, the corner detection algorithm may look for changes in intensity gradients occurring simultaneously in two orthogonal directions above a predetermined threshold.
  • the predetermined value may be for example between 10% and 50% of a maximum possible gradient value.
  • the corner detection algorithm may be a Harris corner detection algorithm.
  • the corner detection algorithm may be a Moravec corner detection algorithm or a Shi-Tomasi corner detection algorithm.
  • the step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include cross correlating the convolved first and second image data.
  • the step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include cross correlating the identified clusters in the first and second image data.
  • the step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include multiple cross correlations of the convolved first and second image data. For multiple cross correlations of the convolved first and second image data, each successive cross correlation may be incrementally rotated from the last. The multiple cross correlations may be rotated through approximately 40 degrees, or more. The multiple cross correlations may be rotated in steps through approximately 20 degrees or more around a pivot point located substantially around the optic disc point of the retina.
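The incrementally rotated cross correlations described above can be sketched in NumPy as follows. This is an illustration only, not the patented implementation: the function names (`rotate_nn`, `best_rigid_match`) are hypothetical, a crude nearest-neighbour rotation stands in for whatever interpolation a real system would use, and the correlation is computed via the FFT for speed.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre (hypothetical helper)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(deg)
    yy, xx = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find its source pixel
    sx = np.cos(t) * (xx - cx) + np.sin(t) * (yy - cy) + cx
    sy = -np.sin(t) * (xx - cx) + np.cos(t) * (yy - cy) + cy
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return img[sy, sx]

def best_rigid_match(base, inp, angles):
    """Cross-correlate `inp` against `base` at each trial angle and return the
    (angle, dy, dx, score) whose correlation peak is strongest.  Shifting `inp`
    by (dy, dx) then aligns it with `base`."""
    h, w = base.shape
    B = np.fft.fft2(base - base.mean())
    best = None
    for a in angles:
        r = rotate_nn(inp, a)
        xc = np.fft.ifft2(B * np.conj(np.fft.fft2(r - r.mean()))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        score = xc[dy, dx]
        dy = dy - h if dy > h // 2 else dy  # wrap circular shift to signed offset
        dx = dx - w if dx > w // 2 else dx
        if best is None or score > best[3]:
            best = (a, dy, dx, score)
    return best
```

In a retinal setting, the trial angles would span the ±20 degree range around the optic disc pivot described above; the peak score then yields both the best rotation and the translational parameters in one pass.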
  • the step of cross correlating the convolved first and second image data may include the further step of determining the differences in position between the identified common clusters in each of the first and second image data.
  • the position of the cluster may include its angular position and/or its translational position.
  • the differences in position between the identified common clusters in each of the first and second image data may be termed the translational parameters.
  • the step of registering the common clusters between the first and second image data uses the determined translational parameters to align the first and second image data.
  • the second image data may be registered with the first image data or the first image data may be registered with the second image data.
  • the registered image data may be superimposed.
  • the method may include the additional initial step of reducing the size of the first and/or second digital vascular image data. This may include removing one or more portions of the image data.
  • the step of reducing the size of the first and/or second digital vascular image data may include the steps of filtering, smoothing, sampling or sub sampling the image data. The steps of filtering, smoothing, sampling or sub sampling the image data may be repeated any number of times.
  • the method may include the additional initial step of down sampling the first and/or second digital vascular image data.
  • the step of down sampling the first and/or second digital vascular image data may include one or more image data scaling computations.
  • the step of down sampling the first and/or second digital vascular image data may include one or more image data pyramid scaling computations.
  • the method may include the additional initial step of increasing the contrast between the vasculature and the background of the first and/or second digital vascular image data.
  • the method may include the additional initial step of optimising the contrast between the vasculature and the background of the first and/or second digital vascular image data.
  • the step of optimising the contrast between the vasculature and the background of the first and/or second digital vascular image data may include using a histogram equalisation.
  • the step of optimising the contrast between the vasculature and the background of the first and/or second digital vascular image data may include using an adaptive histogram equalisation.
  • the method may include the additional step of removing noise from the first and/or second digital vascular image data after the step of increasing the contrast between the vasculature and the background of the first and/or second digital vascular image data.
  • the step of removing noise from the first and/or second digital vascular image data may include use of a low-pass filter.
  • the method may include the additional step of merging, or linking, together the clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value.
  • the clusters may be “merged” by increasing the intensity data between the clusters.
  • a single-pass averaging filter, or kernel, may be used to adjust the intensity value of the image data between clusters to an average intensity value of the clusters in that region. The effect of this is to blur, or average, the intensity values within a given region so that two clusters close together will, in effect, become one cluster.
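As a rough sketch of such a single-pass averaging filter (the `box_blur` name is hypothetical, and a real implementation might prefer a separable or Gaussian kernel), nearby high-intensity clusters bleed into one another and effectively merge:

```python
import numpy as np

def box_blur(img, k=5):
    """k-by-k averaging filter via a padded sliding sum; two clusters within
    roughly k pixels of each other blur together into one region."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```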
  • the method may include the additional step of reducing the size of the first and/or second merged cluster image data. This may include removing one or more portions of the image data.
  • the step of reducing the size of the first and/or second merged cluster image data may include the steps of filtering, smoothing, sampling or sub sampling the image data. The steps of filtering, smoothing, sampling or sub sampling the image data may be repeated any number of times.
  • the method may include the additional initial step of down sampling the first and/or second merged cluster image data.
  • the step of down sampling the first and/or second merged cluster image data may include one or more image data scaling computations.
  • the step of down sampling the first and/or second merged cluster image data may include one or more image data pyramid scaling computations.
  • the method may include the additional step of creating first and/or second digital vascular images from the first and second digital vascular image data.
  • the method may include the additional step of creating a digital image of the first and/or second clusters of orthogonally adjacent image data points in which the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value.
  • the method may include the additional step of creating a digital image of the identified clusters in the first and/or second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value.
  • the method may include the additional step of creating a digital image of the identified common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value.
  • the method may include the additional step of creating a digital image of the registered common clusters between the first and second image data.
  • the method may comprise providing a plurality of digital vascular image data and processing each digital image data according to the first aspect of the invention to register all the common clusters between all the image data.
  • an image processing apparatus comprising:
  • a scanning laser ophthalmoscope having an image processing apparatus comprising:
  • a computer program product encoded with instructions that, when run on a computer, cause the computer to receive image data and perform a method of processing digital vascular images comprising:
  • the computer program product may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage media may be any available media that can be accessed by a computer.
  • Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray™ disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • the instructions or code associated with a computer-readable medium of the computer program product may be executed by a computer, e.g., by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
  • FIG. 1 is a flow chart diagram detailing a method of processing digital vascular images;
  • FIGS. 2a and 2b are first and second digital vascular images;
  • FIGS. 3a and 3b are the first and second digital vascular images of FIGS. 2a and 2b after down sampling, equalisation and filtering;
  • FIGS. 4a and 4b are the first and second digital vascular images of FIGS. 3a and 3b after convolution with a rotating Gabor kernel;
  • FIGS. 5a and 5b are the first and second digital vascular images of FIGS. 4a and 4b after processing with a corner detection algorithm, blurring and down sampling;
  • FIG. 6 is the first digital vascular image of FIG. 5a after identification and marking of the micro-corner stepping structure;
  • FIG. 7 is a schematic diagram detailing the cross correlation of the first and second digital vascular images of FIGS. 5a and 5b;
  • FIG. 8 illustrates the digital vascular images produced during the processing method and the registered first and second digital vascular images.
  • FIG. 1 is a flow chart detailing the method steps of a registration algorithm for affine registration of pairs or sequences of retinal images.
  • FIG. 1 illustrates the method steps of processing the digital retinal image data.
  • the first image may be termed the “base image”, with each subsequent image being termed the “input” image.
  • the first step 100 of the method is to provide first and second digital retinal image data.
  • the first and second digital retinal image data is represented by first and second digital retinal images 10a, 10b.
  • the first and second digital vascular image data includes the intensity of the illumination incident on the one or more pixels used to produce the image data.
  • the first and second digital retinal images 10a, 10b illustrated here are obtained by a wide-field scanning laser ophthalmoscope (SLO), as is known in the art.
  • the first and second retinal images 10a, 10b show the optic disc 1 and vasculature 2 of the retina 3.
  • the second step 200 of the method is to reduce the size of the first and second digital vascular image data. This is achieved by down sampling the data.
  • the first and second digital vascular image data is down sampled via a pyramid scaling computation.
  • Alternatively, any other filtering, smoothing, sampling or sub-sampling computation method could be used.
  • Reducing the size of the first and second digital vascular image data increases the speed of subsequent computations and scales the vasculature features so that an optimum degree of resonance occurs during subsequent convolution operations (described below).
  • the down sampling scales the vasculature so that the typical curvature, and hence micro-corners, are within the area of interest, i.e. so that the best “resonance” is achieved of the corner detection filter, or kernel.
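A minimal stand-in for one level of the pyramid scaling computation, assuming a simple 2×2 block average rather than the Gaussian-weighted pyramid an implementation would likely use (the `pyr_down` name is illustrative):

```python
import numpy as np

def pyr_down(img):
    """One pyramid level: 2x2 block average followed by decimation by 2,
    halving each image dimension so the vasculature is rescaled toward the
    size at which the corner-detection kernel 'resonates' best."""
    h, w = img.shape[0] & ~1, img.shape[1] & ~1  # trim to even size
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
```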
  • the third step 300 of the method is to optimise the contrast between the vasculature and the background of the first and second digital vascular image data. This is achieved through use of a histogram equalisation computation.
  • an adaptive histogram equalisation is used to optimise the contrast between the vasculature and the background of the first and second digital vascular image data.
  • the adaptive histogram equalisation attenuates variations in general lighting in the image data while increasing local contrast. This has the effect of accentuating vasculature relative to the image background. This effect is most notable in regions where polarisation effects would otherwise tend to swamp out vasculature information.
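For illustration, plain (global) histogram equalisation on an 8-bit image can be sketched as below; the adaptive variant described above would apply the same cumulative-histogram remapping per local tile, with interpolation between tiles, to attenuate general lighting variation while boosting local contrast. The `equalise` name is hypothetical.

```python
import numpy as np

def equalise(img):
    """Global histogram equalisation of an 8-bit image: build the cumulative
    histogram and use it as a look-up table that stretches intensities over
    the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]  # first non-zero CDF value
    lut = np.rint((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255).astype(np.uint8)
    return lut[img]
```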
  • the fourth step 400 of the method is to remove noise from the first and second digital vascular image data which appears through use of the adaptive histogram equalisation. Removal of this noise reduces the chances of recording false “corner hits” during subsequent convolution operations (described below).
  • the step of removing noise from the first and second digital vascular image data is achieved by a low pass (LP) noise filter.
  • FIGS. 3a and 3b are the first and second digital vascular images 10a, 10b of FIGS. 2a and 2b after down sampling (step 200), equalisation (step 300) and noise reduction (step 400).
  • the contrast between the vasculature 2 and the background 4 of the retina has been enhanced compared to the initial first and second digital retinal images of FIGS. 2a and 2b.
  • the enhancement of the vasculature 2 relative to the background 4 of the retina improves the results of the subsequent convolution operations (described below).
  • the fifth step 500 of the method is to process the first and second image data with a directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value.
  • the step 500 of processing the first and second image data with a directional filter is carried out by convolving the first and second image data with a rotating Gabor kernel (or filter).
  • “Orthogonally adjacent image data points” is considered to mean image data points that are immediately adjacent one another in an array of pixel data, i.e. image data points that are adjacent one another in any given row or column of the array.
  • the clusters may comprise any number or configuration of orthogonally adjacent image data. That is, the clusters could be an arrangement of 1×1, 1×2, 2×1, 2×2, 3×2 or 2×3 image data points, or the like.
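One way to make the cluster definition concrete is a 4-neighbourhood flood fill that groups orthogonally adjacent points whose pairwise intensity difference stays below the predetermined value. This is a sketch only; `low_gradient_clusters` is a hypothetical name and a real implementation would use an optimised connected-components routine.

```python
import numpy as np
from collections import deque

def low_gradient_clusters(img, max_step):
    """Label clusters of orthogonally adjacent pixels whose pairwise intensity
    difference is below `max_step`, via breadth-first search over the
    4-neighbourhood (up/down/left/right only)."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    nxt = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            nxt += 1
            labels[sy, sx] = nxt
            q = deque([(sy, sx)])
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(float(img[ny, nx]) - float(img[y, x])) < max_step):
                        labels[ny, nx] = nxt
                        q.append((ny, nx))
    return labels
```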
  • the Gabor kernel is a form of 2-dimensional Gaussian filter kernel with a profile that, in the present case, is matched to the form or shape of the intensity of the image data across the vasculature, i.e. the vasculature cross section.
  • the Gaussian shape of the Gabor kernel allows it to “fit” to or “resonate with” the profile of the vasculature, thus accentuating the vasculature, while not fitting as well to other (background) regions, thus attenuating these regions.
  • the Gabor kernel is convolved with each of the first and second image data a multiple number of times. For each successive convolution the Gabor kernel is oriented differently relative to the image data. In the embodiment described here the Gabor kernel is effectively convolved eight times with each of the first and second image data, with the Gabor kernel being rotated 45 degrees relative to the image data for each successive convolution. The Gabor kernel is therefore rotated through 360 degrees relative to the image data over 8 convolutions.
  • the Gabor kernel has been found to be particularly effective in resonating with the characteristics of the vasculature cross sections. In an embodiment where the kernel is symmetrical, the effect of 8 convolutions over 360 degrees can be achieved in practice with only four convolutions spaced over 180 degrees.
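A sketch of the rotating Gabor convolution under the assumptions just described (a real even-symmetric kernel, four orientations spaced over 180 degrees, responses averaged). The kernel parameters and function names are illustrative, not values taken from the patent.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, wavelength=6.0, theta=0.0):
    """Real (even) Gabor kernel at angle theta: a Gaussian envelope, intended
    to 'resonate' with the roughly Gaussian vessel cross-section, times a
    cosine carrier along the rotated x-axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength)

def convolve2d_same(img, k):
    """Naive 'same'-size sliding-window correlation (assumes a square kernel;
    identical to convolution here because the kernel is even-symmetric)."""
    kh = k.shape[0]
    p = np.pad(img.astype(float), kh // 2, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(kh):
        for dx in range(kh):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def rotating_gabor(img, n=4):
    """Convolve with the kernel at n orientations over 180 degrees and average
    the responses (4 orientations suffice for a symmetric kernel)."""
    thetas = [i * np.pi / n for i in range(n)]
    return np.mean([convolve2d_same(img, gabor_kernel(theta=t)) for t in thetas], axis=0)
```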
  • FIGS. 4 a and 4 b represent an averaged output of all 8 convolutions.
  • the effect of convolving the first and second digital retinal image data with the rotating Gabor kernel described above is to produce clusters 12 of orthogonally adjacent image data points in which the magnitude of the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value.
  • the convolution in other words produces groups of orthogonally adjacent image data points that have similar intensity values.
  • the first and second digital retinal image data after convolution with the rotating Gabor kernel includes a line of clusters 12 having corner (or “micro-corner”) features that can be seen to track the vasculature 2 of the retina 3. It is the rotation of the Gabor kernel relative to the image data which creates the corner (or micro-corner) features of the clusters 12. In particular, it is the rotation of the Gabor kernel through, at least, 90 degrees which creates the corner (or micro-corner) features of the clusters 12.
  • the vasculature 2 of the retina 3 has thus been enhanced at a local level into the form of a high intensity gradient “stepping” structure of corner (or micro-corner) features of the clusters 12 between the clusters 12 and the background 4 of the retina 3 .
  • the corner features of the clusters 12 may be any corner of the array of image data, as described above.
  • the term “micro-corner” may refer to a cluster 12 comprising 1×2 or 2×1 image data points.
  • the sixth step 600 of the method is to identify (or extract) the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value.
  • the sixth step 600 thus identifies clusters 12 that approximately lie on the boundary between the vasculature 2 of the retina 3 and the background 4. As illustrated in FIGS. 4a to 6, the clusters 12 have a higher intensity than the background 4.
  • Step 600 of the method thus identifies the clusters 12 that track the vasculature 2 of the retina 3 .
  • the step 600 of identifying the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value is carried out by using a corner detection algorithm.
  • a Harris corner detection algorithm is used to identify the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value.
  • any other suitable corner detection algorithm could be used.
  • the corner detection algorithm looks for orthogonal edge points in the convolved first and second digital retinal image. In particular, it will look for points where the gradient exceeds the threshold in two directions simultaneously.
  • the algorithm is configurable to certain sensitivity and quality thresholds, as required.
  • FIG. 6 illustrates points (+) identified by the corner detection algorithm.
  • Corner detection algorithms are not typically used with retinal images, since the existence of corner points is not commonly found.
  • the pre-processing of the image data with a directional filter such as the rotating Gabor kernel results in image data in which the vasculature 2 has been enhanced at a local level in the form of a high intensity gradient “stepping structure”, or lines of corner (or micro-corner) features throughout the vasculature 2 .
  • It is this pre-processing with the rotating Gabor kernel which facilitates use of a corner detection algorithm to identify the clusters 12 that outline the vasculature 2 of the retina 3.
  • the advantage of this method when compared to mapping and extracting the vasculature by use of known tracking algorithms, is a significant decrease in computational loading.
  • the intensity gradient thresholds that are of interest will be approximately the same over the whole image data.
  • the value of the predetermined threshold will be dependent on the size of the area of interest used in the equalisation. Setting the actual value need not be done by any analytical method, but may simply be done by trial and error empirical means.
  • the threshold may be for example between 10% and 50% of the maximum possible gradient. For example, in an embodiment where the maximum gradient is 255, a threshold gradient of 50 might be set, representing about 20% of the maximum possible.
  • the seventh step 700 of the method is to merge together the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value.
  • the seventh step 700 thus blurs, or merges, the clusters 12 together to further highlight the vasculature 2 .
  • the effect of the seventh step 700 is to “join the dots” of the clusters 12 along the vasculature 2 .
  • Step 700 of the method is useful since the positioning of the clusters 12 in each image data could be slightly different, therefore merging the clusters 12 increases the chance that more clusters 12 overlap at characteristic positions.
  • the vasculature corner points that have been “thresholded” from the image are still in greyscale form, i.e. each data point could still have a different intensity value, although all are above the threshold value.
  • the method may therefore include the further step of converting the “thresholded” image data to a binary image in which all “corner” clusters have a value of 1 and all the other image data points have a value of 0.
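The merging (step 700) and subsequent binarisation described above can be sketched as follows: a single-pass box-averaging filter runs nearby clusters together, and a threshold then converts the merged greyscale data to a binary image. The filter size and function names here are illustrative assumptions.

```python
import numpy as np

def merge_clusters(corner_img, size=3):
    """Blur the thresholded corner image with a single-pass averaging
    filter so that nearby clusters run together (cf. step 700)."""
    a = corner_img.astype(float)
    p = np.pad(a, size // 2, mode='edge')
    n = a.shape
    return sum(p[i:i + n[0], j:j + n[1]]
               for i in range(size) for j in range(size)) / (size * size)

def binarise(img, thresh=0.0):
    """Convert merged greyscale data to binary: corner clusters
    become 1, all other image data points become 0."""
    return (img > thresh).astype(np.uint8)
```

Two clusters a couple of pixels apart become, in effect, one cluster after the averaging pass, which increases the chance of overlap at characteristic positions.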
  • the eighth step 800 of the method is to mask clusters 12 in the image periphery that may be attached to non-relevant features, such as eyelashes.
  • the ninth step 900 of the method is to reduce the size of the first and second merged cluster image data. This is achieved by down sampling the data.
  • the first and second digital vascular image data is down sampled via a pyramid scaling computation.
  • any other suitable filtering, smoothing, sampling or sub sampling computation method could be used.
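One level of the down sampling might look like the sketch below, which averages 2×2 blocks to halve each dimension, as a simplified stand-in for a full Gaussian-pyramid scaling computation.

```python
import numpy as np

def pyr_down(img):
    """One level of a simplified image pyramid: average each 2x2
    block, halving both dimensions (a stand-in for a full pyramid
    scaling computation with Gaussian pre-smoothing)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # trim odd edges
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[1::2, 0::2] +
            a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
```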
  • the first and second retinal image data after merging (step 700 ), masking ( 800 ) and down sampling ( 900 ) is represented in FIGS. 5 a and 5 b.
  • the tenth step 1000 of the method is to identify common clusters 12 between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value.
  • the tenth step 1000 thus identifies common clusters 12 between each of the first and second retinal image data.
  • the step 1000 involves cross correlating the image data of FIGS. 5 a and 5 b.
  • The cross correlation of the image data of FIGS. 5 a and 5 b is illustrated in FIGS. 7 and 1 .
  • the second image data of FIG. 5 b is cross correlated with the first image data of FIG. 5 a a number of times. With each subsequent cross correlation the second image data is incrementally rotated relative to the last.
  • the multiple cross correlations may be rotated through approximately 40 degrees.
  • the multiple cross correlations may be rotated through approximately 20 degrees around a pivot point located substantially around the optic disc point of the retina.
  • the multiple cross correlations may, of course, be rotated through approximately any suitable degree around the pivot point located substantially around the optic disc point of the retina.
  • Data is output as a normalised cross correlation.
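The rotate-and-correlate search of step 1000 can be sketched as below. This sketch uses a plain FFT cross correlation rather than the full normalised form, a nearest-neighbour rotation about the pivot point, and interprets the correlation peak as a circular shift; all function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def rotate_nn(img, angle_deg, pivot):
    """Nearest-neighbour rotation of a 2-D array about a pivot point."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    ys, xs = np.indices(img.shape)
    y0, x0 = pivot
    # Inverse-map each output pixel back into the source image.
    ysrc = np.rint(c * (ys - y0) + s * (xs - x0) + y0).astype(int)
    xsrc = np.rint(-s * (ys - y0) + c * (xs - x0) + x0).astype(int)
    ok = ((ysrc >= 0) & (ysrc < img.shape[0]) &
          (xsrc >= 0) & (xsrc < img.shape[1]))
    out = np.zeros(img.shape, dtype=float)
    out[ok] = img[ysrc[ok], xsrc[ok]]
    return out

def best_alignment(base, inp, pivot, angles):
    """Cross-correlate `inp` against `base` at each trial angle and
    return (angle, dy, dx): the rotation and circular shift of `inp`
    relative to `base` at the strongest correlation peak."""
    Fb = np.conj(np.fft.fft2(base))
    best = (None, 0, 0, -np.inf)
    for a in angles:
        r = rotate_nn(inp, a, pivot)
        corr = np.real(np.fft.ifft2(np.fft.fft2(r) * Fb))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[peak] > best[3]:
            # Map the peak index to a signed circular shift.
            dy = peak[0] - base.shape[0] if peak[0] > base.shape[0] // 2 else peak[0]
            dx = peak[1] - base.shape[1] if peak[1] > base.shape[1] // 2 else peak[1]
            best = (a, dy, dx, corr[peak])
    return best[:3]
```

With each trial the second image data is incrementally rotated relative to the last, and the angle and shift at the highest correlation peak give the translational parameters.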
  • the peak in cross correlation, i.e. the best match between the image data, is identified for each correlation angle.
  • confidence is deduced from a measurement of the peak isolation within the correlation surface 5 for each correlation angle.
  • a confidence coefficient is calculated as a weighting of the peak height multiplied by the total slope of the peak “walls” divided by the mean of the rest (i.e. excluding the region of the peak) of the correlation surface 5 .
  • a high value in this metric is indicative of a sharp, high, isolated peak, which is a characteristic of close correlation and therefore a high confidence in the accuracy of registration.
  • the threshold at which confidence is toggled to low, which can be observed as the absence of any one isolated correlation peak, is established by weighting the confidence measure so that values less than unity indicate low confidence.
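A minimal reading of the confidence coefficient — peak height multiplied by the slope of the peak "walls", divided by the mean of the correlation surface outside the peak region — might be sketched as follows. The peak-region radius and the rim-based slope estimate are assumptions made for illustration.

```python
import numpy as np

def confidence(surface, peak_radius=2):
    """Confidence coefficient for a correlation surface: peak height
    times the slope of the peak 'walls', divided by the mean of the
    surface outside the peak region (sketch of the metric only)."""
    py, px = np.unravel_index(np.argmax(surface), surface.shape)
    peak = surface[py, px]
    ys, xs = np.indices(surface.shape)
    in_peak = (np.abs(ys - py) <= peak_radius) & (np.abs(xs - px) <= peak_radius)
    # Wall slope: the fall from the peak to the rim of the peak region.
    rim = in_peak & ((np.abs(ys - py) == peak_radius) |
                     (np.abs(xs - px) == peak_radius))
    slope = (peak - surface[rim].mean()) / peak_radius
    rest = surface[~in_peak].mean()
    return peak * slope / max(rest, 1e-12)
```

A sharp, high, isolated peak scores well above unity; a broad plateau of similar height scores much lower, consistent with toggling confidence to low below unity.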
  • the tenth step 1000 of the method also includes determining the differences in position between the identified common clusters 12 in each of the first and second image data.
  • the position of the cluster 12 includes its angular and translational position (rotational angle and x and y translations).
  • the differences in position between the identified common clusters 12 in each of the first and second image data is termed the translational parameters.
  • the eleventh step 1100 of the method is to register the common clusters 12 between the first and second image data.
  • the translation parameters determined from step 1000 are used to align the second image data to the first image data, i.e. to align the first and second digital retinal images 10 a , 10 b , in the known manner.
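Applying the translational parameters can be sketched as below, where a circular shift stands in for a full affine warp and the rotation component is omitted for brevity; the function name and the mean-blend overlay are illustrative assumptions.

```python
import numpy as np

def register_and_overlay(base, inp, dy, dx):
    """Shift `inp` back by the measured offset (dy, dx) and superimpose
    it on `base`. A circular shift stands in for a full affine warp in
    this sketch; the overlay is a simple mean blend."""
    aligned = np.roll(inp, (-dy, -dx), axis=(0, 1))
    return aligned, (base.astype(float) + aligned) / 2.0
```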
  • FIG. 8 illustrates the digital vascular images produced during the processing method and the registered first and second digital vascular images 10 a , 10 b .
  • Top-left and top right of FIG. 8 illustrate the first and second digital vascular images 10 a , 10 b of FIGS. 2 a and 2 b after down sampling (step 200 ), equalisation (step 300 ) and noise reduction (step 400 ) ( FIGS. 3 a and 3 b ).
  • Middle-left and middle-right of FIG. 8 illustrate the first and second digital retinal image data after convolution with the rotating Gabor kernel ( FIGS. 4 a and 4 b ).
  • Bottom-left and bottom-right of FIG. 8 illustrate the first and second digital vascular images 10 a , 10 b in their registered positions.
  • FIG. 8 is the correlation surface 5 of FIG. 7 .
  • the centre-middle of FIG. 8 illustrates the peak in cross correlation for each correlation angle.
  • the bottom-middle of FIG. 8 illustrates the registered and overlaid first and second digital vascular images 10 a , 10 b.
  • the method of the invention dramatically reduces the computational requirements of the processor, increases accuracy and allows registration for images obtained across a number of imaging modes.
  • the computational efficiency provided by the method of the invention is a result of the realisation that convolving the image data with a rotating Gabor kernel produces image data in which the vasculature has been modified to provide a high intensity gradient “stepping” structure of image data clusters that track the vasculature.
  • the creation of these corner (or micro-corner) features in the image data facilitates the use of a known corner detection algorithm to extract the position of the clusters for comparison and registration.
  • Convolving the image data with a rotating Gabor kernel and using a corner detection algorithm in this manner reduces the computational requirements of the method.
  • using a corner detection algorithm increases the accuracy of the registration process, since the corners (and vasculature) can be determined more accurately than with known retinal image registration techniques.
  • because the method uses feature sets (distances between vasculature points) that are common across different retinal imaging modes (e.g. reflectance, auto-fluorescence etc.), inter-mode registration (i.e. auto-fluorescence to reflectance images) is possible.
  • vascular images may be used, such as vascular images of an organ or body part of a human or an animal.
  • the vasculature of the organ or the body part may include the blood vessels, arteries and veins therein.
  • the vasculature of the organ or the body part may also include the circulatory system thereof.
  • any suitably shaped kernel, filter, filter matrix, window, template or mask may be used.
  • a Gaussian filter or kernel may be used.
  • although a rotating Gabor kernel has been described that rotates through 360 degrees (with eight convolutions), it should be appreciated that the kernel need not “rotate”; it need only be convolved with the image data in two orthogonal dimensions (i.e. a first convolution and then a second convolution at an angle of 90 degrees from the first). Rotating the kernel through 360 degrees does, however, improve the enhancement of the vasculature, as described above, and is preferred.
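The rotating-kernel convolution described above might be sketched as follows, assuming a real-valued Gabor kernel (a cosine carrier under a Gaussian envelope) evaluated at eight orientations over 360 degrees, keeping the maximum response at each pixel. The kernel parameters are illustrative, not taken from the patent.

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, sigma=2.0, theta=0.0):
    """Real-valued Gabor kernel: cosine carrier under a Gaussian
    envelope, oriented at angle theta (parameter values illustrative)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def convolve2d(img, k):
    """Direct 'same'-size 2-D convolution with zero padding. The Gabor
    kernel above is symmetric under 180-degree rotation, so correlation
    (computed here) equals convolution."""
    half = k.shape[0] // 2
    p = np.pad(img.astype(float), half)
    out = np.zeros(img.shape, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def rotating_gabor(img, n_angles=8):
    """Convolve with the kernel at n_angles orientations over 360
    degrees and keep the maximum response at each pixel."""
    responses = [convolve2d(img, gabor_kernel(theta=a))
                 for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False)]
    return np.max(responses, axis=0)
```

A vessel-like bright line produces a strong response along its length at the orientation matching the line, which is the local enhancement that later yields the corner-like "stepping" features.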
  • although the method has been described above as using a single kernel convolved with the image data a number of times, it should be appreciated that any number of different kernels could be convolved with the image data at any number of angles relative thereto.
  • the method may comprise the step of providing a plurality of digital vascular image data (and a plurality of digital vascular images) and processing each digital image according to the above described method to register all image data and images.

Abstract

An image processing apparatus uses first and second digital vascular image data to register two images. The two images may be from different imaging modes. The first and second images are processed with a two-dimensional, directional filter (500) that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value. Subsequently, common clusters are identified between the first and second image data using a corner detecting algorithm (600). The directional filter produces “stepping” features, where vascular features would otherwise appear with smooth edges. These numerous features are identified by the corner detecting algorithm and can be used (1000) for registering common clusters between the first and second image data. The filter may be a rotating Gabor filter matched to vascular features in the images.

Description

TECHNICAL FIELD
The present invention relates to improvements in or relating to image processing, particularly, but not exclusively, to a method and apparatus for registering pairs or sequences of vasculature images, such as retinal images.
BACKGROUND
Imaging systems, such as scanning laser ophthalmoscopes (SLOs), are known to capture retinal image data using one or more digital image sensors. Digital image sensors for SLOs are commonly a single sensor in which the light intensity signal is synchronised with the scanning position signal in order to produce a single stream of data that can be synchronised into a 2D image. Digital image sensors may alternatively include an array of light sensitive picture elements (pixels). Retinal images produced by SLOs or other retinal imaging apparatuses such as fundus cameras are typically two dimensional pixel arrays and are termed digital retinal images.
The set of intensity values derived from the pixel array is known as image data. The “raw” image data output by the pixel array may be subjected to various post-processing techniques in order to reproduce an image either for viewing by a human or for processing by a machine. Post-processing techniques of retinal images include various statistical methods for image analysis and registration of pairs or sequences of retinal images.
Registering pairs or sequences of retinal images generally concerns the scaling, rotation and translation of one or more images with respect to a base image in order to align (“register”) the image with the base image. The registered retinal images are typically superimposed with the base retinal image to facilitate comparisons between the images.
Algorithms which enable affine registration of pairs or sequences of retinal images are known. Such algorithms may involve “vasculature tracking”, which involves iterative searches and decision trees to map and extract the vasculature. In particular, such approaches commonly search for specific characteristic features such as vasculature branching junctions. While such algorithms provide a reasonable degree of registration accuracy they are computationally inefficient, i.e. computationally expensive. Furthermore, such known algorithms only allow images obtained from common imaging modes to be registered. That is, such known algorithms do not allow images obtained from different imaging modes, such as reflectance or auto-fluorescence, to be registered.
Examples of such known algorithms can be found in the following publications: US 2012/0195481A; Can et al, “A Feature Based, Robust, Hierarchical Algorithm for Registering Pairs of Images of the Curved Human Retina”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 24, No 3 (March 2002); Zana & Klein, “A Multimodal Registration Algorithm of Eye Fundus Images Using Vessels Detection and Hough Transform”, IEEE Transactions on Medical Imaging, Vol 18, No 5 (May 1999); and Hu et al, “Multimodal Retinal Vessel Segmentation From Spectral-Domain Optical Coherence Tomography and Fundus Photography”, IEEE Transactions on Medical Imaging, Vol 31, No 10 (October 2012).
EP 2 064 988 A (Kowa Company, Ltd.) proposes a device and method for creating retinal fundus “maps” by superimposing two or more fundus images on the basis of a matching probability score. Matching is performed on the basis of corner image data identified in a blood vessel extraction image. However, the inventors believe that the technique proposed in EP'988 will not find sufficient corner features in the vasculature in a typical retinal image to enable reliable matching and registration of images, especially between different imaging modes. Retinal images are subject to very variable lighting, and in high-resolution retinal images produced by modern SLOs, the vascular features are relatively smooth-sided features. Therefore corner extraction will not yield a great number of candidate points for matching, or else will be heavily influenced by noise of various types.
SUMMARY
According to a first aspect of the invention there is provided a method of processing digital vascular images comprising the steps of:
    • providing first and second digital vascular image data;
    • processing the first and second image data with a directional, two-dimensional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
    • identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value;
    • identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value; and
    • registering the common clusters between the first and second image data.
The digital vascular images may be retinal images. The digital vascular images may include detail of the vasculature of the retina. The vasculature of the retina includes the blood vessels, arteries and veins in the retina. The vasculature of the retina includes the circulatory system of the retina.
The digital vascular images may be vascular images of an organ or body part of a human or an animal. The digital vascular images may include detail of the vasculature of the organ or the body part. The vasculature of the organ or the body part includes the blood vessels, arteries and veins therein. The vasculature of the organ or the body part includes the circulatory system thereof.
The first and second digital vascular image data may include the intensity of the illumination incident on the one or more pixels used to produce the image data. The first and second images may be obtained by different imaging modes.
The filter may be a matched filter. The filter may have a form or shape which is matched to the form or shape of vascular features in the vascular image data.
The filter may be a Gaussian filter. The filter may be a Gabor filter.
The one or more filters may be the same filter or different filters. Using different two-dimensional filter kernels at different orientations may be useful in cases where the vasculature has some shape sensitivity with direction.
The kernel may be a matched kernel. The kernel may have a form or shape which is matched to the form or shape of the vascular image data.
The clusters of orthogonally adjacent image data points may include any number or configuration of orthogonally adjacent image data in which the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value.
The step of identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include use of a corner detection algorithm. The corner detection algorithm may be used to identify clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value. The corner detection algorithm may be used to identify clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters in two orthogonal directions is simultaneously above a predetermined value. That is, the corner detection algorithm may look for changes in intensity gradients occurring simultaneously in two orthogonal directions above a predetermined threshold. The predetermined value may be for example between 10% and 50% of a maximum possible gradient value.
The corner detection algorithm may be a Harris corner detection algorithm. The corner detection algorithm may be a Moravec corner detection algorithm or a Shi-Tomasi corner detection algorithm.
The step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include cross correlating the convolved first and second image data.
The step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include cross correlating the identified clusters in the first and second image data.
The step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value may include multiple cross correlations of the convolved first and second image data. For multiple cross correlations of the convolved first and second image data, each successive cross correlation may be incrementally rotated from the last. The multiple cross correlations may be rotated through approximately 40 degrees, or more. The multiple cross correlations may be rotated in steps through approximately 20 degrees or more around a pivot point located substantially around the optic disc point of the retina.
The step of cross correlating the convolved first and second image data may include the further step of determining the differences in position between the identified common clusters in each of the first and second image data. The position of the cluster may include its angular position and/or its translational position. The differences in position between the identified common clusters in each of the first and second image data may be termed the translational parameters.
The step of registering the common clusters between the first and second image data uses the determined translational parameters to align the first and second image data. The second image data may be registered with the first image data or the first image data may be registered with the second image data. The registered image data may be superimposed.
The method may include the additional initial step of reducing the size of the first and/or second digital vascular image data. This may include removing one or more portions of the image data. The step of reducing the size of the first and/or second digital vascular image data may include the steps of filtering, smoothing, sampling or sub sampling the image data. The steps of filtering, smoothing, sampling or sub sampling the image data may be repeated any number of times.
The method may include the additional initial step of down sampling the first and/or second digital vascular image data. The step of down sampling the first and/or second digital vascular image data may include one or more image data scaling computations. The step of down sampling the first and/or second digital vascular image data may include one or more image data pyramid scaling computations.
The method may include the additional initial step of increasing the contrast between the vasculature and the background of the first and/or second digital vascular image data. The method may include the additional initial step of optimising the contrast between the vasculature and the background of the first and/or second digital vascular image data. The step of optimising the contrast between the vasculature and the background of the first and/or second digital vascular image data may include using a histogram equalisation. The step of optimising the contrast between the vasculature and the background of the first and/or second digital vascular image data may include using an adaptive histogram equalisation.
The method may include the additional step of removing noise from the first and/or second digital vascular image data after the step of increasing the contrast between the vasculature and the background of the first and/or second digital vascular image data. The step of removing noise from the first and/or second digital vascular image data may include use of a low-pass filter.
The method may include the additional step of merging, or linking, together the clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value. The clusters may be “merged” by increasing the intensity data between the clusters. A single pass averaging filter, or kernel, may be used to adjust the intensity value of the image data between clusters to an average intensity value of the clusters in that region. The effect of this is to blur, or average, the intensity values within a given region so that two clusters close together will, in effect, become one cluster.
The method may include the additional step of reducing the size of the first and/or second merged cluster image data. This may include removing one or more portions of the image data. The step of reducing the size of the first and/or second merged cluster image data may include the steps of filtering, smoothing, sampling or sub sampling the image data. The steps of filtering, smoothing, sampling or sub sampling the image data may be repeated any number of times.
The method may include the additional initial step of down sampling the first and/or second merged cluster image data. The step of down sampling the first and/or second merged cluster image data may include one or more image data scaling computations. The step of down sampling the first and/or second merged cluster image data may include one or more image data pyramid scaling computations.
The method may include the additional step of creating first and/or second digital vascular images from the first and second digital vascular image data.
The method may include the additional step of creating a digital image of the first and/or second clusters of orthogonally adjacent image data points in which the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value.
The method may include the additional step of creating a digital image of the identified clusters in the first and/or second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value.
The method may include the additional step of creating a digital image of the identified common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is above a predetermined value.
The method may include the additional step of creating a digital image of the registered common clusters between the first and second image data.
The method may comprise providing a plurality of digital vascular image data and processing each digital image data according to the first aspect of the invention to register all the common clusters between all the image data.
According to a second aspect of the invention there is provided an image processing apparatus comprising:
    • a digital vascular image provision module arranged to provide first and second digital vascular image data; and
    • a processor arranged to:
    • process the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
    • identify clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value;
    • identify common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value; and
    • register the common clusters between the first and second image data.
According to a third aspect of the invention there is provided a scanning laser ophthalmoscope having an image processing apparatus comprising:
    • a digital vascular image provision module arranged to provide first and second digital vascular image data; and
    • a processor arranged to:
    • process the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
    • identify clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value;
    • identify common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value; and
    • register the common clusters between the first and second image data.
According to a fourth aspect of the invention there is provided a computer program product encoded with instructions that, when run on a computer, cause the computer to receive image data and perform a method of processing digital vascular images comprising:
    • providing first and second digital vascular image data;
    • processing the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
    • identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value;
    • identifying common clusters between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters is greater than a predetermined value; and
    • registering the common clusters between the first and second image data.
The computer program product may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fibre optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fibre optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray™ disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The instructions or code associated with a computer-readable medium of the computer program product may be executed by a computer, e.g., by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart diagram detailing a method of processing digital vascular images;
FIGS. 2a and 2b are first and second digital vascular images;
FIGS. 3a and 3b are the first and second digital vascular images of FIGS. 2a and 2b after down sampling, equalisation and filtering;
FIGS. 4a and 4b are the first and second digital vascular images of FIGS. 3a and 3b after convolution with a rotating Gabor kernel;
FIGS. 5a and 5b are the first and second digital vascular images of FIGS. 4a and 4b after processing with a corner detection algorithm, blurring and down sampling;
FIG. 6 is the first digital vascular image of FIG. 5a after identification and marking of the micro-corner stepping structure;
FIG. 7 is a schematic diagram detailing the cross correlation of the first and second digital vascular images of FIGS. 5a and 5b ; and
FIG. 8 illustrates the digital vascular images produced during the processing method and the registered first and second digital vascular images.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
FIG. 1 is a flow chart detailing the method steps of a registration algorithm for affine registration of pairs or sequences of retinal images.
FIG. 1 illustrates the method steps of processing the digital retinal image data. The first image may be termed the “base image”, with each subsequent image being termed the “input” image.
With reference to FIGS. 1, 2 a and 2 b, the first step 100 of the method is to provide first and second digital retinal image data. The first and second digital retinal image data is represented by first and second digital retinal images 10 a, 10 b. The first and second digital vascular image data includes the intensity of the illumination incident on the one or more pixels used to produce the image data. The first and second digital retinal images 10 a, 10 b illustrated here are obtained by a wide-field scanning laser ophthalmoscope (SLO), as is known in the art. As illustrated in FIGS. 2a and 2b , the first and second retinal images 10 a, 10 b show the optic disc 1 and vasculature 2 of the retina 3.
The second step 200 of the method is to reduce the size of the first and second digital vascular image data. This is achieved by down sampling the data. In the embodiment of the invention described here the first and second digital vascular image data is down sampled via a pyramid scaling computation. However, it should be appreciated that other known filtering, smoothing, sampling or sub sampling computation methods could be used.
Reducing the size of the first and second digital vascular image data increases the speed of subsequent computations and scales the vasculature features so that an optimum degree of resonance occurs during the subsequent convolution operations (described below). The down sampling scales the vasculature so that the typical curvature, and hence the micro-corners, fall within the area of interest, i.e. so that the best "resonance" with the corner detection filter, or kernel, is achieved.
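By way of illustration only, one level of such a pyramid scaling computation may be sketched as follows in Python; the 5-tap binomial kernel and edge padding are conventional illustrative assumptions, not details specified in the description above:

```python
import numpy as np

def pyr_down(img):
    """One level of pyramid scaling: separable 5-tap Gaussian-like blur,
    then drop every other row and column (halving each dimension)."""
    # Classic binomial kernel [1, 4, 6, 4, 1] / 16, applied to rows then columns.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pad = np.pad(img.astype(float), 2, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    return blurred[::2, ::2]

img = np.arange(64, dtype=float).reshape(8, 8)
small = pyr_down(img)   # an 8x8 input becomes 4x4
```

Blurring before decimation suppresses the aliasing that plain sub-sampling would introduce, which is why a pyramid computation is preferred over simply discarding pixels.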
The third step 300 of the method is to optimise the contrast between the vasculature and the background of the first and second digital vascular image data. This is achieved through use of a histogram equalisation computation. In the embodiment of the invention described here an adaptive histogram equalisation is used to optimise the contrast between the vasculature and the background of the first and second digital vascular image data. The adaptive histogram equalisation attenuates variations in general lighting in the image data while increasing local contrast. This has the effect of accentuating vasculature relative to the image background. This effect is most notable in regions where polarisation effects would otherwise tend to swamp out vasculature information.
The fourth step 400 of the method is to remove noise from the first and second digital vascular image data which appears through use of the adaptive histogram equalisation. Removal of this noise reduces the chances of recording false “corner hits” during subsequent convolution operations (described below). The step of removing noise from the first and second digital vascular image data is achieved by a low pass (LP) noise filter.
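A minimal low pass stage of the kind referred to here might, for example, be a 3×3 box (mean) filter; the sketch below is illustrative only and is not the specific filter of the embodiment:

```python
import numpy as np

def low_pass(img):
    """3x3 box (mean) filter: a simple low-pass stage that suppresses the
    pixel-level noise amplified by the histogram equalisation."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += pad[1 + dr:1 + dr + img.shape[0], 1 + dc:1 + dc + img.shape[1]]
    return out / 9.0
```

Averaging over a neighbourhood spreads isolated noise spikes over nine pixels, reducing the chance that any single spike later registers as a false corner hit.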
FIGS. 3a and 3b are the first and second digital vascular images 10 a, 10 b of FIGS. 2a and 2b after down sampling (step 200), equalisation (step 300) and noise reduction (step 400). As can be seen from FIGS. 3a and 3b , the contrast between the vasculature 2 and the background 4 of the retina has been enhanced compared to the initial first and second digital retinal images 10 a, 10 b. The enhancement of the vasculature 2 relative to the background 4 of the retina improves the results of the subsequent convolution operations (described below).
The fifth step 500 of the method is to process the first and second image data with a directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value. In the embodiment of the invention described here the step 500 of processing the first and second image data with a directional filter is carried out by convolving the first and second image data with a rotating Gabor kernel (or filter). "Orthogonally adjacent image data points" is considered to mean image data points that are immediately adjacent one another in an array of pixel data, i.e. image data points that are adjacent one another in any given row or column of the array. It should be appreciated that the clusters may comprise any number or configuration of orthogonally adjacent image data points. That is, the clusters could be an arrangement of 1×1, 1×2, 2×1, 2×2, 3×2 or 2×3 image data points, or the like.
The Gabor kernel is a form of 2-dimensional Gaussian filter kernel with a profile that, in the present case, is matched to the form or shape of the intensity of the image data across the vasculature, i.e. the vasculature cross section. The Gaussian shape of the Gabor kernel allows it to “fit” to or “resonate with” the profile of the vasculature, thus accentuating the vasculature, while not fitting as well to other (background) regions, thus attenuating these regions.
The Gabor kernel is convolved with each of the first and second image data multiple times. For each successive convolution the Gabor kernel is oriented differently relative to the image data. In the embodiment described here the Gabor kernel is effectively convolved eight times with each of the first and second image data, with the Gabor kernel being rotated 45 degrees relative to the image data for each successive convolution. The Gabor kernel is therefore rotated through 360 degrees relative to the image data over 8 convolutions. The Gabor kernel has been found to be particularly effective in resonating with the characteristics of the vasculature cross sections. In an embodiment where the kernel is symmetrical, the effect of 8 convolutions over 360 degrees can be achieved in practice with only four convolutions spaced over 180 degrees.
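The rotating-kernel convolution of step 500 may be sketched as follows; the kernel size, sigma and wavelength are illustrative assumptions rather than values specified in the description, and a real implementation would tune them to the down-sampled vessel width:

```python
import numpy as np

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=8.0):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier
    oriented at angle theta.  Parameter values here are illustrative only."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the carrier
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * xr / wavelength)
    return g - g.mean()   # zero-mean, so flat background regions give no net response

def convolve_same(img, k):
    """'Same'-size 2-D correlation with edge padding.  The kernel above is
    even-symmetric, so correlation and convolution coincide."""
    kh, kw = k.shape
    pad = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for r in range(kh):
        for c in range(kw):
            out += k[r, c] * pad[r:r + img.shape[0], c:c + img.shape[1]]
    return out

def rotating_gabor(img, n_angles=8):
    """Convolve with the kernel at n_angles orientations over 360 degrees
    (45-degree steps for n_angles=8) and average the responses."""
    acc = np.zeros(img.shape, dtype=float)
    for i in range(n_angles):
        acc += convolve_same(img, gabor_kernel(i * 2.0 * np.pi / n_angles))
    return acc / n_angles
```

A bright line (a vessel cross section) "resonates" with the kernel and yields a strong averaged response along it, while the flat background yields little or none.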
The first and second digital retinal image data after convolution with the rotating Gabor kernel is represented in FIGS. 4a and 4b . FIGS. 4a and 4b represent an averaged output of all 8 convolutions. As illustrated, the effect of convolving the first and second digital retinal image data with the rotating Gabor kernel described above is to produce clusters 12 of orthogonally adjacent image data points in which the magnitude of the intensity gradient between each orthogonally adjacent image data point is less than a predetermined value. The convolution in other words produces groups of orthogonally adjacent image data points that have similar intensity values.
With reference to FIGS. 4a to 6, the first and second digital retinal image data after convolution with the rotating Gabor kernel include lines of clusters 12, having corner (or "micro-corner") features, that track the vasculature 2 of the retina 3. It is the rotation of the Gabor kernel relative to the image data which creates the corner (or micro-corner) features of the clusters 12. In particular, it is the rotation of the Gabor kernel through, at least, 90 degrees which creates the corner (or micro-corner) features of the clusters 12.
The vasculature 2 of the retina 3 has thus been enhanced at a local level into a high intensity gradient "stepping" structure of corner (or micro-corner) features between the clusters 12 and the background 4 of the retina 3. The corner features of the clusters 12 may be any corner of the array of image data, as described above. The term "micro-corner" may refer to a cluster 12 comprising 1×2 or 2×1 image data points.
The sixth step 600 of the method is to identify (or extract) the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value. The sixth step 600 thus identifies clusters 12 that approximately lie on the boundary between the vasculature 2 of the retina 3 and the background 4. As illustrated in FIGS. 4a to 6, the clusters 12 have a higher intensity than the background 4. Step 600 of the method thus identifies the clusters 12 that track the vasculature 2 of the retina 3.
In the embodiment of the invention described here the step 600 of identifying the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value is carried out by using a corner detection algorithm. In the embodiment described here a Harris corner detection algorithm is used to identify the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value. However, it should be appreciated that any other suitable corner detection algorithm could be used.
The corner detection algorithm looks for orthogonal edge points in the convolved first and second digital retinal image. In particular, it will look for points where the gradient exceeds the threshold in two directions simultaneously. The algorithm is configurable to certain sensitivity and quality thresholds, as required. FIG. 6 illustrates points (+) identified by the corner detection algorithm.
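For illustration, a minimal Harris-style corner response may be computed as below; the window size and the sensitivity constant k are conventional illustrative values, not values specified in this description:

```python
import numpy as np

def harris_response(img, k=0.04, win=5):
    """Minimal Harris corner response.  R is large only where the smoothed
    structure tensor has strong gradients in two orthogonal directions at
    once (a corner); straight edges give R < 0 and flat regions R ~ 0."""
    gy, gx = np.gradient(img.astype(float))

    def box(a):
        # Box-window smoothing of a tensor entry (a Gaussian window is more usual).
        pad = np.pad(a, win // 2, mode="edge")
        out = np.zeros(a.shape, dtype=float)
        for r in range(win):
            for c in range(win):
                out += pad[r:r + a.shape[0], c:c + a.shape[1]]
        return out / (win * win)

    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace
```

Thresholding this response at a predetermined value and keeping local maxima yields the corner points (+) of FIG. 6; the sign behaviour (corner positive, edge negative, flat near zero) is what makes the simultaneous two-direction gradient test effective.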
Corner detection algorithms are not typically used with retinal images, since the existence of corner points is not commonly found. However, the pre-processing of the image data with a directional filter such as the rotating Gabor kernel results in image data in which the vasculature 2 has been enhanced at a local level in the form of a high intensity gradient “stepping structure”, or lines of corner (or micro-corner) features throughout the vasculature 2. It is this pre-processing with the rotating Gabor kernel which facilitates use of a corner detection algorithm to identify the clusters 12 that outline the vasculature 2 of the retina 3. The advantage of this method, when compared to mapping and extracting the vasculature by use of known tracking algorithms, is a significant decrease in computational loading.
Because the image data has been histogram equalised, the intensity gradient thresholds that are of interest will be approximately the same over the whole image data. The value of the predetermined threshold will be dependent on the size of the area of interest used in the equalisation. Setting the actual value need not be done by any analytical method; it may simply be done by empirical trial and error. In some embodiments, the threshold may be for example between 10% and 50% of the maximum possible gradient. For example, in an embodiment where the maximum gradient is 255, a threshold gradient of 50 might be set, representing about 20% of the maximum possible.
The seventh step 700 of the method is to merge together the clusters 12 in each of the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value. The seventh step 700 thus blurs, or merges, the clusters 12 together to further highlight the vasculature 2. When the image data is represented in an image the effect of the seventh step 700 is to “join the dots” of the clusters 12 along the vasculature 2. Step 700 of the method is useful since the positioning of the clusters 12 in each image data could be slightly different, therefore merging the clusters 12 increases the chance that more clusters 12 overlap at characteristic positions. The vasculature corner points that have been “thresholded” from the image are still in greyscale form, i.e. each data point could still have a different intensity value, although all are above the threshold value. The method may therefore include the further step of converting the “thresholded” image data to a binary image in which all “corner” clusters have a value of 1 and all the other image data points have a value of 0.
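The merging (or "joining the dots") of step 700, followed by the conversion to a binary image, may be sketched as follows; the use of a 4-neighbour (plus-shaped) dilation and the iteration count are illustrative assumptions:

```python
import numpy as np

def merge_clusters(corner_mask, iterations=2):
    """Grow each detected corner cluster with a 4-neighbour (plus-shaped)
    dilation so nearby clusters along the vasculature merge, then return a
    binary image (1 = cluster, 0 = background)."""
    m = corner_mask.astype(bool)
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]   # shift down
        grown[:-1, :] |= m[1:, :]   # shift up
        grown[:, 1:] |= m[:, :-1]   # shift right
        grown[:, :-1] |= m[:, 1:]   # shift left
        m = grown
    return m.astype(np.uint8)
```

Growing each cluster increases the chance that corresponding clusters in the two image data sets overlap during the later cross correlation, even when their detected positions differ slightly.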
The eighth step 800 of the method is to mask clusters 12 in the image periphery that may be attached to non-relevant features, such as eye lashes.
The ninth step 900 of the method is to reduce the size of the first and second merged cluster image data. This is achieved by down sampling the data. In the embodiment of the invention described here the first and second digital vascular image data is down sampled via a pyramid scaling computation. However, it should be appreciated that other known filtering, smoothing, sampling or sub-sampling computation methods could be used.
The first and second retinal image data after merging (step 700), masking (800) and down sampling (900) is represented in FIGS. 5a and 5 b.
The tenth step 1000 of the method is to identify common clusters 12 between the first and second image data where the magnitude of the intensity gradient between one or more adjacent clusters 12 is above a predetermined value. The tenth step 1000 thus identifies common clusters 12 between each of the first and second retinal image data. In the embodiment described here the step 1000 involves cross correlating the image data of FIGS. 5a and 5 b.
The cross correlation of the image data of FIGS. 5a and 5b is illustrated in FIGS. 7 and 1. In the embodiment described here the second image data of FIG. 5b is cross correlated with the first image data of FIG. 5a a number of times. With each subsequent cross correlation the second image data is incrementally rotated relative to the last. The multiple cross correlations may be rotated through approximately 40 degrees. The multiple cross correlations may be rotated through approximately 20 degrees around a pivot point located substantially around the optic disc point of the retina. The multiple cross correlations may, of course, be rotated through approximately any suitable degree around the pivot point located substantially around the optic disc point of the retina.
Data is output as a normalised cross correlation. The peak in cross correlation (i.e. best match between image data) is recorded for each correlation angle and confidence is deduced from a measurement of the peak isolation within the correlation surface 5 for each correlation angle. More specifically, a confidence coefficient is calculated as a weighting of the peak height multiplied by the total slope of the peak “walls” divided by the mean of the rest (i.e. excluding the region of the peak) of the correlation surface 5. A high value in this metric is indicative of a sharp, high, isolated peak, which is a characteristic of close correlation and therefore a high confidence in the accuracy of registration. The threshold at which confidence is toggled to low, which can be observed as an absence in any one isolated correlation peak, is established by weighting the confidence measure so that values less than unity indicate low confidence.
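The rotational cross correlation search of step 1000 may be sketched as follows; nearest-neighbour rotation and a circular FFT correlation are used for brevity, and the peak-isolation confidence weighting described above is omitted from this illustration:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre (no interpolation);
    samples falling outside the source frame become zero."""
    a = np.deg2rad(angle_deg)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: where in the source does each output pixel come from?
    src_x = np.round(np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx).astype(int)
    src_y = np.round(-np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy).astype(int)
    inside = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros(img.shape, dtype=float)
    out[inside] = img[src_y[inside], src_x[inside]]
    return out

def best_alignment(base, inp, angles):
    """Cross-correlate the input with the base at each trial rotation (via
    the FFT) and keep the angle and circular shift giving the strongest
    correlation peak."""
    f_base = np.fft.fft2(base)
    best = None
    for ang in angles:
        corr = np.real(np.fft.ifft2(f_base * np.conj(np.fft.fft2(rotate_nn(inp, ang)))))
        peak = corr.max()
        if best is None or peak > best[3]:
            dy, dx = np.unravel_index(corr.argmax(), corr.shape)
            best = (ang, int(dy), int(dx), float(peak))
    return best
```

The FFT formulation evaluates all circular shifts at once, so only the rotation angles need to be stepped through explicitly; the recorded peak per angle is the quantity from which the confidence coefficient described above would be derived.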
The tenth step 1000 of the method also includes determining the differences in position between the identified common clusters 12 in each of the first and second image data. The position of the cluster 12 includes its angular and translational position (rotational angle and x and y translations). The differences in position between the identified common clusters 12 in each of the first and second image data are termed the translational parameters.
The eleventh step 1100 of the method is to register the common clusters 12 between the first and second image data. Here the translational parameters determined in step 1000 are used to align the second image data to the first image data, i.e. to align the first and second digital retinal images 10 a, 10 b, in the known manner.
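The translational part of the alignment in step 1100 may be sketched, in its simplest form, as a shift by the determined translational parameters (a circular shift standing in for a full affine warp):

```python
import numpy as np

def apply_translation(img, dy, dx):
    """Align the input image to the base image using the determined
    translational parameters (a circular shift is used here for simplicity;
    a full implementation would also apply the determined rotation)."""
    return np.roll(img, (dy, dx), axis=(0, 1))
```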
FIG. 8 illustrates the digital vascular images produced during the processing method and the registered first and second digital vascular images 10 a, 10 b. Top-left and top-right of FIG. 8 illustrate the first and second digital vascular images 10 a, 10 b of FIGS. 2a and 2b after down sampling (step 200), equalisation (step 300) and noise reduction (step 400) (FIGS. 3a and 3b ). Middle-left and middle-right of FIG. 8 illustrate the first and second digital retinal image data after convolution with the rotating Gabor kernel (FIGS. 4a and 4b ). Bottom-left and bottom-right of FIG. 8 illustrate the first and second digital vascular images 10 a, 10 b in their registered positions. The centre-top of FIG. 8 is the correlation surface 5 of FIG. 7. The centre-middle of FIG. 8 illustrates the peak in cross correlation for each correlation angle. The bottom-middle of FIG. 8 illustrates the registered and overlaid first and second digital vascular images 10 a, 10 b.
The method of the invention dramatically reduces the computational requirements of the processor, increases accuracy and allows registration for images obtained across a number of imaging modes. The computational efficiency provided by the method of the invention is a result of the realisation that convolving the image data with a rotating Gabor kernel produces image data in which the vasculature has been modified to provide a high intensity gradient “stepping” structure of image data clusters that track the vasculature. The creation of these corner (or micro-corner) features in the image data facilitates the use of a known corner detection algorithm to extract the position of the clusters for comparison and registration. Convolving the image data with a rotating Gabor kernel and using a corner detection algorithm in this manner reduces the computational requirements of the method. Furthermore, using a corner detection algorithm increases the accuracy of the registration process, since the corners (and vasculature) can be accurately determined compared to known retinal image registration techniques.
Also, since the method uses feature sets (distances between vasculature points) that are common across different retinal imaging modes (e.g. reflectance, auto-fluorescence etc.), inter-mode registration (i.e. auto-fluorescence to reflectance images) is possible.
Modifications and improvements may be made to the above without departing from the scope of the present invention. For example, although the method has been described and illustrated with use of retinal images, it should be appreciated that other digital vascular images may be used, such as vascular images of an organ or body part of a human or an animal. The vasculature of the organ or the body part may include the blood vessels, arteries and veins therein. The vasculature of the organ or the body part may also include the circulatory system thereof.
Furthermore, although the method has been described with use of a Gabor kernel, it should be appreciated that any suitably shaped kernel, filter, filter matrix, window, template or mask may be used. For example a Gaussian filter or kernel may be used. Also, although a rotating Gabor kernel has been described that rotates through 360 degrees (with eight convolutions), it should be appreciated that the kernel need not “rotate”, it need only be convolved with the image data in two orthogonal dimensions (i.e. a first convolution and then a second convolution at an angle of 90 degrees from the first). Rotating the kernel through 360 degrees does, however, improve the enhancement of the vasculature, as described above, and is preferred. Furthermore, although the method has been described above as using a single kernel convolved with the image data a number of times, it should be appreciated that any number of different kernels could be convolved with the image data at any number of angles relative thereto.
Also, although the method has been described and illustrated above as registering two retinal images, it should be appreciated that the method may comprise the step of providing a plurality of digital vascular image data (and a plurality of digital vascular images) and processing each digital image according to the above described method to register all image data and images.

Claims (25)

The invention claimed is:
1. A method of processing digital vascular images, the method comprising the steps of:
providing first and second digital vascular image data;
processing the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value;
identifying common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value; and
registering the common clusters between the first and second image data.
2. The method as claimed in claim 1, wherein the step of processing the first and second image data with said directional filter includes multiple convolutions of the image data with at least one filter kernel, the at least one filter kernel being oriented differently for each convolution.
3. The method as claimed in claim 2, wherein the filter is incrementally rotated relative to the image data for each successive convolution.
4. The method as claimed in claim 2, wherein the filter is effectively rotated through 360 degrees relative to the image data over a number of convolutions.
5. The method as claimed in claim 1, wherein the directional filter is a Gabor filter.
6. The method as claimed in claim 1, wherein the step of identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is above a predetermined value includes use of a corner detection algorithm.
7. The method as claimed in claim 6, wherein the corner detection algorithm operates by identifying clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster in two orthogonal directions is simultaneously above said predetermined value.
8. The method as claimed in claim 6, wherein the predetermined value is between 10% and 50% of a maximum possible gradient value.
9. The method as claimed in claim 1, wherein the step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is above a predetermined value includes cross correlating the convolved first and second image data.
10. The method as claimed in claim 1, wherein the step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is above a predetermined value includes cross correlating the identified clusters in the first and second image data.
11. The method as claimed in claim 1, wherein the step of identifying common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is above a predetermined value includes multiple cross correlations of the convolved first and second image data with different orientations.
12. The method as claimed in claim 11, wherein the multiple cross correlations are rotated in steps through approximately 20 degrees or more around a pivot point located substantially around the optic disc point of the retina.
13. The method as claimed in claim 10, wherein the step of cross correlating the convolved first and second image data includes the further step of determining differences in position between the identified common clusters in each of the first and second image data.
14. The method as claimed in claim 13, wherein the position of the cluster includes its angular position and/or its translational position, the differences in position between the identified common clusters in each of the first and second image data being termed the translational parameters, wherein the step of registering the common clusters between the first and second image data uses the determined translational parameters to align the first and second image data.
15. The method as claimed in claim 1, wherein the method includes the step of merging clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is above a predetermined value.
16. The method as claimed in claim 1, wherein an averaging filter is used to adjust the intensity value of the image data between clusters to an average intensity value of the clusters in that region.
17. The method as claimed in claim 1, wherein the method includes the step of creating first and/or second digital vascular images from the first and second digital vascular image data.
18. The method as claimed in claim 1, wherein the method includes the step of creating a digital image of the registered common clusters between the first and second image data.
19. The method as claimed in claim 1, comprising providing a plurality of digital vascular image data and processing each digital image data to register all the common clusters between all the image data.
20. The method as claimed in claim 1, wherein the digital vascular images are vascular images of an organ or a body part of a human or an animal.
21. The method as claimed in claim 1, wherein the digital vascular images are retinal images of a human or an animal.
22. The method as claimed in claim 20, wherein said digital vascular images are images obtained using different modes of imaging.
23. An image processing apparatus comprising:
a digital vascular image provision module arranged to provide first and second digital vascular image data; and
a processor configured to
provide first and second digital vascular image data;
process the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
identify clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value;
identify common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value; and
register the common clusters between the first and second image data.
24. A scanning laser ophthalmoscope comprising: an image processing apparatus comprising
a digital vascular image provision module arranged to provide first and second digital vascular image data; and
a processor configured to
provide first and second digital vascular image data;
process the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
identify clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value;
identify common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value; and
register the common clusters between the first and second image data.
25. A non-transitory computer program product encoded with instructions that, when run on a computer, cause the computer to
provide first and second digital vascular image data;
process the first and second image data with a two-dimensional, directional filter that has the effect of producing clusters of orthogonally adjacent image data points in which the magnitude of an intensity gradient between each orthogonally adjacent image data point is less than a predetermined value;
identify clusters in each of the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value;
identify common clusters between the first and second image data where the magnitude of the intensity gradient between at least one adjacent cluster is greater than a predetermined value; and
register the common clusters between the first and second image data.
US14/654,249 2013-02-19 2014-02-19 Relating to image processing Active US9454817B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB201302887A GB201302887D0 (en) 2013-02-19 2013-02-19 Improvements in or relating to image processing
GB1302887.3 2013-02-19
PCT/GB2014/050480 WO2014128456A1 (en) 2013-02-19 2014-02-19 Improvements in or relating to image processing

Publications (2)

Publication Number Publication Date
US20150324966A1 US20150324966A1 (en) 2015-11-12
US9454817B2 true US9454817B2 (en) 2016-09-27

Family

ID=48048612

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/654,249 Active US9454817B2 (en) 2013-02-19 2014-02-19 Relating to image processing

Country Status (13)

Country Link
US (1) US9454817B2 (en)
EP (1) EP2959455B1 (en)
JP (1) JP6446374B2 (en)
KR (1) KR102095723B1 (en)
CN (1) CN104919491B (en)
AU (1) AU2014220480B2 (en)
BR (1) BR112015014769A2 (en)
CA (1) CA2895297C (en)
DK (1) DK2959455T3 (en)
ES (1) ES2800627T3 (en)
GB (1) GB201302887D0 (en)
HK (1) HK1218984A1 (en)
WO (1) WO2014128456A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019034231A1 (en) 2017-08-14 2019-02-21 Optos Plc Retinal position tracking
US11253146B2 (en) 2016-10-11 2022-02-22 Optos Plc Ophthalmic device
US11523736B2 (en) 2017-11-16 2022-12-13 Victor X. D. YANG Systems and methods for performing gabor optical coherence tomographic angiography

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016189946A (en) * 2015-03-31 2016-11-10 富士フイルム株式会社 Medical image alignment device, method, and program
JP6572615B2 (en) * 2015-04-30 2019-09-11 株式会社ニデック Fundus image processing apparatus and fundus image processing program
JP6710759B2 (en) * 2016-01-14 2020-06-17 プリズマティック、センサーズ、アクチボラグPrismatic Sensors Ab Measuring circuit for an X-ray detector and corresponding method and X-ray imaging system
CN109166124B (en) * 2018-11-20 2021-12-14 中南大学 Retinal blood vessel morphology quantification method based on connected region
JP7410481B2 (en) 2019-12-10 2024-01-10 国立大学法人 筑波大学 Image processing method, scanning imaging method, image processing device, control method thereof, scanning imaging device, control method thereof, program, and recording medium
CN110960187A (en) * 2019-12-16 2020-04-07 天津中医药大学第二附属医院 Evaluation method of hypertension retinal vascular diseases with different traditional Chinese medicine syndromes
CN111110332B (en) * 2020-01-19 2021-08-06 汕头市超声仪器研究所股份有限公司 Optimization method for puncture needle development enhanced image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2064988A1 (en) 2007-11-08 2009-06-03 Kowa Company, Ltd. Device and method for creating retinal fundus maps
US20120195481A1 (en) 2011-02-01 2012-08-02 Universidade Da Coruna Method, apparatus, and system for automatic retinal image analysis

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7583827B2 (en) * 2001-10-03 2009-09-01 Retinalyze Danmark A/S Assessment of lesions in an image
JP4909378B2 (en) * 2009-06-02 2012-04-04 キヤノン株式会社 Image processing apparatus, control method therefor, and computer program
CN102439627B (en) * 2010-02-26 2014-10-08 松下电器(美国)知识产权公司 Pupil detection device and pupil detection method
JP5701024B2 (en) * 2010-11-26 2015-04-15 キヤノン株式会社 Image processing apparatus and method
CA2825169A1 (en) * 2011-01-20 2012-07-26 University Of Iowa Research Foundation Automated determination of arteriovenous ratio in images of blood vessels
JP2012254164A (en) * 2011-06-08 2012-12-27 Canon Inc Ophthalmologic information processing system and method, and program


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Abràmoff, Michael D., et al.; "Retinal Imaging and Image Analysis"; IEEE Reviews in Biomedical Engineering, vol. 3; 2010; pp. 169-208.
Can, Ali, et al.; "A Feature-Based, Robust, Hierarchical Algorithm for Registering Pairs of Images of the Curved Human Retina"; IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 3; Mar. 2002; pp. 347-364.
Hu, Zhihong, et al.; "Multimodal Retinal Vessel Segmentation from Spectral-Domain Optical Coherence Tomography and Fundus Photography"; IEEE Transactions on Medical Imaging, vol. 31, No. 10; Oct. 10, 2012; pp. 1900-1911.
Klemencic, Ales, International Search Report, prepared for PCT/GB2014/050480, as mailed May 22, 2014, four pages.
Li, Qin, et al.; "A Multiscale Approach to Retinal Vessel Segmentation Using Gabor Filters and Scale Multiplication"; 2006 IEEE International Conference on Systems, Man, and Cybernetics; Taipei, Taiwan; Oct. 8-11, 2006; pp. 3521-3527.
Pinz, Axel, et al.; "Mapping the Human Retina"; IEEE Transactions on Medical Imaging, vol. 17, No. 4; Aug. 1998; pp. 606-619.
Rangayyan, Rangaraj M., et al.; "Detection of Blood Vessels in the Retina using Gabor Filters"; IEEE; Apr. 2007; pp. 717-720.
Ritter, Nicola, et al.; "Registration of Stereo and Temporal Images of the Retina"; IEEE Transactions on Medical Imaging, vol. 18, No. 5; May 1999; pp. 404-418.
Zana, F., et al.; "A Multimodal Registration Algorithm of Eye Fundus Images using Vessels Detection and Hough Transform"; IEEE Transactions on Medical Imaging, vol. 18, No. 5; May 1999; pp. 419-428.

Cited By (4)

Publication number Priority date Publication date Assignee Title
US11253146B2 (en) 2016-10-11 2022-02-22 Optos Plc Ophthalmic device
WO2019034231A1 (en) 2017-08-14 2019-02-21 Optos Plc Retinal position tracking
US11547293B2 (en) 2017-08-14 2023-01-10 Optos Plc Retinal position tracking
US11523736B2 (en) 2017-11-16 2022-12-13 Victor X. D. YANG Systems and methods for performing gabor optical coherence tomographic angiography

Also Published As

Publication number Publication date
EP2959455B1 (en) 2020-05-27
JP6446374B2 (en) 2018-12-26
AU2014220480B2 (en) 2019-05-23
KR102095723B1 (en) 2020-04-01
GB201302887D0 (en) 2013-04-03
JP2016514974A (en) 2016-05-26
CN104919491B (en) 2018-05-04
KR20150119860A (en) 2015-10-26
CA2895297A1 (en) 2014-08-28
AU2014220480A1 (en) 2015-07-02
WO2014128456A1 (en) 2014-08-28
CA2895297C (en) 2020-09-22
EP2959455A1 (en) 2015-12-30
DK2959455T3 (en) 2020-06-22
BR112015014769A2 (en) 2017-07-11
ES2800627T3 (en) 2021-01-04
HK1218984A1 (en) 2017-03-17
CN104919491A (en) 2015-09-16
US20150324966A1 (en) 2015-11-12

Similar Documents

Publication Publication Date Title
US9454817B2 (en) Relating to image processing
Sevastopolsky Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network
Baig et al. Deep learning approaches towards skin lesion segmentation and classification from dermoscopic images-a review
EP1918852B1 (en) Image processing method and image processor
Fan et al. Optic disk detection in fundus image based on structured learning
Panda et al. New binary Hausdorff symmetry measure based seeded region growing for retinal vessel segmentation
CN108009472B (en) Finger back joint print recognition method based on convolutional neural network and Bayes classifier
EP2092460A1 (en) Method and apparatus for extraction and matching of biometric detail
WO2013087026A1 (en) Locating method and locating device for iris
Wang et al. A fast method for automated detection of blood vessels in retinal images
Zhang et al. Robust and fast vessel segmentation via Gaussian derivatives in orientation scores
Vlachos et al. Finger vein segmentation from infrared images based on a modified separable Mumford-Shah model and local entropy thresholding
Ramli et al. Feature-based retinal image registration using D-Saddle feature
CN112926516B (en) Robust finger vein image region-of-interest extraction method
Yedidya et al. Tracking of blood vessels in retinal images using Kalman filter
WO2020140380A1 (en) Method and device for quickly dividing optical coherence tomography image
CN111145117B (en) Spot detection method and system based on soft mathematical morphology operator
Wu et al. Retinal vessel radius estimation and a vessel center line segmentation method based on ridge descriptors
Peng et al. Retinal blood vessels segmentation using the radial projection and supervised classification
Kulkarni et al. ROI based Iris segmentation and block reduction based pixel match for improved biometric applications
Ng et al. Iris recognition algorithms based on texture analysis
Vlachos et al. Fuzzy segmentation for finger vessel pattern extraction of infrared images
Turukmane et al. Early-stage prediction of melanoma skin cancer segmentation by U-Net
Hsu et al. Rat brain registration using improved speeded up robust features
Marrugo Effect of Speckle Filtering in the Performance of Segmentation of Ultrasound Images Using CNNs

Legal Events

Date Code Title Description
AS Assignment

Owner name: OPTOS PLC, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CLIFTON, DAVID;PINNOCK, RALPH ALLEN;REEL/FRAME:036118/0672

Effective date: 20150715

STCF Information on status: patent grant

Free format text: PATENTED CASE

SULP Surcharge for late payment
REFU Refund

Free format text: REFUND - PAYMENT OF FILING FEES UNDER 1.28(C) (ORIGINAL EVENT CODE: R1461); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8