WO2004079637A1 - Method for the recognition of patterns in images affected by optical degradations and application thereof in the prediction of visual acuity from a patient's ocular aberrometry data - Google Patents

Method for the recognition of patterns in images affected by optical degradations and application thereof in the prediction of visual acuity from a patient's ocular aberrometry data

Info

Publication number
WO2004079637A1
WO2004079637A1 (PCT/ES2004/070012)
Authority
WO
WIPO (PCT)
Prior art keywords
optical
procedure
image
images
patterns
Prior art date
Application number
PCT/ES2004/070012
Other languages
Spanish (es)
French (fr)
Inventor
Rafael Navarro Belsue
Oscar Nestares Garcia
Original Assignee
Consejo Superior De Investigaciones Científicas
Priority date
Filing date
Publication date
Priority claimed from ES200300562A external-priority patent/ES2247873B1/en
Application filed by Consejo Superior De Investigaciones Científicas filed Critical Consejo Superior De Investigaciones Científicas
Publication of WO2004079637A1 publication Critical patent/WO2004079637A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/103: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining refraction, e.g. refractometers, skiascopes
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30041: Eye; Retina; Ophthalmic

Definitions

  • The invention is directed to all areas where automatic recognition of patterns in images is necessary: in general, automatic inspection applications using optical means, and in particular surveillance, process monitoring, quality control, simulation of the visual process for clinical purposes, etc. Its application is especially indicated when the observation conditions do not guarantee good image quality. It is a numerical pattern recognition procedure based on visual perception that can be carried out by computational methods. The application to the prediction of visual acuity is directed to the area of health, specifically ophthalmology, optometry and ophthalmic optics. In this case the numerical procedure combines optical and psychophysical models of visual perception with the pattern recognition procedure in images described above.
  • Pattern recognition in images is an area of great interest within automatic image analysis, with multiple applications. Notable among them are optical character recognition, target recognition in military applications, classification of biological species observed by optical means, active surveillance with automatic recognition of objects of interest, etc. Of special interest in the field of ophthalmology and optometry is the prediction of the patient's visual acuity from ocular aberrometry data.
  • AV: visual acuity
  • The present invention consists of a method for recognizing patterns, drawn from a finite and predetermined set, in images subjected to optical degradations and noise.
  • This set of patterns is stored in digital format, in grayscale (intensities between black and white) or in color.
  • The observed image can be acquired by an optical image capture system (for example, in surveillance applications), or it can be a simulation of an optical capture system (for example, a simulation of the retinal image of an object from the optical data of an eye model).
  • The procedure is indicated for observed images that have undergone an a priori unknown optical degradation, introduced either by the capture system (camera, eye, etc.) or by factors external to it (atmospheric turbulence, etc.).
  • Figure 1 shows a block diagram presenting the data and processes carried out by this procedure, which are described in more detail below.
  • The input is an optically degraded digital image, which may come from a scene observed by an optical image capture system and converted by an appropriate procedure to a digital image, or be the result of a numerical simulation of said capture process.
  • This digital image will be compared with the digital images of the default pattern set, using digital computers.
  • First, the degraded image is transformed by applying a multiscale/multiorientation filter bank to obtain a visual representation of it.
  • The same procedure is applied to the images containing the set of preset patterns; this step can be executed in advance, with the stored visual representation retrieved directly from a suitable device.
  • The method is flexible as to the type of filter used (Gabor, Gaussian derivatives, Laplacians, etc.) and the number of filters and arrangement of the scales and orientations, which allows it to be adapted to the specific needs of each application.
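As an illustration, the multiscale/multiorientation decomposition described above can be sketched with a small Gabor filter bank in Python. This is a minimal sketch: the octave spacing, envelope widths, kernel size and function names are assumptions for illustration, not parameters prescribed by the patent.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Complex Gabor kernel tuned to spatial frequency `freq` (cycles/pixel)
    at orientation `theta` (radians), with a Gaussian envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_bank(image, n_scales=4, n_orients=4, size=21):
    """One filtered copy of `image` per (scale, orientation) channel:
    the 'visual representation' used by the recognition procedure."""
    channels = []
    for s in range(n_scales):
        freq = 0.25 / (2.0 ** s)        # octave-spaced peak frequencies (assumed)
        sigma = 1.0 / (2.0 * freq)      # envelope width tied to the scale (assumed)
        for o in range(n_orients):
            theta = o * np.pi / n_orients
            k = gabor_kernel(size, freq, theta, sigma)
            # circular convolution via the FFT, using the even (cosine) part
            spec = np.fft.fft2(image) * np.fft.fft2(k.real, image.shape)
            channels.append(np.real(np.fft.ifft2(spec)))
    return channels
```

The same bank would be applied to the observed image and, possibly offline, to each stored pattern.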
  • Next, the probability of having generated the observed image is calculated for each pattern of the preset set.
  • To do so, a Bayesian method is applied that makes use of the visual representation of the images, and in which an approximation of the unknown degradation affecting the observed image is implicitly estimated.
  • In this visual representation we introduce the further simplification that the frequency response of the unknown optical degradation is constant within the frequency range that each channel lets through. This simplification turns an underdetermined system into a determined one, making it possible to calculate the probabilities of having generated the observed image.
  • Under the previous assumption it is possible to formulate the following simplified observation model for the versions of the observed image filtered with each of the Gabor filters of the representation scheme:

    o_i(x) = h(x) * c_i(x) + n_i(x) ≈ α_i c_i(x − u_i) + n_i(x),  i = 1, ..., N_c

  • Here o_i(x) is the observed degraded image filtered with the i-th Gabor filter, g_i(x), and contaminated with the additive noise n_i(x); h(x) is the impulse response of the unknown optical degradation (with * denoting convolution); c(x) is the image containing the original, undegraded pattern; and c_i(x) is that pattern filtered with the i-th Gabor filter.
  • The pair (α_i, u_i) are the constant multiplicative factor and the global displacement that approximate the frequency response of the optical degradation within the frequency range passed by the i-th channel; finally, N_c is the number of Gabor channels of the representation.
  • K is a normalization constant.
  • the a posteriori probability is equal to the likelihood, or conditional probability of the observations given the model parameters, multiplied by the a priori probability of the model parameters.
  • Computing the posterior requires the a priori probability of the original undegraded pattern c, which is determined by the fact that the input image must correspond to one of the patterns in the preset pattern set.
  • Consequently, this a priori probability can be expressed as a sum of delta functions, each associated with a pattern, with a weight given by the a priori probability of that pattern appearing in the image.
  • Combining these terms, the posterior probability is given by expression (7).
  • Bayesian recognition consists, first, of choosing the degradation parameters that maximize the probability in (7) for each pattern in the set, and then choosing as the recognized pattern the one with the highest probability, which is precisely the one corresponding to the global maximum of the a posteriori probability.
  • Obtaining the parameters (α_i, u_i) that maximize expression (7) can be done individually for each channel, after which the maximum values for each channel are multiplied to obtain the probability. For a particular channel i, and assuming Gaussian white noise, maximizing the probability is equivalent to minimizing the following error function:

    E_i(α_i, u_i) = Σ_x [o_i(x) − α_i c_i(x − u_i)]²
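The per-channel minimization described above can be sketched numerically, assuming a constant gain and global shift per channel under Gaussian white noise. The FFT-based brute-force search over circular shifts and the function name are illustrative choices, not the patent's prescribed algorithm.

```python
import numpy as np

def fit_channel(o, c):
    """For one channel, find the shift u and gain alpha minimizing
    sum_x |o(x) - alpha * c(x - u)|^2 (equivalent to maximizing the
    Gaussian-noise likelihood). Returns (alpha, u, residual_error)."""
    O = np.fft.fft2(o)
    C = np.fft.fft2(c)
    # circular cross-correlation: xcorr[u] = sum_x o(x) * c(x - u)
    xcorr = np.real(np.fft.ifft2(O * np.conj(C)))
    u = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    c_energy = np.sum(c * c)
    alpha = xcorr[u] / c_energy            # least-squares gain at the best shift
    c_shifted = np.roll(c, u, axis=(0, 1))
    err = np.sum((o - alpha * c_shifted) ** 2)
    return alpha, u, err
```

For a fixed shift the optimal gain has the closed form above, so only the shift needs to be searched; the cross-correlation evaluates all circular shifts at once.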
  • The result of the Bayesian method is a probability like the previous one, associated with each of the patterns in the set. This information can be used in many ways, depending on the application. One of the most interesting possibilities is to select the pattern with the highest probability as the pattern recognized from the observed image. It is also possible to reject the hypothesis that any of the patterns is present in the image if the calculated probabilities do not exceed confidence thresholds. Additional information provided by this method is an estimate of the most likely degradation parameters, which can be used to recover an approximation of the unknown optical degradation that affected the observed image.
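The final selection stage described above can be sketched as follows, given the already-minimized per-channel errors for each pattern. The Gaussian-noise likelihood, the pattern priors and the confidence-threshold rejection follow the description; the function name, noise variance and rejection rule are illustrative assumptions.

```python
import numpy as np

def recognize(channel_errors, priors, noise_var=1.0, min_conf=0.5):
    """channel_errors[k][i]: minimized error E_i for pattern k, channel i.
    Under Gaussian white noise the log-likelihood of pattern k is
    -sum_i E_i / (2 * noise_var); the log posterior adds log prior.
    Returns (index of recognized pattern, or None if rejected; posteriors)."""
    errors = np.asarray(channel_errors, dtype=float)
    log_post = -errors.sum(axis=1) / (2.0 * noise_var) + np.log(priors)
    log_post -= log_post.max()        # stabilize before exponentiating
    post = np.exp(log_post)
    post /= post.sum()                # normalization constant K
    best = int(np.argmax(post))
    if post[best] < min_conf:         # confidence threshold: reject hypothesis
        return None, post
    return best, post
```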
  • The present invention can be applied in a variety of practical situations, including: 1. Optical character recognition (OCR) in blurred or degraded images.
  • The specific procedure is shown in Figure 2. It starts from input data, which are used to establish a personalized model of the patient's eye.
  • This model consists of an optical part, a retinal part consisting of cone sampling, and a neural representation of the image, which are applied sequentially.
  • The optical model starts from the optical aberrations to obtain the optical transfer function (OTF), which acts as a linear filter on the input test image.
  • The OTF filter is modified to incorporate the effect of retinal photoreceptor sampling (cones in photopic vision and rods in scotopic vision).
  • The second part of the model, applied to the filtered image, consists of a pyramidal multiscale/multiorientation decomposition through a filter bank (Gabor, Gaussian derivatives, steerable pyramid, etc.).
  • In this example a Gabor filter bank has been chosen [O. Nestares, R. Navarro, J. Portilla, A. Tabernero (1998), "Efficient spatial-domain implementation of a multiscale image representation based on Gabor functions", J. Electronic Imaging 7: 166-173], followed by normalization by the low-pass residue to convert to contrast units. The filter frequencies are set so that the maximum frequency matches the standard models.
  • This decomposition constitutes a schematic but realistic model of the representation of the image in the visual cortex.
  • a contrast threshold is applied, so that contrast values that do not exceed the threshold are not considered.
  • The complete model can be applied to any type of image, giving rise to a cortical representation of it, which in turn constitutes the input to the pattern recognition procedure; this procedure must be robust, or present some invariance, against the presence of optical degradations (aberrations).
  • the output is the character (optotype) of the alphabet that most likely corresponds to the input image.
  • the entire procedure is applied to a set of images of input optotypes simulating the clinical procedure to obtain visual acuity.
  • The optical model is determined by the wave aberration, which in this case would be described by the coefficients of an expansion in Zernike polynomials, provided directly by the aberrometer, and by the parameters that describe the patient's Stiles-Crawford effect.
  • This effect is equivalent to an apodizing filter described by a Gaussian of a certain width, centered on certain coordinates in the plane of the pupil.
  • From these data the optical transfer function (OTF) is obtained, as the autocorrelation of the generalized pupil function defined by the wavefront in the pupil.
  • the retinal optical image of an input test image is obtained by a filtering operation in the spatial frequency domain, the OTF being the filter.
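This OTF computation can be sketched numerically. Under standard Fourier-optics assumptions the OTF is the normalized autocorrelation of the generalized pupil function, conveniently obtained via the point spread function; the function name, sampling and units are illustrative assumptions, not the patent's exact recipe.

```python
import numpy as np

def otf_from_wavefront(W, pupil_mask, wavelength):
    """OTF as the normalized autocorrelation of the generalized pupil
    function P = A * exp(i * 2*pi * W / wavelength), computed via the PSF:
    PSF = |FFT(P)|^2, OTF = FFT(PSF) / total. W and wavelength share units
    (e.g. microns); the amplitude mask A could encode Stiles-Crawford
    apodization as a decentred Gaussian."""
    P = pupil_mask * np.exp(2j * np.pi * W / wavelength)
    psf = np.abs(np.fft.fft2(P)) ** 2
    otf = np.fft.fft2(psf)
    return otf / otf[0, 0]            # normalize so that OTF(0) = 1
```

Filtering a test image then amounts to multiplying its spectrum by this OTF and inverse-transforming, as described above.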
  • In this example a monochromatic case has been considered, although its extension to the polychromatic case is immediate if the optical aberrations for various wavelengths in red, green and blue are known.
  • In that case, our method consists of applying to each chromatic channel of the RGB input image the corresponding OTF for each of those 3 wavelengths, obtaining the retinal images for the 3 chromatic components; a transformation is then carried out to the CIELAB chromatic coordinates, which best model the color behavior of the visual system. From there, the image corresponding to lightness L is used for the rest of the procedure.
  • the OTF is modified due to the spectral overlap produced by the sampling of the photoreceptors. In this example the case of photopic vision has been considered, so the sampling is given by the distribution of cones.
  • the input data are of great importance for the realization of the model, since they are the ones that will characterize that particular patient or eye.
  • In the example of Figure 3 it has been assumed that the only data available are aberrometry data, in which case the prediction will be more reliable in patients whose retina and visual cortex are normal and therefore impose no constraint in this regard.
  • test images containing calibrated optotypes are introduced so that their sizes correspond to specific values of visual acuity.
  • the image is analyzed by applying the pattern recognition to each optotype, assigning a value of 1 or 0 (or Boolean variables true or false) in case of success or failure, respectively, in the recognition.
  • A threshold is established for the number of failures allowed, beyond which the task is deemed failed for a certain optotype size, corresponding to a certain visual acuity value.
  • Both the optotypes and the threshold of hits will be similar to those used in the visual acuity measurement procedure employed in actual clinical practice (a typical percentage of hits is at least 75%).
  • Figure 2 shows an example of a test image consisting of 4 rows, each containing the same number of characters (optotypes).
  • the size of the characters in each row corresponds to a certain value of visual acuity.
  • The full size of the character is 5 times the thickness of the stroke, and the stroke thickness in turn determines the value of visual acuity.
  • In this example a decimal scale has been used, such that unit visual acuity corresponds to a stroke that subtends one minute of arc of visual field according to the optical model.
  • the lines of the test image correspond to visual acuities of 0.6, 0.8, 1 and 1.2 respectively (see figure).
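The optotype geometry above reduces to simple arithmetic: on a decimal scale the stroke subtends 1/AV minutes of arc and the full letter 5/AV. A small helper (hypothetical; the 6 m viewing distance is an assumed example, not specified by the patent) converts this to a physical letter size:

```python
import math

def optotype_size_mm(decimal_acuity, distance_m=6.0):
    """Letter height for a given decimal visual acuity: the stroke subtends
    1/AV arcmin and the full character is 5 strokes, so the letter subtends
    5/AV arcmin. Returned in millimetres at the given viewing distance."""
    stroke_arcmin = 1.0 / decimal_acuity
    letter_rad = math.radians(5.0 * stroke_arcmin / 60.0)
    return 1000.0 * distance_m * math.tan(letter_rad)
```

For example, at AV = 1 and 6 m the letter subtends 5 arcmin, roughly 8.7 mm; halving the acuity doubles the letter size.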
  • the characters are designed on purpose to meet the above specifications.
  • a reduced 16-character alphabet is used for several reasons.
  • On the one hand, visual acuity tests do not use all the characters of the alphabet, having a predilection for a subset; on the other, reducing the number of possible characters to 16 saves calculation time in the recognition stage.
  • the alphabet can be chosen so that it is identical to that used in the actual procedure used in the specific clinic, as already mentioned.
  • Pattern recognition is performed by extracting, from the image resulting from applying the models, the portion containing each of the optotypes.
  • the procedure consists of several stages:
  • The procedure begins with the upper line (largest scale; standard visual acuity, i.e. 1). If the line is passed (at least 75% of hits), the procedure moves to the next line up (1.2). If there are more failures than allowed, it moves to the line below (0.8). The procedure stops when, in an ascending trajectory, the threshold of hits is not reached, or when, in a descending trajectory, it is reached, returning as the visual acuity value that of the last line passed.
  • the procedure can be monocular or binocular.
  • In the binocular case, the procedure consists of suitably combining the results of both eyes. This can be done either by a final stage in which the results of both eyes are checked against each other to eliminate errors, or by merging the visual information contained in each of the images, optimizing the result.
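The staircase over optotype lines described above can be sketched as follows. Here `hit_fraction` is a hypothetical stand-in for the per-line recognition stage (the fraction of optotypes correctly recognized on a line), and the 75% threshold and line acuities follow the text.

```python
def measure_acuity(hit_fraction, lines=(0.6, 0.8, 1.0, 1.2),
                   start=1.0, threshold=0.75):
    """Staircase: start at the line of decimal acuity `start`, move up while
    the hit rate meets `threshold`, move down while it does not; return the
    highest acuity line passed, or None if no line is passed."""
    idx = list(lines).index(start)
    passed = hit_fraction(lines[idx]) >= threshold
    step = 1 if passed else -1
    best = lines[idx] if passed else None
    idx += step
    while 0 <= idx < len(lines):
        ok = hit_fraction(lines[idx]) >= threshold
        if step == 1:
            if not ok:
                break                  # ascending: stop at the first failed line
            best = lines[idx]
        elif ok:
            best = lines[idx]          # descending: stop at the first passed line
            break
        idx += step
    return best
```

A subject who resolves the 0.8 line but not the 1.0 line would start at 1.0, fail, drop to 0.8, pass, and be assigned an acuity of 0.8.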

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention relates to a pattern-recognition method involving the use of a decomposition, based on channels tuned to different frequencies and orientations, of the original patterns and of the observed image. According to the invention, simplifications can be made to the degradation model using the aforementioned decomposition, such that, from the observed image, it is possible to calculate the probability of each of the original images having generated said observation. The inventive method is Bayesian and, as such, can include statistical information about the most probable degradations and the most abundant patterns, as well as about the cost of making an error on a given pattern. In this way, the invention provides a more reliable pattern recognition method which is adapted to the requirements of each application and which can, above all, withstand possible optical degradations of the image. One application of particular interest to the optics and ophthalmology sector consists in predicting the visual acuity of a patient from data on the optical aberrations of the eye, provided by an aberrometer.

Description

TITLE
METHOD FOR THE RECOGNITION OF PATTERNS IN IMAGES AFFECTED BY OPTICAL DEGRADATIONS AND ITS APPLICATION TO THE PREDICTION OF VISUAL ACUITY FROM THE PATIENT'S OCULAR ABERROMETRY DATA
TECHNICAL FIELD
The invention is directed to all areas where automatic recognition of patterns in images is necessary: in general, automatic inspection applications using optical means, and in particular surveillance, process monitoring, quality control, simulation of the visual process for clinical purposes, etc. Its application is especially indicated when the observation conditions do not guarantee good image quality. It is a numerical pattern recognition procedure based on visual perception that can be carried out by computational methods. The application to the prediction of visual acuity is directed to the area of health, specifically ophthalmology, optometry and ophthalmic optics. In this case the numerical procedure combines optical and psychophysical models of visual perception with the pattern recognition procedure in images described above.
STATE OF THE ART
Pattern recognition in images is an area of great interest within automatic image analysis, with multiple applications. Notable among them are optical character recognition, target recognition in military applications, classification of biological species observed by optical means, active surveillance with automatic recognition of objects of interest, etc. Of special interest in the field of ophthalmology and optometry is the prediction of the patient's visual acuity from ocular aberrometry data.
Due to the great interest in this technique, many improvements have been developed over the original optical pattern recognition methods, which were based on optimal correlation filters. Many of these improvements consist of preprocessing the observed image to correct certain factors that make it deviate from the original patterns, such as geometric distortion, scale changes, etc.
Another set of important improvements has aimed at recognizing patterns in images affected by a large amount of noise, for which probabilistic and statistical models are very useful. However, very few methods have been proposed to recognize patterns in images affected by significant optical degradations unknown a priori, and methods for treating optically degraded images with added noise are practically non-existent. Most optical recognition methods assume that the information about the optical degradation is known, and that it can therefore be compensated in a process prior to recognition, even though cases in which this information is not available are frequent. It is therefore necessary to develop specific recognition methods that are robust to unknown optical degradations in addition to noise. For example, apart from the optical degradations introduced by the image capture system, the atmosphere can introduce optical degradations that are in principle variable, random and unknown.
In this regard, previous work showed that, using image representation methods inspired by human vision, it was possible to perform pattern recognition in the presence of certain optical degradations, mainly defocus [A. Vargas, J. Campos, R. Navarro (2000). "Invariant pattern recognition against defocus based on subband decomposition of the filter", Optics Communications 185: 33-40]. The present procedure is likewise based on image representation methods inspired by human vision, but by combining them with a more general Bayesian model it can treat cases in which the degradations are not limited to defocus, and noise is also included in the model in a natural way. The present procedure is therefore broader and more generic, giving rise to the present invention.
On the other hand, the measurement of visual acuity, AV, is the most widespread form of evaluating visual quality in patients, and has long been the best and most efficient method of performing subjective refraction. It has therefore become the most practiced visual test, carried out routinely in ophthalmological clinics, optometric establishments, and even in general and legal medicine. The AV test allows quick and precise detection of both retinal or neural problems (amblyopia) and optical defects (myopia, astigmatism, etc.). In the first case, visual acuity is only the first step of the diagnosis, so other tests such as fundus examinations must also be performed. From the optical point of view, however, AV alone makes it possible to obtain the patient's optimal prescription, since it is very sensitive to any optical defect. Despite its widespread use over many years, no model has existed until now that allows visual acuity values to be predicted for specific patients, for several reasons. First, visual acuity is obtained by assigning the patient a visual task consisting of recognizing the objects shown.
Since the human eye, even the emmetropic one, presents a significant amount of optical aberrations [Liang, J. and D. R. Williams (1997). "Aberrations and retinal image quality of the normal human eye." J. Opt. Soc. Am. A 14(11): 2873-2883], conventional pattern recognition methods fail as models of the visual system, since they predict an AV far below that of normal subjects, who seem to tolerate moderate amounts of aberrations without major problems. Therefore, the only way to predict AV is to build realistic models of the different stages of the visual process.
In recent years there have been important advances both in optical modeling, simulating the retinal images of optotype charts [A. Guirao, J. Porter, D. R. Williams, I. G. Cox (2002), "Calculated impact of higher-order monochromatic aberrations on retinal image quality in a population of human eyes", J. Opt. Soc. Am. A 19(1): 1-9], and in models of the representation of images in the visual cortex [Landy, M. S. and J. A. Movshon (1991). Computational Models of Visual Processing. Cambridge, MA: MIT Press]. For this reason, the procedure that allows pattern recognition in the presence of optical degradations, the object of this invention, makes it possible to tackle for the first time the problem of modeling and predicting visual acuity.
DESCRIPTION OF THE INVENTION

The present invention consists of a method for recognizing patterns, drawn from a finite and predetermined set, in images subjected to optical degradations and noise. This set of patterns is stored in digital format, on a gray scale (intensities between black and white) or in color. Depending on the specific application, the observed image may be acquired by an optical image-capture system (for example, in surveillance applications), or it may be a simulation of such a capture system (for example, a simulation of the retinal image of an object obtained from the optical data of an eye model). In either case, the method is intended for observed images that have undergone an a priori unknown optical degradation, introduced either by the capture system (camera, eye, etc.) or by factors external to it (atmospheric turbulence, etc.).
Figure 1 shows a block diagram of the data and the processing steps carried out by this method, which are described in more detail below.
The starting point is an optically degraded digital image, which may either come from a scene observed by an optical image-capture system and converted by an appropriate procedure into a digital image, or be the result of a numerical simulation of those capture processes. This digital image is compared, on digital computers, with the digital images of the predetermined set of patterns.
The degraded image is transformed by applying a multiscale/multiorientation filter bank to obtain a visual representation of it. The same procedure is applied to the images containing the predetermined set of patterns; this step may be executed ahead of time, with the resulting visual representation stored on a suitable device and retrieved directly. The method is flexible as to the type of filter used (Gabor, Gaussian derivatives, Laplacians, etc.), the number of filters, and the arrangement of scales and orientations, which allows it to be adapted to the specific needs of each application.
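As an illustration of such a multiscale/multiorientation representation, the following is a minimal sketch, not the patent's implementation, of a small Gabor filter bank applied to a digital image; all parameter values (kernel size, wavelengths, number of orientations) are arbitrary choices for the example:

```python
import numpy as np

def gabor_bank(size=32, wavelengths=(4, 8), orientations=4):
    """Build a small bank of spatial Gabor filters (parameters are illustrative)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    filters = []
    for wl in wavelengths:
        for k in range(orientations):
            theta = np.pi * k / orientations
            xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
            envelope = np.exp(-(x**2 + y**2) / (2 * (0.56 * wl) ** 2))
            g = envelope * np.cos(2 * np.pi * xr / wl)
            filters.append(g - g.mean())   # zero mean -> band-pass channel
    return filters

def visual_representation(image, filters):
    """Filter the image with every channel of the bank (FFT convolution)."""
    H, W = image.shape
    F = np.fft.fft2(image)
    return [np.real(np.fft.ifft2(F * np.fft.fft2(g, s=(H, W)))) for g in filters]

bank = gabor_bank()                 # 2 scales x 4 orientations = 8 channels
img = np.zeros((64, 64))
img[24:40, 30:34] = 1.0             # toy input pattern
rep = visual_representation(img, bank)   # one filtered image per channel
```

The same `visual_representation` call would be applied both to the observed degraded image and, possibly offline, to each stored pattern.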
Next, for each pattern in the predetermined set, the probability that it generated the observed image is computed. To this end a Bayesian method is applied that makes use of the visual representation of the images and that implicitly estimates an approximation of the unknown degradation affecting the observed image. Using this visual representation, we introduce the additional simplification that the frequency response of the unknown optical degradation is constant within the frequency range passed by each channel. This simplification turns an underdetermined system into a determined one, making it possible to compute the probabilities of having generated the observed image. Under this assumption, the following simplified observation model can be formulated for the versions of the observed image filtered with each of the Gabor filters of the representation scheme:
$$o_i(\mathbf{x}) = (h(\mathbf{x}) * c(\mathbf{x})) * g_i(\mathbf{x}) + \eta_i(\mathbf{x}) \approx h_i\, c_i(\mathbf{x} - \mathbf{u}_i) + \eta_i(\mathbf{x}), \qquad i = 1, \ldots, N_c \qquad (1)$$

where $o_i(\mathbf{x})$ is the observed degraded image filtered with the $i$-th Gabor filter, $g_i(\mathbf{x})$, and contaminated with the additive noise $\eta_i(\mathbf{x})$; $h(\mathbf{x})$ is the impulse response of the unknown optical degradation; and $c(\mathbf{x})$ is the image containing the original, undegraded pattern. Applying the above simplification leads to the right-hand side of the equation, in which $c_i(\mathbf{x})$ is the undegraded input pattern image filtered with the $i$-th Gabor filter; $h_i$ and $\mathbf{u}_i$ are the constant multiplicative factor and the global shift with which the frequency response of the optical degradation is approximated within the frequency range passed by the $i$-th channel; and, finally, $N_c$ is the number of Gabor channels in the representation.
With the above observation model we can formulate the posterior probability of the original character $c$, jointly with the parameters $\{h_i, \mathbf{u}_i\}$ that model the optical degradation, given the set of observations $\{\mathbf{o}_i\}$. Applying Bayes' rule we obtain the following expression, which makes explicit the possibility of including a priori information, such as the probability of occurrence of a particular pattern, or the prior probability of the parameters that model the optical degradation:

$$p(c, \{h_i, \mathbf{u}_i\} \mid \{\mathbf{o}_i\}) = K\, p(\{\mathbf{o}_i\} \mid c, \{h_i, \mathbf{u}_i\})\, p(c)\, p(\{h_i, \mathbf{u}_i\}) \qquad (2)$$

where, for notational convenience, the images have been expressed as vectors, and $K$ is a normalization constant. The posterior probability equals the likelihood, i.e., the conditional probability of the observations given the model parameters, multiplied by the prior probability of the model parameters. We have assumed that the original input image and the parameters modeling the optical degradation are independent. If no a priori information is available, the parameters modeling the optical degradation can be assumed equiprobable, yielding:

$$p(c, \{h_i, \mathbf{u}_i\} \mid \{\mathbf{o}_i\}) = K'\, p(\{\mathbf{o}_i\} \mid c, \{h_i, \mathbf{u}_i\})\, p(c) \qquad (3)$$
where $K'$ is another normalization constant. The maximum a posteriori (MAP) estimator for the input pattern, $\hat{c}$, and for the optical degradation parameters, $\{\hat{h}_i, \hat{\mathbf{u}}_i\}$, is the one that maximizes the posterior probability of equation (3):

$$(\hat{c}, \{\hat{h}_i, \hat{\mathbf{u}}_i\}) = \underset{(c,\, \{h_i, \mathbf{u}_i\})}{\arg\max}\; p(\{\mathbf{o}_i\} \mid c, \{h_i, \mathbf{u}_i\})\, p(c) \qquad (4)$$
where the likelihood function $p(\{\mathbf{o}_i\} \mid c, \{h_i, \mathbf{u}_i\})$ is given by the probability density function of the noise, in accordance with the observation model of equation (1). Assuming conditional independence between channels, as well as between pixels, the likelihood becomes:

$$p(\{\mathbf{o}_i\} \mid c, \{h_i, \mathbf{u}_i\}) = \prod_{i=1}^{N_c} \prod_{\mathbf{x}} p_{\eta}\big(o_i(\mathbf{x}) - h_i\, c_i(\mathbf{x} - \mathbf{u}_i)\big) \qquad (5)$$
Next, we incorporate the prior probability of the original undegraded pattern, $c$, which is determined by the fact that the input image must correspond to one of the patterns in the pre-established set. This prior can therefore be expressed as a sum of delta functions, each associated with one pattern and weighted by the prior probability of that pattern appearing in the image. Considering all patterns equiprobable, the posterior probability becomes:

$$p(c, \{h_i, \mathbf{u}_i\} \mid \{\mathbf{o}_i\}) = K'' \sum_{j=1}^{N} \delta(c - c^j) \prod_{i=1}^{N_c} \prod_{\mathbf{x}} p_{\eta}\big(o_i(\mathbf{x}) - h_i\, c_i^j(\mathbf{x} - \mathbf{u}_i)\big) \qquad (6)$$

where $\{c^j\}_{j=1}^{N}$ are the images corresponding to the $N$ patterns. The introduction of this prior enormously reduces the space of all possible configurations of the intensities in the input image: the probability is nonzero only for $c \in \{c^j\}_{j=1}^{N}$, points at which the posterior probability is:

$$p(c = c^j, \{h_i, \mathbf{u}_i\} \mid \{\mathbf{o}_i\}) \propto \prod_{i=1}^{N_c} \prod_{\mathbf{x}} p_{\eta}\big(o_i(\mathbf{x}) - h_i\, c_i^j(\mathbf{x} - \mathbf{u}_i)\big) \qquad (7)$$
Thus, Bayesian recognition consists, first, in choosing for each pattern in the set the degradation parameters that maximize the probability in (7), and then selecting as the recognized pattern the one with the highest probability, which is precisely the one corresponding to the global maximum of the posterior probability. The parameters $\{\hat{h}_i, \hat{\mathbf{u}}_i\}$ that maximize expression (7) can be obtained individually for each channel, with the per-channel maximum values then multiplied together to obtain the probability. For a particular channel, $i$, and assuming Gaussian white noise, maximizing the probability is equivalent to minimizing the following error function:

$$E_i^j = \sum_{\mathbf{x}} \big(o_i(\mathbf{x}) - h_i\, c_i^j(\mathbf{x} - \mathbf{u}_i)\big)^2 \qquad (8)$$

It can be shown that function (8) is minimized for the value $\hat{\mathbf{u}}_i^j$ that maximizes the correlation function $\mathrm{Corr}_i^j(\mathbf{u}_i) = \sum_{\mathbf{x}} o_i(\mathbf{x})\, c_i^j(\mathbf{x} - \mathbf{u}_i)$, and that then $\hat{h}_i^j = \mathrm{Corr}_i^j(\hat{\mathbf{u}}_i^j)/K_i^j$, with $K_i^j = \sum_{\mathbf{x}} \big(c_i^j(\mathbf{x})\big)^2$. The value of the posterior probability for pattern $j$ thus finally becomes (up to a normalization constant, with $\sigma_i^2$ the noise variance in channel $i$):

$$P_j = \prod_{i=1}^{N_c} \max_{\mathbf{u}_i} \exp\left(-\frac{1}{2\sigma_i^2}\left[\sum_{\mathbf{x}} o_i(\mathbf{x})^2 - \frac{\mathrm{Corr}_i^j(\mathbf{u}_i)^2}{K_i^j}\right]\right) \qquad (9)$$
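The per-channel closed-form solution of equations (8)-(9) can be sketched as follows. This is an illustrative implementation under the Gaussian white-noise assumption, with the circular cross-correlation computed via the FFT; the function name and toy data are ours, not the patent's:

```python
import numpy as np

def channel_map_fit(o_i, c_i):
    """For one channel, find the shift u and gain h that minimize the error
    E = sum_x (o_i(x) - h * c_i(x - u))**2 of equation (8), in closed form."""
    # Circular cross-correlation Corr(u) = sum_x o_i(x) * c_i(x - u), via FFT.
    corr = np.real(np.fft.ifft2(np.fft.fft2(o_i) * np.conj(np.fft.fft2(c_i))))
    u = np.unravel_index(np.argmax(corr), corr.shape)   # best global shift
    K = np.sum(c_i ** 2)                                # pattern energy K_i^j
    h = corr[u] / K                                     # optimal gain h_i^j
    E_min = np.sum(o_i ** 2) - corr[u] ** 2 / K         # minimum of eq. (8)
    return u, h, E_min

# Toy check: the observation is an attenuated, shifted copy of the pattern.
c = np.zeros((32, 32))
c[8:12, 8:12] = 1.0
o = 0.5 * np.roll(np.roll(c, 3, axis=0), 5, axis=1)
u_hat, h_hat, E = channel_map_fit(o, c)
```

Repeating this fit for every channel and every pattern, and multiplying the resulting per-channel maxima, yields the probabilities $P_j$ of equation (9).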
The result of the Bayesian method is a probability of this kind associated with each of the patterns $j$ in the set. This information can be used in many ways, depending on the application. One of the most interesting possibilities is to select the pattern with the highest probability as the pattern recognized in the observed image. It is also possible to reject the hypothesis that any of the patterns is present in the image if the computed probabilities do not exceed certain confidence thresholds. Additional information provided by this method is an estimate of the most likely degradation parameters, which can be used to recover an approximation of the unknown optical degradation that affected the observed image.
In summary, applying this method provides:
1. A set of probabilities that the pattern appearing in the degraded image corresponds to each stored pattern, from which an ordered list of patterns, from most to least probable, can be established.
2. The pattern, from among the set of patterns, that most probably generated the observation. This is the answer provided by the Bayesian maximum a posteriori model.
3. An estimate of the most likely optical degradation affecting the observed image.
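The outputs listed above can be combined in a simple decision rule. The sketch below (function name and threshold value are illustrative assumptions) ranks the per-pattern probabilities, returns the MAP pattern, and rejects the hypothesis when no probability reaches a confidence threshold:

```python
def recognize(probabilities, threshold=0.5):
    """probabilities: dict mapping pattern label -> normalized posterior P_j.
    Returns (ranking from most to least probable, MAP decision); the decision
    is None when no pattern reaches the confidence threshold (rejection)."""
    ranking = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    best, p_best = ranking[0]
    decision = best if p_best >= threshold else None
    return ranking, decision

# Example with made-up posterior values for three candidate optotypes.
ranking, decision = recognize({"E": 0.72, "F": 0.20, "P": 0.08})
```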
The present invention can be applied in a variety of practical situations, including:
1. Optical character recognition (OCR) in blurred or degraded images.
2. Observation of objects in the sky from fixed ground platforms using optical means, where the degradations introduced by atmospheric turbulence add to those possibly introduced by the optical instruments (defocus, etc.). Specific examples of application are the recognition of birds, aircraft (for example, airplanes and helicopters), satellites, celestial bodies, stellar objects, etc.
3. Observation of objects, both in the sky and on the ground, using optical means from mobile platforms, terrestrial or aerial; that is, images of moving objects captured from platforms that are also moving, so that degradations due to motion occur (in addition to those of the optics and the atmosphere). Examples of application are the recognition of the numbers and letters on vehicle license plates in images taken from surveillance helicopters, the recognition of military targets in images taken from reconnaissance vehicles or aircraft, etc.
4. Images of biological specimens, captured by microscopy or other biomedical imaging techniques and affected by degradations introduced by the turbidity of the biological medium, the preparations, etc., as well as by the image-formation and capture systems, in which it is necessary to recognize the specimen in order to classify it.
5. Prediction of a patient's visual acuity from the data on the optical aberrations of the eye provided by an aberrometer. This is the exemplary embodiment proposed below (see Figure 2).
EXAMPLE OF EMBODIMENT OF THE INVENTION
As an exemplary embodiment, the implementation in an aberrometer of a method for predicting a patient's visual acuity is proposed. Current aberrometers usually include data-analysis software, some of which even simulates the appearance of a letter or optotype on the patient's retina. Here, the idea is to add an additional software module so that the aberrometer can provide a prediction of the patient's visual acuity. This prediction is useful because, when checked against the visual acuity obtained in the clinic, a significant disagreement can be interpreted as a symptom that the patient may have some deficiency in visual perception.
The specific procedure is shown in Figure 2. It starts from input data, which are used to establish a personalized model of the patient's eye. This model consists of an optical part, a retinal part consisting of sampling by the cones, and a part for the neural representation of the image, which are applied sequentially. The optical model starts from the optical aberrations to obtain the optical transfer function (OTF), which acts as a linear filter on the input test image. The OTF filter is modified to incorporate the effect of the sampling performed by the retinal photoreceptors (cones in photopic vision and rods in scotopic vision). The second part of the model is then applied to the filtered image and consists of a multiscale/multiorientation pyramidal decomposition by means of a filter bank (Gabor, Gaussian derivatives, steerable pyramid, etc.). In this case a bank of Gabor filters has been chosen [O. Nestares, R. Navarro, J. Portilla, A. Tabernero (1998), "Efficient spatial-domain implementation of a multiscale image representation based on Gabor functions", J. Electronic Imaging, 7: 166-173], followed by a normalization by the low-pass residue to convert to contrast units. The filter frequencies are set so that the highest frequency coincides with standard models. This decomposition constitutes a schematic but realistic model of the representation of the image in the visual cortex. Finally, a contrast threshold is applied, so that contrast values that do not exceed the threshold are not considered. In this way the contrast sensitivity of the visual system is modeled.

The complete model can be applied to any type of image, producing as output a cortical representation of it, which in turn constitutes the input to the pattern recognition procedure, which must be robust, or exhibit a certain invariance, to the presence of optical degradations (aberrations). The output is the character (optotype) of the alphabet that most probably corresponds to the input image. The entire procedure is applied to a set of input optotype images, simulating the clinical procedure for obtaining visual acuity.
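The normalization by the low-pass residue and the subsequent contrast threshold can be sketched as follows; the Gaussian low-pass, the parameter values, and the toy band-pass response are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

def to_contrast_channels(image, responses, sigma=8.0, threshold=0.02):
    """Convert band-pass channel responses to contrast units by dividing by a
    low-pass residue (local mean luminance), then zero out responses whose
    contrast does not exceed the threshold (parameter values are illustrative)."""
    H, W = image.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    # Gaussian low-pass transfer function (spatial scale ~ sigma pixels)
    L = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    residue = np.real(np.fft.ifft2(np.fft.fft2(image) * L))
    residue = np.maximum(residue, 1e-6)   # guard against division by zero
    out = []
    for r in responses:
        c = r / residue                   # contrast units
        out.append(np.where(np.abs(c) >= threshold, c, 0.0))
    return out

img = np.full((64, 64), 0.5)
img[20:44, 28:36] += 0.4                  # bright bar on a gray background
toy_response = img - img.mean()           # stand-in for one band-pass channel
chans = to_contrast_channels(img, [toy_response])
```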
Optical model
The optical model is determined by the wave aberration, which in this case would be described by the coefficients of a Zernike polynomial expansion provided directly by the aberrometer, and by the parameters describing the patient's Stiles-Crawford effect. This effect is equivalent to an apodizing filter described by a Gaussian of a certain width σ, centered at certain coordinates in the pupil plane. In this example, it has been assumed that the specific data for the patient's eye are not available, and standard average data are used (in the case of the Stiles-Crawford effect it is not critical to have exact patient data, so using average data taken from the literature is generally a sufficiently good approximation). From these data describing the wavefront in the pupil, the optical transfer function (OTF) is obtained, which is the autocorrelation of the pupil wavefront. The retinal optical image of an input test image is obtained by a filtering operation in the spatial-frequency domain, the OTF being the filter. In this example a monochromatic case has been considered, although its extension to the polychromatic case is immediate if the optical aberrations are known for several wavelengths in the red, green, and blue. In that case, our method consists in applying to each of the chromatic channels of the RGB input image the corresponding OTFs for those three wavelengths, obtaining the retinal images for the three chromatic components, and then transforming to the CIELAB chromatic coordinate representation, which best models the chromatic behavior of the visual system. From there, the image corresponding to the lightness L is used for the rest of the procedure. The OTF is modified owing to the spectral overlap produced by the sampling of the photoreceptors.
In this example the case of photopic vision has been considered, so the sampling is given by the distribution of cones.
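The optical part of the model can be sketched numerically as follows: the generalized pupil function is built from the wave aberration and the aperture, the PSF is the squared modulus of its Fourier transform, and the OTF (the autocorrelation of the pupil function) is obtained as the normalized Fourier transform of the PSF. The grid size and the single defocus term are illustrative assumptions, not the patent's data:

```python
import numpy as np

def otf_from_pupil(wavefront, aperture):
    """Generalized pupil function -> PSF -> OTF. The OTF is the normalized
    Fourier transform of the PSF, i.e. the autocorrelation of the pupil
    function; `wavefront` is the wave aberration expressed in waves."""
    P = aperture * np.exp(1j * 2.0 * np.pi * wavefront)
    psf = np.abs(np.fft.fft2(P)) ** 2      # point-spread function
    otf = np.fft.fft2(psf)
    return otf / otf[0, 0]                 # normalize to unit DC gain

def retinal_image(obj, otf):
    """Filter the test image with the OTF in the spatial-frequency domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))

N = 64
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] / (N // 4)
aperture = (x**2 + y**2 <= 1.0).astype(float)        # circular pupil
defocus = 0.25 * (2 * (x**2 + y**2) - 1) * aperture  # Zernike defocus (illustrative)
otf = otf_from_pupil(defocus, aperture)
obj = np.zeros((N, N))
obj[24:40, 28:36] = 1.0                              # toy optotype stroke
img = retinal_image(obj, otf)                        # simulated retinal image
```

Because the OTF has unit DC gain, the simulated retinal image preserves the total energy of the object while redistributing it according to the aberration blur.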
Input data for the models
The input data are of great importance for building the model, since they are what characterize the particular patient or eye. They can be divided into optical and cortical data, according to the part of the complete model to which they correspond. In practice, however, we divide them according to their availability for the specific patient: either the specific data measured on the patient are available (1), or, otherwise, a model with standard data is used (2). In Figure 3 it has been assumed that the only data available are the aberrometry data, in which case the prediction will be more reliable in patients whose retina and visual cortex are normal and therefore impose no constraint in this respect.
Test images and optotypes
To obtain the visual acuity, test images are introduced containing optotypes calibrated so that their sizes correspond to specific visual acuity values. The image is analyzed by applying the pattern recognition to each optotype, assigning a value of 1 or 0 (or Boolean true or false) depending on whether the recognition succeeded or failed.
A threshold on the number of failures allowed is established to consider the task passed for a given optotype size, corresponding to a given visual acuity value. Both the optotypes and the hit threshold are the same as those used in the acuity-measurement procedure of the actual clinical setting (a typical hit percentage is at least 75%). Figure 2 shows an example of a test image consisting of 4 rows, each containing the same number of characters (optotypes). The character size in each row corresponds to a given visual acuity value: the full character height is 5 times the stroke width, and the stroke width in turn corresponds to the visual acuity value. In this case a decimal scale has been used, such that unit visual acuity corresponds to a stroke width subtending one minute of arc of visual field according to the optical model. In this example the lines of the test image correspond to visual acuities of 0.6, 0.8, 1 and 1.2, respectively (see figure).
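The decimal-scale calibration just described reduces to simple arithmetic: a stroke subtends 1/VA arcmin and the character is 5 strokes tall. A minimal sketch follows; the pixels-per-arcminute conversion factor is a hypothetical display/model parameter, not from the source.

```python
def optotype_sizes_arcmin(decimal_acuity):
    """Stroke width and full character height (in arcmin) for a decimal acuity.

    Unit acuity corresponds to a stroke subtending one minute of arc; the
    full character is 5 strokes tall.
    """
    stroke = 1.0 / decimal_acuity
    return stroke, 5.0 * stroke

def optotype_height_pixels(decimal_acuity, pixels_per_arcmin):
    # pixels_per_arcmin is an assumed rendering parameter of the optical model
    _, height_arcmin = optotype_sizes_arcmin(decimal_acuity)
    return height_arcmin * pixels_per_arcmin
```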
In this example the characters are purpose-designed to meet the above specifications. Specifically, a reduced 16-character alphabet is used, for several reasons. On the one hand, visual acuity tests do not use all the characters of the alphabet, favoring a subset; on the other hand, reducing the number of possible characters to 16 saves computation time in the recognition stage. In any case, the alphabet can be chosen to be identical to that used in the actual procedure of the specific clinic, as already mentioned.
Pattern recognition and determination of visual acuity
Pattern recognition is performed by extracting the portion of the image, resulting from applying the models, that contains each of the optotypes. The procedure consists of several stages:
- The procedure for recognizing patterns affected by optical degradations is applied, comparing the image of the optotype with the 16 optotypes of the alphabet used. This procedure returns the optotype, among the 16, most likely to be the same as the input.
- When the optotype returned by the method matches the optotype in the test image, it is counted as a hit; otherwise, as a miss. With this alphabet, the probability of a hit by pure chance is 1/16.
- A visual acuity value is considered passed when the number of hits on the corresponding line is equal to or greater than 75% (i.e., 3/4 or 4/4).
The procedure begins at the upper line (largest scale, standard visual acuity, i.e. 1). If the line is passed (at least 75% hits), the procedure moves up to the next line (1.2). If more misses than allowed occur, it moves down to the next line (0.8). The procedure stops when, on an ascending trajectory, the hit threshold is not reached, or, on a descending trajectory, when it is reached, returning as the visual acuity value that of the last line passed.
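The ascending/descending line procedure above can be sketched as follows; the helper `hit_fraction`, which reports the fraction of optotypes recognized on a given line, and the concrete list of acuities are illustrative assumptions standing in for the recognition stage.

```python
def predicted_acuity(hit_fraction, acuities, start=1.0, threshold=0.75):
    """Staircase over the optotype lines of the test image.

    hit_fraction(va) -> fraction of optotypes recognized on the line of
    acuity `va`. Returns the acuity of the last line passed, or None if
    no line is passed.
    """
    i = acuities.index(start)
    if hit_fraction(acuities[i]) >= threshold:
        # ascending trajectory: keep climbing while lines are passed
        while i + 1 < len(acuities) and hit_fraction(acuities[i + 1]) >= threshold:
            i += 1
        return acuities[i]
    # descending trajectory: stop at the first line that is passed
    while i > 0:
        i -= 1
        if hit_fraction(acuities[i]) >= threshold:
            return acuities[i]
    return None
```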
The procedure can be monocular or binocular. In the binocular case, the procedure consists in suitably combining the results of both eyes. This can be done either through a final stage in which the results of both eyes are checked against each other to eliminate errors, or by fusing the visual information contained in each of the images, optimizing the result.
The result of applying the procedure provides:
1.- The simulated optical image of any input test image according to the personalized optical model of the patient's eye.
2.- The cortical representation, in the format of a multiscale/multiorientation pyramidal decomposition, including the effect of the optical degradations of the test image.
3.- Hit or miss in the recognition task for each of the objects contained in the input image. The procedure operates on a predefined set of possible objects (optotypes).
4.- The visual acuity predicted for the patient.
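The multiscale/multiorientation recognition underlying the stages above can be illustrated with a simplified stand-in: subbands computed with Gaussian band-pass filters in the frequency domain, and a plain average of normalized subband correlations in place of the full Bayesian combination. The filter shapes, scales, orientations and the averaging rule are simplifying assumptions, not the patented method itself.

```python
import numpy as np

def gabor_bank(size, scales=(4, 8), orientations=4):
    """Gaussian band-pass filters in the frequency domain, at several
    scales (periods, in pixels) and orientations."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    bank = []
    for period in scales:
        f0 = 1.0 / period                              # center frequency
        for k in range(orientations):
            th = np.pi * k / orientations
            cx, cy = f0 * np.cos(th), f0 * np.sin(th)  # band center
            bank.append(np.exp(-((fx - cx) ** 2 + (fy - cy) ** 2)
                               / (2 * (f0 / 2) ** 2)))
    return bank

def recognize(observed, patterns):
    """Index of the pattern most correlated with `observed` across all
    subbands (a crude stand-in for the Bayesian probability combination)."""
    bank = gabor_bank(observed.shape[0])

    def subbands(img):
        F = np.fft.fft2(img - img.mean())
        return [np.real(np.fft.ifft2(F * H)) for H in bank]

    obs = subbands(observed)
    scores = []
    for p in patterns:
        pb = subbands(p)
        # normalized correlation per subband, then averaged over bands
        cs = [np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
              for a, b in zip(obs, pb)]
        scores.append(np.mean(cs))
    return int(np.argmax(scores))
```

Because the correlation is computed band by band, a blur that attenuates each band roughly uniformly leaves the per-band normalized correlations high, which is what makes this kind of representation robust to optical degradation.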

Claims

1. Numerical procedure for recognizing patterns, from a preset set of patterns, in images subjected to generic and unknown optical degradations. The optical degradations may originate in the physical process of image acquisition or in numerical simulations of that physical process.
2. Procedure, according to claim 1, that uses a multiscale/multiorientation system, implemented with a bank of band-pass filters, for representing the images, both the observed image and the patterns. This system is flexible both in the shape of the filters (Gabor functions, Gaussian derivatives, etc.) and in the arrangement of their center frequencies and bandwidths, these parameters being adaptable to the specific application.
3. Procedure, according to claims 1 and 2, that calculates the probability that each pattern of the alphabet generated the observed image, using a Bayesian method and the representation system described in claim 2. This method has the particularity of allowing a priori information to be included about the statistics of occurrence of the patterns, as well as about the statistics of the parameters that model the optical degradation.
4. Procedure for calculating Bayesian probabilities, according to claim 3, characterized by (a) being based on the value of the correlation between the observed image and the pattern; (b) applying the correlation in the different subbands of scales and orientations; (c) transforming the correlation values into probabilities and combining them according to the Bayesian method, so that the final result is a probability, with the property of minimizing false alarms; (d) returning as the answer the pattern of the set for which the maximum probability value has been obtained.
5. Pattern recognition procedure, according to claims 1 to 4, capable of recognizing patterns in images subjected to optical degradations caused either by defocus of the optical capture instruments, or in images captured through the atmosphere and therefore affected by the random aberrations introduced by turbulence. Examples of applications in which this type of degradation occurs include optical character recognition in blurred images, and recognition of birds in flight, aircraft, satellites, stars and stellar objects.
7. Pattern recognition procedure, according to claims 1 to 4, capable of recognizing patterns in images of moving objects captured from platforms that are also moving, so that, apart from other sources of optical degradation, they are affected by degradations due to motion during the sensor integration time. Examples of applications in which this type of degradation occurs include recognition of the license plates of moving vehicles captured from moving platforms, and recognition of military targets in images captured from mobile reconnaissance platforms.
8. Pattern recognition procedure, according to claims 1 to 4, capable of recognizing patterns in images subjected to simulated optical degradations, in order to objectively evaluate the response of optical capture systems as a function of the level of degradation.
9. Pattern recognition procedure, according to claims 1 to 4, capable of recognizing patterns in images of biological specimens, captured by microscopy or other biomedical imaging techniques, affected by degradations introduced by the turbidity of the biological medium, preparations, etc., as well as by the image formation and capture systems.
10. Procedure for predicting visual acuity in patients, characterized in that it is based on realistic optical and visual image-processing models built from ocular aberrometry data, uses the pattern recognition procedure robust to optical degradations of the image, according to claims 1, 2, 3, 4 and 8, and generates a plausible visual model.
11. Procedure based on claims 1, 2, 3, 4, 8 and 10, characterized by modeling the first stages of the visual process by combining an optical model that simulates, from the wave aberration, the optical image on the retina of an input test, includes the spectral overlap produced by retinal sampling, and applies a pyramidal decomposition of said image by means of a multiscale/multiorientation model.
12. Procedure according to claims 1, 2, 3, 4, 8, 10 and 11, specific to the case of polychromatic illumination, characterized by (1) using as starting data a color test image (with its RGB components) and the wave aberrations for the corresponding wavelengths; (2) simulating the retinal images corresponding to the 3 RGB colors; (3) transforming to CIELAB chromatic coordinates, which best model the color behavior of the visual system; and (4) using the image corresponding to the chromatic coordinate L (lightness) for the rest of the procedure.
13. Procedure according to claims 1, 2, 3, 4, 8, 10 and 11, characterized in that: (1) the model includes a normalization by the residual low-pass channel, so that the output is given in contrast units; and (2) a contrast threshold is applied, so that values below the threshold are ignored in the rest of the procedure, to simulate neural noise.
14. Procedure, according to claims 10 and 11, for the generation and use of input tests, characterized by (1) containing rows of optotypes of different sizes, calibrated according to the visual acuity values to be tested; (2) using optotypes for the test images taken exclusively from a set, or alphabet in the case of letters; and (3) comparing each optotype with the whole set or alphabet in the recognition stage.
15. Procedure for obtaining the visual acuity, according to claims 1, 2, 3, 4, 8, 10, 11, 12, 13 and 14, characterized by: (1) considering a visual acuity value passed when the number of hits obtained on the set of test optotypes of the size corresponding to that acuity is equal to or greater than a preset threshold, usually 75% or similar; (2) the procedure begins at the upper line (largest scale, standard visual acuity, i.e. 1); if the line is passed (at least 75% hits), the procedure moves up to the next line (1.1); if more failures than allowed occur, it moves down to the next line (0.9); the procedure stops when, on an ascending trajectory, the hit threshold is not reached, or, on a descending trajectory, when it is reached, returning as the visual acuity value that of the last line passed.
16. Procedure for obtaining the binocular visual acuity, according to claims 10 to 15, characterized by combining the responses of both eyes to eliminate false alarms, thus improving the number of hits with respect to the monocular case, analogously to claim 5, point (3).
17. Application of the procedure, according to claims 10 to 16, for use in an ocular aberrometer so that it provides a prediction of visual acuity from the measured ocular aberrometry data.
PCT/ES2004/070012 2003-03-07 2004-03-08 Method for the recognition of patterns in images affected by optical degradations and application thereof in the prediction of visual acuity from a patient's ocular aberrometry data WO2004079637A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
ES200300562A ES2247873B1 (en) 2003-03-07 2003-03-07 PATTERN RECOGNITION SYSTEM IN IMAGES AFFECTED BY OPTICAL DEGRADATIONS.
ESP200300562 2003-03-07
ESP200301425 2003-06-18
ES200301425 2003-06-18

Publications (1)

Publication Number Publication Date
WO2004079637A1 true WO2004079637A1 (en) 2004-09-16

Family

ID=32963832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2004/070012 WO2004079637A1 (en) 2003-03-07 2004-03-08 Method for the recognition of patterns in images affected by optical degradations and application thereof in the prediction of visual acuity from a patient's ocular aberrometry data

Country Status (1)

Country Link
WO (1) WO2004079637A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10171924A (en) * 1996-12-10 1998-06-26 Brother Ind Ltd Character recognizing device
JPH11306285A (en) * 1998-04-22 1999-11-05 Mitsubishi Heavy Ind Ltd Pattern recognizing device
WO2002001855A2 (en) * 2000-06-26 2002-01-03 Miranda Technologies Inc. Apparatus and method for adaptively reducing noise in a noisy input image signal
EP1300803A2 (en) * 2001-08-28 2003-04-09 Nippon Telegraph and Telephone Corporation Image processing method and apparatus
WO2003079274A1 (en) * 2002-03-20 2003-09-25 Philips Intellectual Property & Standards Gmbh Method of improving fingerprint images
US6674915B1 (en) * 1999-10-07 2004-01-06 Sony Corporation Descriptors adjustment when using steerable pyramid to extract features for content based search
US20040017944A1 (en) * 2002-05-24 2004-01-29 Xiaoging Ding Method for character recognition based on gabor filters

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DATABASE WPI Week 199836, Derwent World Patents Index; Class T01, AN 1998-418834, XP002903692 *
DATABASE WPI Week 200004, Derwent World Patents Index; Class T01, AN 2000-044715, XP002903691 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9959153B2 (en) 2008-04-18 2018-05-01 Bae Systems Plc Assisting failure diagnosis in a system
US11013594B2 (en) 2016-10-25 2021-05-25 Amo Groningen B.V. Realistic eye models to design and evaluate intraocular lenses for a large field of view
US10739227B2 (en) 2017-03-23 2020-08-11 Johnson & Johnson Surgical Vision, Inc. Methods and systems for measuring image quality
US11385126B2 (en) 2017-03-23 2022-07-12 Johnson & Johnson Surgical Vision, Inc. Methods and systems for measuring image quality
US11282605B2 (en) 2017-11-30 2022-03-22 Amo Groningen B.V. Intraocular lenses that improve post-surgical spectacle independent and methods of manufacturing thereof
US11881310B2 (en) 2017-11-30 2024-01-23 Amo Groningen B.V. Intraocular lenses that improve post-surgical spectacle independent and methods of manufacturing thereof
US10876924B2 (en) 2018-02-08 2020-12-29 Amo Groningen B.V. Wavefront based characterization of lens surfaces based on reflections
US10895517B2 (en) 2018-02-08 2021-01-19 Amo Groningen B.V. Multi-wavelength wavefront system and method for measuring diffractive lenses

Similar Documents

Publication Publication Date Title
Geisler Sequential ideal-observer analysis of visual discriminations.
CA2868425C (en) Process and apparatus for determining optical aberrations of an eye
CN100353907C (en) Objective manifest refraction
US7357509B2 (en) Metrics to predict subjective impact of eye's wave aberration
US6607274B2 (en) Method for computing visual performance from objective ocular aberration measurements
US9001316B2 (en) Use of an optical system simulating behavior of human eye to generate retinal images and an image quality metric to evaluate same
WO2004079637A1 (en) Method for the recognition of patterns in images affected by optical degradations and application thereof in the prediction of visual acuity from a patient's ocular aberrometry data
Nestares et al. Bayesian model of Snellen visual acuity
Alonso et al. Pre-compensation for high-order aberrations of the human eye using on-screen image deconvolution
Tuan et al. Predicting patients’ night vision complaints with wavefront technology
Fülep et al. Simulation of visual acuity by personalizable neuro-physiological model of the human eye
CN110598652A (en) Fundus data prediction method and device
CN111583248A (en) Processing method based on eye ultrasonic image
US20030053027A1 (en) Subjective refraction by meridional power matching
ES2330260T3 (en) APPARATUS TO DETERMINE THE VISUAL ACUTE OF AN EYE.
WO2014111759A1 (en) Method and apparatus for measuring aberrations of a ocular optical system
CN114927220A (en) Differential diagnosis system for cervical spondylotic myelopathy and Parkinson's disease
Beckmann Preneural factors limiting letter identification in central and peripheral vision
Zaman et al. Multimodal assessment of visual function and ocular structure for monitoring Spaceflight Associated Neuro-Ocular Syndrome
Navarro Predicting visual acuity
Faylienejad A computational model for predicting visual acuity from wavefront aberration measurements
EP4197427A1 (en) Method and device for evaluating refraction of an eye of an individual using machine learning
CN115660985B (en) Cataract fundus image restoration method, cataract fundus image restoration model training method and cataract fundus image restoration model training device
Navarro et al. Predicting visual acuity from measured ocular aberrations
KR102460451B1 (en) Method for measuring anomalies of refraction using a reflection image of pupil in visible light

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase