WO2004079637A1 - Method for recognizing patterns in images affected by optical degradations, and its application to predicting visual acuity from the patient's ocular aberrometry data - Google Patents

Method for recognizing patterns in images affected by optical degradations, and its application to predicting visual acuity from the patient's ocular aberrometry data

Info

Publication number
WO2004079637A1
WO2004079637A1 · PCT/ES2004/070012 · ES2004070012W
Authority
WO
WIPO (PCT)
Prior art keywords
optical
procedure
image
images
patterns
Prior art date
Application number
PCT/ES2004/070012
Other languages
English (en)
Spanish (es)
Inventor
Rafael Navarro Belsue
Oscar Nestares Garcia
Original Assignee
Consejo Superior De Investigaciones Científicas
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from ES200300562A external-priority patent/ES2247873B1/es
Application filed by Consejo Superior De Investigaciones Científicas filed Critical Consejo Superior De Investigaciones Científicas
Publication of WO2004079637A1 publication Critical patent/WO2004079637A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/103Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Definitions

  • The invention is directed to all areas where automatic recognition of patterns in images is necessary: in general, automatic inspection applications using optical means, and in particular surveillance, process monitoring, quality control, visual simulation applications for clinical purposes, etc. Its application is especially indicated when the observation conditions do not guarantee good image quality. It is a numerical pattern recognition procedure, based on visual perception, that can be carried out by computer. The application to the prediction of visual acuity is directed to the area of health, specifically ophthalmology, optometry and ophthalmic optics. In this case the numerical procedure combines optical and psychophysical models of visual perception with the pattern recognition procedure described above.
  • Pattern recognition in images is an area of great interest within automatic image analysis, with multiple applications. Among them, it is worth noting optical character recognition, target recognition in military applications, classification of biological species observed by optical means, active surveillance with automatic recognition of objects of interest, etc. Of special interest in the field of ophthalmology and optometry is the prediction of the patient's visual acuity from ocular aberrometry data.
  • AV: visual acuity
  • the present invention consists of a method for recognizing patterns in images subjected to optical degradation and noise, from a finite and predetermined set.
  • This set of patterns is stored in digital format, and in a gray scale (intensities between black and white) or colors.
  • the observed image can be acquired by means of an optical image capture system (for example, in surveillance applications), or it can be a simulation of an optical capture system (for example, a simulation of the retinal image of an object from the optical data of an eye model).
  • the procedure is indicated for observed images that have undergone an a priori unknown optical degradation, introduced either by the capture system (camera, eye, etc.) or by factors external to it (atmospheric turbulence, etc.).
  • Figure 1 shows a block diagram presenting the data and processes carried out by this procedure, which are described in more detail below.
  • optically degraded digital image, which may come from a scene observed by an optical image capture system and converted by an appropriate procedure to a digital image, or be the result of a numerical simulation of said capture processes.
  • This digital image will be compared with the digital images of the default pattern set, using digital computers.
  • the degraded image is transformed by applying a multiscale/multiorientation filter bank to obtain a visual representation of it.
  • This same procedure is applied to the images containing the set of preset patterns; this process can be executed beforehand, directly recovering the visual representation stored on a suitable device.
  • the method is flexible in terms of the type of filter to be used (Gabor, Gaussian derivatives, Laplacians, etc.), number of filters and arrangement of the scales and orientations, which allows it to be adapted to the specific needs of each application.
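As an illustration of such a decomposition, the following is a minimal sketch of a Gabor filter bank, not the patent's implementation: the octave spacing, bandwidth constant and kernel support are assumptions of this example.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gabor_kernel(freq, theta, sigma):
    """Complex Gabor kernel tuned to spatial frequency `freq`
    (cycles/pixel) at orientation `theta` (radians)."""
    half = int(3 * sigma)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_bank(n_scales=4, n_orients=4, f_max=0.25):
    """Octave-spaced scales times evenly spaced orientations."""
    bank = []
    for s in range(n_scales):
        freq = f_max / 2 ** s          # one octave between scales
        sigma = 0.56 / freq            # roughly 1-octave bandwidth envelope
        for o in range(n_orients):
            bank.append(gabor_kernel(freq, o * np.pi / n_orients, sigma))
    return bank

def decompose(image, bank):
    """Filter the image with every kernel (circular convolution via FFT);
    the list of complex channel responses is the 'visual representation'."""
    H, W = image.shape
    F = fft2(image)
    return [ifft2(F * fft2(k, s=(H, W))) for k in bank]
```

Swapping in Gaussian derivatives or a steerable pyramid only changes `gabor_kernel`; the rest of the pipeline is unchanged.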
  • the probability of having generated the observed image is calculated for each pattern of the preset set.
  • a Bayesian method is applied that makes use of the visual representation of the images, and in which an approximation of the unknown degradation that is affecting the observed image is implicitly estimated.
  • In this visual representation we introduce the further simplification that the frequency response of the unknown optical degradation is constant within the frequency range that each channel lets through. This simplification turns an underdetermined system into a determined one, making it possible to calculate the probabilities of each pattern having generated the observed image.
  • Under the previous assumption, the following simplified observation model can be formulated for the versions of the observed image filtered with each of the Gabor filters of the representation scheme: o_i(x) = α_i · c_i(x - u_i) + n_i(x), i = 1, ..., N_c.
  • o_i(x) is the observed degraded image filtered with the i-th Gabor filter g_i(x), contaminated with the additive noise n_i(x); h(x) is the impulse response of the unknown optical degradation; and c(x) is the image containing the original, undegraded pattern.
  • c_i(x) is the input pattern image, undegraded and filtered with the i-th Gabor filter; {α_i, u_i} are the constant multiplicative factor and the global displacement that approximate the frequency response of the optical degradation within the frequency range passed by the i-th channel; and, finally, N_c is the number of Gabor channels of the representation.
  • K is a normalization constant.
  • the a posteriori probability is equal to the likelihood, or conditional probability of the observations given the model parameters, multiplied by the a priori probability of the model parameters.
  • the a priori probability of the original undegraded pattern, c, is determined by the fact that the input image must correspond to one of the patterns in the preset pattern set.
  • this a priori probability can be expressed as a sum of delta functions each associated with a pattern, with a weight given by the a priori probability of that pattern appearing in the image.
  • the posterior probability is:
  • Bayesian recognition consists, first, in choosing the degradation parameters that maximize the probability in (7) for each pattern in the set, and then in choosing as the recognized pattern the one with the highest probability, which is precisely the one corresponding to the global maximum of the a posteriori probability.
  • Obtaining the parameters {α_i, u_i} that maximize expression (7) can be done individually for each channel, then multiplying the maximum values over the channels to obtain the probability. For a particular channel, i, and assuming Gaussian white noise, maximizing the probability is equivalent to minimizing the following error function:
  • the result of the Bayesian method is a probability like the previous one, associated with each of the patterns in the set. This information can be used in many ways, depending on the application. One of the most interesting possibilities is to select the pattern with the highest probability as the pattern recognized from the observed image. It is also possible to reject the hypothesis that any of the patterns is present in the image, if the calculated probabilities do not exceed confidence thresholds. Additional information provided by this method is an estimate of the most likely degradation parameters, which can be used to recover an approximation of the unknown optical degradation that affected the observed image.
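The per-channel maximization followed by selection of the most probable pattern can be sketched numerically. This is a simplified illustration, not the patent's exact estimator: it assumes real-valued channel responses, Gaussian white noise of unit variance, a shift u_i estimated from the circular cross-correlation peak, and a gain α_i fitted by least squares.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def channel_error(o_i, c_i):
    """Best-fit residual for one channel under the simplified model
    o_i(x) = a * c_i(x - u) + n_i(x): the shift u is taken from the
    cross-correlation peak, the gain a from least squares."""
    xc = np.real(ifft2(fft2(o_i) * np.conj(fft2(c_i))))
    u = np.unravel_index(np.argmax(xc), xc.shape)
    c_shift = np.roll(c_i, u, axis=(0, 1))
    denom = float(np.sum(c_shift * c_shift))
    a = float(np.sum(o_i * c_shift)) / denom if denom > 0 else 0.0
    return float(np.sum((o_i - a * c_shift) ** 2))

def recognize(obs_channels, patterns_channels, priors=None):
    """Index of the pattern with the highest (log-)posterior: under
    Gaussian white noise, maximizing over (a, u) for each channel amounts
    to minimizing the summed residual error; the log-prior is then added."""
    n = len(patterns_channels)
    priors = priors or [1.0 / n] * n
    scores = [np.log(p) - sum(channel_error(o, c)
                              for o, c in zip(obs_channels, ch))
              for ch, p in zip(patterns_channels, priors)]
    return int(np.argmax(scores))
```

With non-uniform `priors`, statistical knowledge about which patterns are most frequent biases the decision exactly as the Bayesian formulation allows.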
  • the present invention could be applied in a variety of practical situations, including: 1. Optical character recognition (OCR) in blurred or degraded images.
  • The specific procedure is shown in Figure 2. It starts from input data, which are used to establish a personalized model of the patient's eye.
  • This model consists of an optical part, a retinal part consisting of cone sampling, and a neuronal representation of the image, which are applied sequentially.
  • the optical model starts from the optical aberrations to obtain the optical transfer function (OTF) that acts as a linear filter on the input test image.
  • the OTF filter is modified to incorporate the effect of sampling by the retinal photoreceptors (cones in photopic vision and rods in scotopic vision).
  • the second part of the model, applied to the filtered image, consists of a pyramidal multiscale/multiorientation decomposition through a filter bank (Gabor, Gaussian derivatives, steerable pyramid, etc.).
  • a Gabor filter bank has been chosen [O. Nestares, R. Navarro, J. Portilla, A. Tabernero (1998), "Efficient spatial-domain implementation of a multiscale image representation based on Gabor functions", J. Electronic Imaging, 7, 166-173], followed by normalization by the low-pass residue to convert to contrast units. The filter frequencies are set so that the maximum frequency matches standard models.
  • This decomposition constitutes a schematic but realistic model of the representation of the image in the visual cortex.
  • a contrast threshold is applied, so that contrast values that do not exceed the threshold are not considered.
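The normalization by the low-pass residue and the contrast threshold can be sketched as follows; the Gaussian low-pass cutoff and the threshold value here are illustrative assumptions, not the patent's calibrated values.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def lowpass_residue(image, cutoff=0.05):
    """Gaussian low-pass of the image: the pyramid's low-pass residue,
    used here as an estimate of the local mean luminance."""
    fy = fftfreq(image.shape[0])[:, None]
    fx = fftfreq(image.shape[1])[None, :]
    G = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff ** 2))
    return np.real(ifft2(fft2(image) * G))

def to_contrast(channel_responses, image, threshold=0.01):
    """Divide each channel's response magnitude by the low-pass residue
    (converting to contrast units) and zero sub-threshold values."""
    L = np.maximum(lowpass_residue(image), 1e-6)
    out = []
    for r in channel_responses:
        c = np.abs(r) / L
        c[c < threshold] = 0.0
        out.append(c)
    return out
```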
  • the complete model can be applied to any type of image, giving rise to a cortical representation of it, which in turn constitutes the input to the pattern recognition procedure; this procedure must be robust, or present some invariance, to the presence of optical degradations (aberrations).
  • the output is the character (optotype) of the alphabet that most likely corresponds to the input image.
  • the entire procedure is applied to a set of images of input optotypes simulating the clinical procedure to obtain visual acuity.
  • the optical model is determined by the wave aberration, which in this case would be described by the coefficients of a development in Zernike polynomials provided directly by the aberrometer, and by the parameters that describe the Stiles-Crawford effect of the patient.
  • This effect is equivalent to an apodizing filter described by a Gaussian of a certain width σ, centered on certain coordinates in the plane of the pupil.
  • the optical transfer function (OTF) is obtained as the autocorrelation of the generalized pupil function, which contains the wavefront in the pupil.
  • the retinal optical image of an input test image is obtained by a filtering operation in the spatial frequency domain, the OTF being the filter.
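The OTF computation and the retinal-image filtering can be sketched as below. The OTF is obtained here through the equivalent pupil-to-PSF-to-OTF route (Fourier transform of the PSF, itself the squared modulus of the transformed generalized pupil), which matches the autocorrelation definition; the defocus-only aberration, grid size and 0.25-wave amplitude are illustrative assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def otf_from_wavefront(W_waves, pupil):
    """OTF of an eye with wave aberration W (in waves) over the pupil:
    generalized pupil P = pupil * exp(i 2 pi W); PSF = |FT(P)|^2;
    OTF = FT(PSF), normalized so that OTF(0) = 1 (equivalent to the
    autocorrelation of the generalized pupil function)."""
    P = pupil * np.exp(2j * np.pi * W_waves)
    psf = np.abs(fftshift(fft2(P))) ** 2
    otf = fft2(psf)
    return otf / otf[0, 0]

N = 128
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
R2 = X ** 2 + Y ** 2
pupil = (R2 <= 1.0).astype(float)
# Defocus-only aberration: Zernike Z(2,0) = sqrt(3)*(2 r^2 - 1), 0.25 waves.
W = 0.25 * np.sqrt(3.0) * (2.0 * R2 - 1.0) * pupil
otf = otf_from_wavefront(W, pupil)

# Retinal image of a test image: filtering in the spatial-frequency domain.
test = np.random.default_rng(0).random((N, N))
retinal = np.real(ifft2(fft2(test) * otf))
```

In practice the full Zernike expansion delivered by the aberrometer, plus the Stiles-Crawford apodization, would replace the single defocus term used here.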
  • a monochromatic case has been considered, although its extension to the polychromatic case is immediate, if the optical aberrations for various wavelengths in red, green and blue are known.
  • our method consists in applying, to each of the chromatic channels of the RGB input image, the OTF corresponding to each of those 3 wavelengths, obtaining the retinal images for the 3 chromatic components; a transformation is then carried out to the CIELAB chromatic coordinates, which best model the color behavior of the visual system. From there, the image corresponding to the lightness L is used for the rest of the procedure.
  • the OTF is modified due to the spectral overlap produced by the sampling of the photoreceptors. In this example the case of photopic vision has been considered, so the sampling is given by the distribution of cones.
  • the input data are of great importance for the realization of the model, since they are the ones that will characterize that particular patient or eye.
  • In Figure 3 it has been assumed that the only data available are those of aberrometry, in which case the prediction will be more reliable in patients whose retina and visual cortex are normal and therefore impose no additional limitation in this regard.
  • test images containing calibrated optotypes are introduced so that their sizes correspond to specific values of visual acuity.
  • the image is analyzed by applying the pattern recognition to each optotype, assigning a value of 1 or 0 (or Boolean variables true or false) in case of success or failure, respectively, in the recognition.
  • a threshold is established for the number of failures allowed, beyond which the task is considered failed for a given optotype size, corresponding to a given visual acuity value.
  • Both the optotypes and the threshold of hits will be similar to those used in the visual acuity measurement procedure of the particular clinical practice (a typical hit percentage is at least 75%).
  • Figure 2 shows an example of a test image consisting of 4 rows, each containing the same number of characters (optotypes).
  • the size of the characters in each row corresponds to a certain value of visual acuity.
  • the full height of the character is 5 times the thickness of the stroke, and the stroke thickness in turn corresponds to the visual acuity value.
  • a decimal scale has been used, such that unit visual acuity corresponds to a stroke size that subtends one minute of arc of visual field according to the optical model.
  • the lines of the test image correspond to visual acuities of 0.6, 0.8, 1 and 1.2 respectively (see figure).
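The sizing implied by the decimal scale (unit acuity corresponds to a 1-arcmin stroke, letter height is 5 strokes) can be computed directly; the 6 m viewing distance in the usage note is an assumption of this example, not a value from the patent.

```python
import math

def stroke_arcmin(decimal_acuity):
    """Stroke width in arcmin at a given decimal acuity:
    acuity 1.0 corresponds to a 1-arcmin stroke."""
    return 1.0 / decimal_acuity

def letter_height_mm(decimal_acuity, distance_m):
    """Physical optotype height: 5 stroke widths, at the given distance."""
    stroke_rad = math.radians(stroke_arcmin(decimal_acuity) / 60.0)
    return 5.0 * distance_m * math.tan(stroke_rad) * 1000.0
```

For example, at 6 m a 1.0-acuity optotype is about 8.7 mm tall, while a 0.6-acuity one is about 14.5 mm tall.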
  • the characters are designed on purpose to meet the above specifications.
  • a reduced 16-character alphabet is used for several reasons.
  • visual acuity tests do not use all the characters of the alphabet, showing a predilection for a subset; moreover, reducing the number of possible characters to 16 saves calculation time in the recognition stage.
  • the alphabet can be chosen so that it is identical to that used in the actual procedure used in the specific clinic, as already mentioned.
  • Pattern recognition is performed by extracting, from the image that results from applying the models, the portion contained in each of the optotypes.
  • the procedure consists of several stages:
  • the procedure begins with the upper line (largest scale; standard visual acuity, i.e. 1). If the line is passed (at least 75% of hits), the procedure moves to the next line up (1.2). If there are more failures than permitted, it moves to the line below (0.8). The procedure stops when, in an ascending trajectory, the threshold of successes is not reached, or, in a descending trajectory, when it is reached, returning as the visual acuity value that of the last line passed.
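The ascending/descending line procedure can be sketched as follows. This is a simplified illustration; the line scores would come from the pattern-recognition stage, abstracted here as a callback, and the line count and acuity values mirror the 4-row example above.

```python
def acuity_staircase(line_hits, acuities=(0.6, 0.8, 1.0, 1.2),
                     n_chars=4, threshold=0.75, start=1.0):
    """line_hits(acuity) -> number of optotypes correctly recognized on
    the line of that acuity. Returns the acuity of the last line passed
    (>= threshold fraction of hits), or None if no line is passed."""
    order = sorted(acuities)
    i = order.index(start)

    def passed(a):
        return line_hits(a) / n_chars >= threshold

    if passed(order[i]):
        while i + 1 < len(order) and passed(order[i + 1]):  # ascend
            i += 1
        return order[i]
    while i > 0:                                            # descend
        i -= 1
        if passed(order[i]):
            return order[i]
    return None
```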
  • the procedure can be monocular or binocular.
  • the procedure consists of properly combining the results of both eyes. This can be done either by a final stage in which the results of both eyes are cross-checked to eliminate errors, or by merging the visual information contained in each of the two images, optimizing the result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

To recognize the patterns, a decomposition inspired by channels tuned to different frequencies and orientations is applied both to the original patterns and to the observed image. With this decomposition it is possible to simplify the degradation model so that, from the observed image, the probability that each of the original images generated that observation can be calculated. Because the procedure is Bayesian, it can incorporate statistical information about the most probable degradations and the most frequent patterns, as well as the cost of errors committed on a given pattern, so the recognition is more reliable and adapted to the needs of each application, and the procedure is robust to possible optical degradations of the image. An application of interest for the optics and ophthalmology sector is predicting a patient's visual acuity from data on the eye's optical aberrations supplied by an aberrometer.
PCT/ES2004/070012 2003-03-07 2004-03-08 Procede de reconnaissance de modeles dans des images affectees par des degradations optiques et son application dans la prediction de l'acuite visuelle a partir de donnees d'aberrometrie oculaire du patient WO2004079637A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
ESP200300562 2003-03-07
ES200300562A ES2247873B1 (es) 2003-03-07 2003-03-07 Sistema de reconocimiento de patrones en imagenes afectadas por degradaciones opticas.
ES200301425 2003-06-18
ESP200301425 2003-06-18

Publications (1)

Publication Number Publication Date
WO2004079637A1 true WO2004079637A1 (fr) 2004-09-16

Family

ID=32963832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/ES2004/070012 WO2004079637A1 (fr) 2003-03-07 2004-03-08 Procede de reconnaissance de modeles dans des images affectees par des degradations optiques et son application dans la prediction de l'acuite visuelle a partir de donnees d'aberrometrie oculaire du patient

Country Status (1)

Country Link
WO (1) WO2004079637A1 (fr)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10171924A (ja) * 1996-12-10 1998-06-26 Brother Ind Ltd 文字認識装置
JPH11306285A (ja) * 1998-04-22 1999-11-05 Mitsubishi Heavy Ind Ltd パターン認識装置
WO2002001855A2 (fr) * 2000-06-26 2002-01-03 Miranda Technologies Inc. Appareil et procede de reduction adaptative du bruit dans un signal-image d'entree bruite
EP1300803A2 (fr) * 2001-08-28 2003-04-09 Nippon Telegraph and Telephone Corporation Procédé et appareil de traitement d'images
WO2003079274A1 (fr) * 2002-03-20 2003-09-25 Philips Intellectual Property & Standards Gmbh Procede visant a ameliorer les images d'empreintes digitales
US6674915B1 (en) * 1999-10-07 2004-01-06 Sony Corporation Descriptors adjustment when using steerable pyramid to extract features for content based search
US20040017944A1 (en) * 2002-05-24 2004-01-29 Xiaoging Ding Method for character recognition based on gabor filters


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DATABASE WPI Week 199836, Derwent World Patents Index; Class T01, AN 1998-418834, XP002903692 *
DATABASE WPI Week 200004, Derwent World Patents Index; Class T01, AN 2000-044715, XP002903691 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9959153B2 (en) 2008-04-18 2018-05-01 Bae Systems Plc Assisting failure diagnosis in a system
US11013594B2 (en) 2016-10-25 2021-05-25 Amo Groningen B.V. Realistic eye models to design and evaluate intraocular lenses for a large field of view
US10739227B2 (en) 2017-03-23 2020-08-11 Johnson & Johnson Surgical Vision, Inc. Methods and systems for measuring image quality
US11385126B2 (en) 2017-03-23 2022-07-12 Johnson & Johnson Surgical Vision, Inc. Methods and systems for measuring image quality
US11282605B2 (en) 2017-11-30 2022-03-22 Amo Groningen B.V. Intraocular lenses that improve post-surgical spectacle independent and methods of manufacturing thereof
US11881310B2 (en) 2017-11-30 2024-01-23 Amo Groningen B.V. Intraocular lenses that improve post-surgical spectacle independent and methods of manufacturing thereof
US10876924B2 (en) 2018-02-08 2020-12-29 Amo Groningen B.V. Wavefront based characterization of lens surfaces based on reflections
US10895517B2 (en) 2018-02-08 2021-01-19 Amo Groningen B.V. Multi-wavelength wavefront system and method for measuring diffractive lenses

Similar Documents

Publication Publication Date Title
Geisler Sequential ideal-observer analysis of visual discriminations.
CA2868425C (fr) Procede et appareil de determination d'aberrations optiques dans un œil
CN100353907C (zh) 获得客观式显然验光的装置
US7357509B2 (en) Metrics to predict subjective impact of eye's wave aberration
US6607274B2 (en) Method for computing visual performance from objective ocular aberration measurements
US9001316B2 (en) Use of an optical system simulating behavior of human eye to generate retinal images and an image quality metric to evaluate same
WO2004079637A1 (fr) Procede de reconnaissance de modeles dans des images affectees par des degradations optiques et son application dans la prediction de l'acuite visuelle a partir de donnees d'aberrometrie oculaire du patient
Sharma et al. Harnessing the Strength of ResNet50 to Improve the Ocular Disease Recognition
CN110598652B (zh) 眼底数据预测方法和设备
Thibos Formation and sampling of the retinal image
CN111583248B (zh) 一种基于眼部超声图像的处理方法
Tuan et al. Predicting patients’ night vision complaints with wavefront technology
Alonso et al. Pre-compensation for high-order aberrations of the human eye using on-screen image deconvolution
CN117338234A (zh) 一种屈光度与视力联合检测方法
US9665771B2 (en) Method and apparatus for measuring aberrations of an ocular optical system
US20030053027A1 (en) Subjective refraction by meridional power matching
US7654674B2 (en) Method and apparatus for determining the visual acuity of an eye
CN114927220A (zh) 一种脊髓型颈椎病和帕金森病的鉴别诊断系统
Zaman et al. Multimodal assessment of visual function and ocular structure for monitoring Spaceflight Associated Neuro-Ocular Syndrome
Manne et al. Improved fundus image quality assessment: Augmenting traditional features with structure preserving scatnet features in multicolor space
Faylienejad A computational model for predicting visual acuity from wavefront aberration measurements
EP4197427A1 (fr) Procédé et dispositif d'évaluation de réfraction de l'œil d'un individu à l'aide de l'apprentissage automatique
CN115660985B (zh) 白内障眼底图像的修复方法、修复模型的训练方法及装置
Navarro et al. Predicting visual acuity from measured ocular aberrations
Perches et al. Development of a subjective refraction simulator

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase