WO2022078568A1 - A method for providing a perimetry processing tool and a perimetry device with such a tool - Google Patents

A method for providing a perimetry processing tool and a perimetry device with such a tool

Info

Publication number
WO2022078568A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
visual field
autoencoder
autoencoders
maps
Application number
PCT/EP2020/078622
Other languages
French (fr)
Inventor
Samuel WEINBACH
Jonas ANDRULIS
Original Assignee
Haag-Streit Ag
Application filed by Haag-Streit Ag filed Critical Haag-Streit Ag
Priority to PCT/EP2020/078622 priority Critical patent/WO2022078568A1/en
Publication of WO2022078568A1 publication Critical patent/WO2022078568A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/024: Subjective types for determining the visual field, e.g. perimeter types
    • A61B 3/0016: Operational features thereof
    • A61B 3/0025: Operational features characterised by electronic signal processing, e.g. eye models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

A perimetry device comprises a perimetry measurement unit (10) and a control unit (22) with a processing tool (26). The processing tool (26) includes several variational autoencoders (30a, 30b, 30c) for reproducing base-case, best-case, and worst-case visual field maps from a measured visual field map. The variational autoencoders (30a, 30b, 30c) are trained on a training dataset using different loss functions, with one loss function training its autoencoder to generate the most probable reconstructed map, one training its autoencoder to generate a map that is likely to be better than the real map, and one training its autoencoder to generate a map that is likely to be worse than the real map. The outputs of the autoencoders (30a, 30b, 30c) may be used to assess the reliability of a measured visual field map and/or to operate the measurement unit (10) to repeat unreliable measurements.

Description

A method for providing a perimetry processing tool and a perimetry device with such a tool
Technical Field
The invention relates to a method for providing a processing tool for processing the visual field of a patient and to a perimetry device with such a tool.
Background Art
The assessment of the visual field of a patient makes it possible to detect important dysfunctions or issues in central and peripheral vision that may be caused by various medical conditions such as glaucoma, retina degradation, or neuronal dysfunction. Visual field maps may be obtained by means of a perimetry device, where a patient’s gaze is kept fixed and stimuli are displayed at various places within the patient’s visual field.

Accurate visual field map determination is affected by the inherently subjective nature of the measurement because the patient needs to provide feedback indicating if she/he has seen a stimulus. Another challenge is the long duration of the measurement.
S. Berchuck et al. in Scientific Reports (2019) 9:18113 (https://doi.org/10.1038/s41598-019-54653-6) provide a method for processing visual field maps using a variational autoencoder, which improves the signal to noise ratio and provides a means for predicting future visual fields.
Disclosure of the Invention
The problem to be solved by the present invention is to provide an improved means for determining the visual field of a patient.

This problem is solved by the methods and device of the independent claims.
Accordingly, in a first aspect, the invention relates to a method for providing a processing tool adapted to process the visual field of a patient. The processing tool comprises at least a first and a second autoencoder. The method comprises the steps of:
- training the first autoencoder to reproduce the maps of a training set of visual field maps using a first loss function, and
- training the second autoencoder to reproduce the maps of the training set using a second loss function, wherein the second loss function is different from the first loss function.
Accordingly, the at least two autoencoders are trained with different loss functions.
A tool provided by this method makes it possible to reconstruct at least two differently optimized maps by means of the at least two autoencoders, which in turn provides better insight into the stability of the map reconstruction by the autoencoders.

At least one of the loss functions may depend on a function estimating the difference between a given map and a reconstructed map (see below for a more formal definition of such a function). This function is invariant under a change of sign of the difference. Hence, a deviation to a “lighter” value in the map (i.e. to a value indicative of better sight) is weighed in the same manner as a deviation to a “darker” value (i.e. to a value indicative of poorer sight). An autoencoder trained in this way will strive to generate the most probable reconstructed map.

At least one of the loss functions may depend on a function estimating the difference between a given map and a reconstructed map (again, see below for a more formal definition of such a function), with this function being variant under a change of sign of said difference. An autoencoder trained in this way will generate reconstructed maps that tend to be somewhat brighter or that tend to be somewhat darker than the input map. In more general terms, such an autoencoder will tend to generate a map that is likely to be worse or that is likely to be better than the real map.
In particular, there may be one loss function that favors bright reconstructions while another one favors dark reconstructions. This may e.g. be achieved if:
- one of the loss functions weighs positive differences between the reconstructed map and the given map less strongly than negative differences, and
- another one of said loss functions weighs positive differences between the reconstructed map and the given map more strongly than negative differences.
Assuming, for the sake of example, that higher values in a map are regarded as “brighter” values, this makes it possible to determine, for a given map, how strongly the prediction of the map may vary towards brighter or darker values, respectively.

The processing tool can further comprise a third autoencoder. In this case, the method advantageously comprises the step of training the third autoencoder to reproduce the training set using a third loss function, with the third loss function being different from the first and the second loss functions.

This provides even more insight into how strongly the reconstructed map depends on the loss function, i.e. how stable the reconstructed map is.
In particular, the loss functions for the three autoencoders can be as follows:
- The loss function of the first autoencoder is invariant under a change of sign of said difference.
- The loss function of the second autoencoder weighs positive differences between a reconstructed map and a given map less strongly than negative differences.
- The loss function of the third autoencoder weighs positive differences between a reconstructed map and a given map more strongly than negative differences.

In this case, the first autoencoder favors the most likely reconstructed map, the second autoencoder favors a map that is more likely to represent a better scenario, and the third autoencoder favors a map that is more likely to represent a worse scenario.

The map generated by the first autoencoder may e.g. be used as a “best estimate” for the real map, while the maps of the other autoencoders can be used to determine the reliability of the best estimate towards brighter and darker deviations.
Advantageously, the autoencoders are variational autoencoders.
Variational autoencoders generate less noisy output and, since the parameters in latent space are continuous in the sense that close parameters generate similar output, they can be interpreted, e.g. for judging map similarity, or they can be varied to generate similar reconstructed maps.

In another important embodiment, the autoencoders may share a common decoder. This reduces complexity and makes it possible to interpret the latent space of all the autoencoders in a common way.
In particular, though, the autoencoders have different encoders.
In another aspect, the invention relates to a perimetry device comprising at least the following elements:

- A perimetry measurement unit: This is a device adapted to obtain the visual field map of a patient. It successively presents stimuli to the patient, under various view angles, and the patient provides feedback as to when she/he sees a stimulus. The perimetry measurement unit may be based on static perimetry or kinetic perimetry measurements as known by the skilled person.

- A control unit having a processing tool with autoencoders as obtainable, in particular as obtained, by the method described above.
Such a device can record visual field maps and process them with the autoencoders.
Advantageously, the device may have a display, and the control unit may be adapted to display reconstructed visual field maps derived from all of the autoencoders on this display, thus allowing the user to compare the reconstructed field maps and gain an understanding of their reliability. In particular, it may be adapted to display, on the display device, one or both of the following:
- visual field maps from said autoencoders and/or
- at least one difference between two reconstructed visual field maps from said autoencoders.
The control unit may further be adapted to calculate at least one quality parameter depending on the deviation (e.g. calculated from the value-wise difference or ratio) between the visual field maps reconstructed by at least two of the different autoencoders. If the deviation between the two reconstructed maps is large, the reliability of the measurement may be poor.

In this case, the control unit may further be adapted to automatically repeat visual field measurements depending on a value of the quality parameter. This makes it possible to improve the quality of the measurement without using a large number of measurements.

Further, the control unit may be adapted to calculate several quality parameters for different regions in the visual field and to selectively repeat the visual field measurements only in regions where the quality parameters do not fulfill a given quality criterion. This makes it possible to identify regions of poor quality and to selectively repeat the measurements there, thereby creating a more reliable measurement in a short time.
The invention also relates to using a processing tool as obtainable by the above method for processing visual field maps.
Brief Description of the Drawings

The invention will be better understood and objects other than those set forth above will become apparent when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings, wherein:
Fig. 1 shows an embodiment of a perimetry device,
Fig. 2 shows an embodiment of the autoencoders for such a device, and
Fig. 3 shows examples of visual field maps as measured (first column), the results as obtained by the autoencoders (columns 2, 4, 6), and the differences between the results (columns 3, 5, 7); note: dithering has been used to represent these gray-scale-encoded maps, with dark parts e.g. representing low values and bright parts representing high values, except for the columns showing differences, where bright parts represent small differences and dark parts large differences.
Modes for Carrying Out the Invention
Definitions
An “autoencoder” is, in the present context, a deep neural network having an encoder and a decoder. Advantageously, the encoder is a neural network having an n-dimensional input and an m-dimensional output, with n » m, in particular with n being at least two times as large as m. The decoder has an input of m’ dimensions directly or indirectly connected to the output of the encoder, typically with m’ = m. The output of the decoder has a dimension n’, typically but not necessarily with n’ = n (for examples, see below). The encoder and decoder are trained by feeding maps of a set of visual field maps to the input of the encoder (optionally together with further data, such as demographic data of the patient) and comparing the reconstructed map at the output of the decoder to the input map by means of a loss function. This loss function is minimized.

A “variational autoencoder” (VAE) is an autoencoder trained such that at least some of the parameters passed from the encoder to the decoder (i.e. at least some of the parameters in the “latent space” of the VAE) have or describe a given statistical distribution over the training dataset. In other words, the latent space follows a given distribution. Typically, the parameters of the latent space describe a Gaussian distribution, which may e.g. be enforced by adding the so-called Kullback-Leibler divergence to the loss function. For details, see e.g. Carl Doersch, “Tutorial on Variational Autoencoders”, arXiv:1606.05908v2, 13 August 2016.

A “function f(x, y) estimating the difference between two values” is advantageously understood as being a function that has a minimum for x = y and, from that minimum, increases monotonously for an increasing value of |x − y| (at least within the ranges of x and y that are being used). Examples of such functions are functions depending on (x − y)² only, e.g. f(x, y) = g((x − y)²) with g being monotonous, or functions depending on |x − y| only, such as f(x, y) = g(|x − y|), again with g being monotonous. Advantageously, such a function increases strictly monotonously with increasing value of |x − y|. For numerical optimization purposes during training of a network, such a function is advantageously differentiable.
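To make this definition concrete, here is a minimal NumPy sketch contrasting a sign-invariant difference estimate with a sign-variant one of the kind used for the loss functions below; the function names f_sym and f_asym and the constants are illustrative, not from this text:

```python
import numpy as np

def f_sym(x, y):
    # Invariant under a change of sign of (x - y): f_sym(x, y) == f_sym(y, x).
    return (x - y) ** 2

def f_asym(x, y, k_pos=0.2, k_neg=1.0):
    # Variant under a change of sign: positive differences d = x - y are
    # weighed less strongly than negative ones (since k_pos < k_neg here).
    d = x - y
    return np.where(d >= 0, k_pos, k_neg) * d ** 2

print(f_sym(0.8, 0.5), f_sym(0.5, 0.8))    # approx. 0.09 0.09 (symmetric)
print(f_asym(0.8, 0.5), f_asym(0.5, 0.8))  # approx. 0.018 0.09 (asymmetric)
```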
Perimetry Device
Fig. 1 shows an example of a perimetry device 8 (also called a perimeter) comprising a perimetry measurement unit 10 (schematically shown in sectional view) with e.g. a spherical screen 12, a partially transparent projection mirror 14, and an image projector 16. Another, similar example of such a device is shown in US9084540.

Image projector 16 is used to generate visual stimuli, which are projected, via mirror 14, onto screen 12. A subject, resting her/his head on a headrest 18, observes screen 12 and indicates, by means of an input means, such as a button 20, when a stimulus appears.
The device further comprises a control unit 22, which may e.g. be equipped with a microprocessor 24. Control unit 22 may be physically integrated in measurement unit 10, or it may be a device separate from measurement unit 10.
Control unit 22 further comprises an autoencoder unit 26 (in the present context also called a “processing tool”) with several autoencoders as described below.
Perimetry device 8 further comprises a display unit 28, which is used to display images to a user, such as to an ophthalmologist or physician. In particular, it may display one or more versions of a visual field map, e.g. as color-encoded or grayscale-encoded images.
Display unit 28 may show further information, such as indices (e.g. mean defect, diffuse defect) and reliabilities (e.g. false positive rate, false negative rate).
Control unit 22 operates measurement unit 10 to provide stimuli to the subject at various locations in the visual field, e.g. increasing the strength of a stimulus until the subject responds to it. Thus, it records a visual field map. This visual field map is then processed by means of autoencoder unit 26 as described below.
Autoencoder Unit
An embodiment of autoencoder unit 26 is shown in Fig. 2. In the present embodiment, autoencoder unit 26 implements three autoencoders 30a, 30b, and 30c. Each autoencoder has its own encoder 32a, 32b, and 32c, respectively, but all autoencoders share a common decoder 34.
Each encoder 32a, 32b, and 32c is a deep neural network, which has an n-dimensional input receiving the pixel values X of a measured visual field map, e.g. normalized as real values between 0 and 1. For example, a value of 0 indicates that the respective region of the subject’s visual field cannot detect stimuli, and a value of 1 indicates that the respective region of the subject’s visual field detects stimuli easily.
Each encoder 32a, 32b, and 32c has an m-dimensional output (i.e. generates m output parameters) that defines the latent space 36 of the autoencoder.
In the present embodiment, these parameters are fed to the common decoder 34.
The input of decoder 34 advantageously has the same dimension m as the output of each encoder. The output of decoder 34 has a larger dimension n’ than its input. In one embodiment, it may have the same dimension as the input of the encoders, i.e. n = n’, in particular if the measured maps are the sole input to the encoders and if the outputs of the decoder 34 reconstruct maps of the same resolution as those at the inputs.
Thus, for each encoder, decoder 34 generates a reconstructed map X’, which is again a visual field map.
In the present embodiment, decoder 34 is the same for all autoencoders 30a, 30b, 30c, i.e. it uses the same mathematical operations for generating its output from the latent space of each encoder.

It must be noted that, even though Fig. 2 shows decoder 34 to have three “channels”, each with its input and output, these channels are separate from each other and there is no crosstalk between them. Each channel is processed by the same network with the same training, i.e. based on identical mathematical processing. For example, decoder 34 may process the three channels sequentially, or three identical copies of the decoder process the three channels in parallel.

In the present embodiment, the autoencoders 30a, 30b, 30c are variational autoencoders. Hence, in latent space 36, each encoder 32a, 32b, 32c may e.g. calculate the average value μ as well as the standard deviation σ of each of the m parameters. The average values μ or, in more general terms, a potentially reparameterized latent space, are then forwarded to decoder 34 for decoding (while the standard deviations σ are only used during training, see below, to enforce the parameters in latent space to describe a given distribution).
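The following PyTorch sketch illustrates one possible arrangement of three variational encoders with a shared decoder as just described; the layer widths, the latent dimension, the input size of 54 test locations, and the class names are illustrative assumptions, not values mandated by this text:

```python
import torch
import torch.nn as nn

N_IN, M_LATENT, HIDDEN = 54, 8, 64  # assumed sizes; m chosen within 5..12

class Encoder(nn.Module):
    """One of the three encoders; outputs mean and log-variance per latent dim."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(N_IN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        )
        self.mu = nn.Linear(HIDDEN, M_LATENT)       # average values (mu)
        self.log_var = nn.Linear(HIDDEN, M_LATENT)  # log(sigma^2), used in training

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

class SharedDecoderVAE(nn.Module):
    """Three variational encoders (base/best/worst) feeding one shared decoder."""
    def __init__(self):
        super().__init__()
        self.encoders = nn.ModuleList(Encoder() for _ in range(3))
        self.decoder = nn.Sequential(
            nn.Linear(M_LATENT, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, N_IN), nn.Sigmoid(),  # maps normalized to [0, 1]
        )

    def forward(self, x):
        outs = []
        for enc in self.encoders:
            mu, log_var = enc(x)
            # Reparameterization: sample z ~ N(mu, sigma^2) during training;
            # at inference, mu may be passed to the decoder directly.
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
            outs.append((self.decoder(z), mu, log_var))  # same decoder for all
        return outs  # [(g1, mu1, lv1), (g2, mu2, lv2), (g3, mu3, lv3)]
```

Reusing one decoder module for all three channels mirrors the statement above that the channels are processed by identical mathematical operations with no crosstalk.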
The dimension m of the latent space defines the filtering power of the autoencoders. If it is too low, important features in the visual field maps may be lost. If it is too high, noise and artefacts may be poorly filtered.
Advantageously, the dimension m is between 5 and 12, i.e. each encoder e.g. generates m average values μ.

A typical depth of the encoders 32a, 32b, 32c is between 2 and 10, and a typical depth of the decoder 34 is between 2 and 10.
The number n of inputs to each encoder typically lies between 16 and 200.
Training
As mentioned, autoencoder unit 26 is trained on a training set of visual field maps. Advantageously, this training set comprises a large number of visual field maps measured on real patients, such as at least 10’000 such visual field maps.
Each one of the autoencoders 30a, 30b, 30c is trained with the maps of this training set.
Methods for training neural networks on such datasets are, per se, known.
Generally, the three autoencoders could be trained separately, with suitable reconstruction losses as described below. If the autoencoders are to share a common decoder, as in the shown embodiment, they are advantageously trained together.
In this case, advantageously, training involves adjusting the connections between the nodes of the neural networks to minimize the following loss function

$$L_{\mathrm{tot}} = \mathrm{KLD} + \sum_{i=1}^{3} L_i(g_i, r) \qquad (1)$$

with KLD being e.g. the sum of the KL divergences associated with each of the autoencoders, g_i (with i = 1 for “base”, i = 2 for “best”, and i = 3 for “worst”) being the reconstructed maps X’ at the output of the decoder, r being the real (measured) map at the input of the encoders, and L_i(g_i, r) being map loss functions (reconstruction losses) with i = 1 to 3 describing how deviations between g_i and r are to be weighed.
KLD is used to train the autoencoders to be variational autoencoders. It forces the values σ and μ to describe a Gaussian distribution. Any other regularization loss can be used as well.

The autoencoders 30a, 30b, 30c differ because different map loss functions L_i are used to train them.
In the present embodiment, the first autoencoder 30a is trained to generate a “base map” X’base, i.e. the most probable reconstructed map. Hence, its map loss function L_1 estimates the differences between the values of g1 and r and is invariant under a change of sign of these differences.

In a specific example, L_1 may e.g. be defined as

$$L_1 = \sum_i (g1_i - r_i)^2 \qquad (2)$$

with the sum extending over all values i of the maps and r_i, g1_i being the values of the maps X and X’base.

In more general terms, L_1 may depend on a function f estimating the difference between r_i and g1_i as defined above, e.g.

$$L_1 = \sum_i f(g1_i, r_i),$$

where f(g1_i, r_i) is invariant under a change of sign of this difference, i.e. f(r_i, g1_i) = f(g1_i, r_i).
The second autoencoder 30b is trained to generate a “best map” X’best, i.e. a reconstructed map that is optimistic in the sense that it favors higher sensitivity values in the reconstructed map. To do so, an asymmetric loss function is used, one that weighs positive differences d_i = g2_i − r_i between the values of the reconstructed map X’best and the given map X less strongly than negative differences.

In a specific example, L_2 may be defined as

$$L_2 = \sum_i k_i \, (g2_i - r_i)^2 \quad \text{with} \quad k_i = \begin{cases} k_+ & \text{for } g2_i \geq r_i \\ k_- & \text{for } g2_i < r_i \end{cases} \qquad (3)$$

wherein k_+ and k_− are positive constants with k_+ < k_−.

In more general terms, L_2 may depend on a function f estimating the difference between r_i and g2_i as defined above, e.g.

$$L_2 = \sum_i f(g2_i, r_i),$$

where f(g2_i, r_i) is variant under a change of sign of this difference, i.e. f(r_i, g2_i) ≠ f(g2_i, r_i), with f(g2_i, r_i) < f(r_i, g2_i) for g2_i > r_i.
The third autoencoder 30c is trained to generate a “worst map” X’worst, i.e. a reconstructed map that is pessimistic in the sense that it favors lower sensitivity values in the reconstructed map. Again, an asymmetric loss function is used, one that weighs positive differences d_i = g3_i − r_i between the values of the reconstructed map X’worst and the given map X more strongly than negative differences.
In a specific example, L_3 may be defined as

$$L_3 = \sum_i k_i \, (g3_i - r_i)^2 \quad \text{with} \quad k_i = \begin{cases} k_+ & \text{for } g3_i \geq r_i \\ k_- & \text{for } g3_i < r_i \end{cases} \qquad (4)$$

wherein k_+ and k_− are positive constants with k_+ > k_−.

In more general terms, L_3 may depend on a function f estimating the difference between r_i and g3_i as defined above, e.g.

$$L_3 = \sum_i f(g3_i, r_i),$$

wherein f(g3_i, r_i) is variant under a change of sign of this difference, i.e. f(g3_i, r_i) > f(r_i, g3_i) for g3_i > r_i, and f(g3_i, r_i) < f(r_i, g3_i) for g3_i < r_i.
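The following Python sketch restates the three map loss functions in the spirit of Eqs. (2) to (4); the concrete constants and function names are illustrative assumptions, not values from this text:

```python
import torch

def loss_base(g, r):
    # Sign-invariant map loss in the spirit of Eq. (2).
    return ((g - r) ** 2).sum()

def loss_asymmetric(g, r, k_plus, k_minus):
    # Weighs positive differences d = g - r with k_plus and negative
    # differences with k_minus, in the spirit of Eqs. (3) and (4).
    d = g - r
    w = torch.full_like(d, k_minus)
    w[d >= 0] = k_plus
    return (w * d ** 2).sum()

def loss_best(g, r, k_plus=0.2, k_minus=1.0):
    # Eq. (3): k_plus < k_minus, so brighter reconstructions (g > r)
    # are penalized less and the map drifts toward better sight.
    return loss_asymmetric(g, r, k_plus, k_minus)

def loss_worst(g, r, k_plus=1.0, k_minus=0.2):
    # Eq. (4): k_plus > k_minus, so brighter reconstructions
    # are penalized more and the map drifts toward poorer sight.
    return loss_asymmetric(g, r, k_plus, k_minus)
```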
The autoencoders are trained by adapting the weights in the neural networks in order to minimize the loss function (1) over the training set.
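A possible training step minimizing Eq. (1) could then look as follows; this is a sketch assuming the model and loss functions sketched above, with the KLD term written in the closed form for a Gaussian latent prior:

```python
import torch

def kl_divergence(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over batch and latent dims.
    return 0.5 * (mu ** 2 + log_var.exp() - 1.0 - log_var).sum()

def training_step(model, x, optimizer):
    # One gradient step on Eq. (1): the KL divergences of the three latent
    # spaces plus the three map losses, all minimized jointly.
    optimizer.zero_grad()
    total = x.new_zeros(())
    map_losses = (loss_base, loss_best, loss_worst)  # from the sketch above
    for (g, mu, log_var), map_loss in zip(model(x), map_losses):
        total = total + kl_divergence(mu, log_var) + map_loss(g, x)
    total.backward()  # adapts encoder weights and the shared decoder weights
    optimizer.step()
    return total.item()
```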
Results
After training the autoencoders 30a, 30b, and 30c as described in the previous section, measured visual field maps are applied to them, and the outputs X’base, X’best, and X’worst are recorded as e.g. shown in Fig. 3. In addition, Fig. 3 shows the differences X’base - X’worst, X’best - X’base, and X’best - X’worst.
The first two rows of Fig. 3 represent measurements that are fairly robust in the sense that the differences between X’base, X’best, and X’worst are small (i.e. the differences of the maps are low (dark)).
The last two rows of Fig. 3 represent measurements that are fragile in the sense that the three autoencoders yield strongly differing results (i.e. the differences of the maps are high (bright)).

Hence, by inspecting at least two of the visual field maps as generated by the autoencoders, a measure of the reliability of a measurement can be obtained.

For example, control unit 22 may display at least two of the visual field maps generated by the autoencoders on display device 28 for the ophthalmologist or physician to interpret them. These visual field maps may e.g. be displayed as color-encoded or grayscale-encoded images as shown in Fig. 3.

In another example, control unit 22 may display at least one difference between two reconstructed visual field maps from said autoencoders on display device 28. Such a difference may e.g. be displayed as a color-encoded or a grayscale-encoded image as shown in the differences columns of Fig. 3.

Alternatively or in addition thereto, the system may e.g. display an image of binary pixels indicating the areas where the difference exceeds a given threshold and/or it may show one or more quality parameters derived from the difference between the reconstructed visual field maps as described in the next section.

Quality Parameter(s)

The reconstructed visual field maps of two or more of the autoencoders can, as mentioned, be used to calculate at least one quality parameter Q for a given measurement of the visual field map. This quality parameter can be derived from the difference between at least two reconstructed visual field maps X'_1, X'_2.
In a simple embodiment, this quality parameter may be a scalar value describing the reliability of a visual field map measurement as a whole. In particular, it may be at least one of the following:
- The quality parameter may be a function F estimating a difference between the mean or median values of the two reconstructed visual maps (in the sense as defined above), i.e.

$$Q = F(\mathrm{median}(X'_1), \mathrm{median}(X'_2)) \qquad (5a)$$

or

$$Q = F(\mathrm{mean}(X'_1), \mathrm{mean}(X'_2)) \qquad (5b)$$
- Calculating the quality parameter may comprise the step of first calculating, for all value pairs of the two reconstructed visual maps, a function f estimating the difference between them (as defined above). These value-wise difference estimates are then summed, and the sum may optionally be run through a monotonous, in particular strictly monotonous, function F:

$$Q = F\Big(\sum_i f(X'_{1,i}, X'_{2,i})\Big) \qquad (6)$$

wherein the sum runs over all the values i of the reconstructed visual field maps.
- In a more advanced embodiment, several quality parameters may be calculated for different regions in the visual field. For example, a quality parameter Q_i for each value i of the two reconstructed visual field maps X'_1, X'_2 may be calculated using a function f estimating the difference between two values (as defined above), i.e.

$$Q_i = f(X'_{1,i}, X'_{2,i}) \qquad (7)$$
- Alternatively to the previous example, several multi-pixel regions i of the reconstructed visual field maps may be defined, with a single quality parameter being assigned to each such region. Such a quality parameter may be calculated as in Eq. (5a), (5b), or (6), with the median, mean, or sum extending over the pixels of the given region only.

The one or more quality parameters may be used to control the operation of the perimetry device. In particular, if the quality parameter for the whole visual field or a region thereof falls below a given threshold, control unit 22 may be adapted to repeat visual field measurements either in the whole visual field or in the poor-quality region(s).

The quality parameter is advantageously derived from at least one of the following pairs of reconstructed visual field maps:
- The maps X’best and X’worst,
- The maps X’best and X’base,
- The maps X’base and X’worst.
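The following sketch illustrates quality parameters in the spirit of Eqs. (5a), (6), and (7), here computed with a squared-difference estimate for f and the identity for F; the threshold value, the region handling, and all names are illustrative assumptions:

```python
import numpy as np

def q_global_median(x1, x2):
    # Eq. (5a): difference estimate between the medians of the two maps.
    return (np.median(x1) - np.median(x2)) ** 2

def q_global_sum(x1, x2):
    # Eq. (6): sum of value-wise difference estimates over the whole map.
    return np.sum((x1 - x2) ** 2)

def q_per_value(x1, x2):
    # Eq. (7): one quality parameter per map value.
    return (x1 - x2) ** 2

def regions_to_remeasure(x_best, x_worst, regions, threshold=0.05):
    # Regions whose difference-based quality parameter fails the criterion;
    # larger values here mean larger disagreement between reconstructions,
    # hence poorer reliability, so measurements would be repeated there.
    q = q_per_value(x_best, x_worst)
    return [r for r in regions if q[r].mean() > threshold]
```

Here, regions would be index sets over the map’s test locations; for a whole-field decision, q_global_sum can be compared against a threshold in the same way.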
Data Preprocessing
Perimetry typically employs the decibel scale, with its unit of measurement being the logarithmic decibel (dB). The decibel range depends on the perimeter type and e.g. ranges, in the fovea, from 0 dB to approximately 34 dB. A sensitivity value of 0 dB e.g. means that a patient is not able to see the most intense perimetric stimulus that the device can display (e.g. 4’000 asb), whereas values close to e.g. 34 dB represent normal foveal vision for a 20-year-old person. Advantageously, the visual field maps fed to the autoencoder use such a logarithmic scale, too, for obtaining a wider range of meaningful data, but they may also use a linear scale.
Before feeding a recorded visual field map (from a training dataset or from a measurement during use of the device) to one of the autoencoders, the map may be processed. In particular, it may e.g. be processed in one or more of the following manners, in any suitable order:
a) Left and right visual fields are mapped into the same system of coordinates in order to process them in the same autoencoders. For example, one side-type of maps (e.g. the left maps) may be swapped horizontally while the other (e.g. the right maps) are not swapped.
b) The maps may be normalized to have values within a given range, such as 0 and 1 in the example above.
c) The maps may be normalized using demographic data of the respective subject, in particular their age and/or gender. For example, the pixel values of a visual field map on a logarithmic scale as mentioned above may be subtracted from those of an average person having the same age and gender, respectively.
d) Further steps may comprise rescaling, denoising, or calibrating for device hardware.
Advantageously, at least the preprocessing steps a) and b) are used, advantageously also step c).
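A minimal sketch of preprocessing steps a) to c) might look as follows; the normalization constant and the normative baseline lookup are illustrative assumptions, not specifics from this text:

```python
import numpy as np

DB_MAX = 34.0  # approximate upper end of the dB scale mentioned above

def preprocess(map_db, is_left_eye, age, gender, normative_baseline):
    """Apply steps a) to c) to one recorded visual field map (2D dB array)."""
    x = np.asarray(map_db, dtype=float)
    if is_left_eye:
        x = x[:, ::-1]          # a) mirror left maps into the right-map frame
    x = x / DB_MAX              # b) normalize to the range [0, 1]
    # c) subtract an age/gender-matched mean map (normalized the same way);
    #    normative_baseline is a hypothetical lookup supplied by the caller.
    x = x - normative_baseline(age, gender)
    return x
```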
Alternative Autoencoder Unit Designs
In the examples above, the encoders 32a, 32b, 32c have the values of a visual field map X as inputs. They may have further values as inputs, such as demographic data of the subject, e.g. age, and/or reliability indices (e.g. false positive rate, false negative rate), which may have a systematic influence on the visual field. In this case, the dimension n of the input of the encoders 32a, 32b, 32c may e.g. be larger than the dimension n’ of the output of decoder 34.
Autoencoder unit 26 may also have only two autoencoders, such as the second and third autoencoders 30b, 30c, to estimate the best and worst reconstructed maps. In this case, if needed, the base reconstructed map may be calculated from the best and worst reconstructed maps, e.g. from the average values of the best and the worst reconstructed map.
Autoencoder unit 26 may also comprise more than three autoencoders, e.g. for better assessing best and/or worst case scenarios.
In the above examples, the autoencoders 30a, 30b, 30c were trained to reconstruct base, best, and worst estimates for the map. Alternatively or in addition thereto, autoencoder unit 26 may comprise at least one autoencoder trained for a different goal with a different loss function. For example, there may be at least one autoencoder favoring a visual field archetype. Archetypes are typical visual field defects, such as e.g. described by Elze et al., “Patterns of functional vision loss in glaucoma determined with archetypal analysis”, Journal of the Royal Society Interface, 12(103), 20141118, https://doi.org/10.1098/rsif.2014.1118. Such an autoencoder may be used for estimating the likelihood that a measured visual field map corresponds to one of the archetypes. In particular, there may be several such autoencoders for different archetypes, e.g. for comparing the likelihoods that a measured visual field map corresponds to one of them.
In the examples above, variational autoencoders have been used. As mentioned, variational autoencoders generate less noisy output and, since the parameters in latent space are continuous in the sense that close parameters generate similar output, they can be interpreted, e.g. for judging map similarity, or they can be varied to generate similar maps. A particularly advantageous type of autoencoder is adapted to carry out deep archetypal analysis as described by Keller et al., "Learning Extremal Representations with Deep Archetypal Analysis", https://arxiv.org/abs/2002.00815.
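For reference, a minimal sketch of the standard variational autoencoder objective (reconstruction error plus KL divergence) and of latent-space similarity; in the tool above, the reconstruction term would be replaced by the respective loss function L1, L2, or L3, and the weighting factor beta is an assumption:

```python
import torch


def vae_loss(x: torch.Tensor, x_hat: torch.Tensor,
             mu: torch.Tensor, log_var: torch.Tensor,
             beta: float = 1.0) -> torch.Tensor:
    """Standard variational autoencoder objective: reconstruction error
    plus the KL divergence of the approximate posterior N(mu, sigma^2)
    from a unit Gaussian prior."""
    reconstruction = torch.mean((x - x_hat) ** 2)
    kl = -0.5 * torch.mean(1.0 + log_var - mu.pow(2) - log_var.exp())
    return reconstruction + beta * kl


def latent_similarity(mu_a: torch.Tensor, mu_b: torch.Tensor) -> torch.Tensor:
    """Distance between latent codes as an illustrative similarity
    measure: close codes decode to similar maps, so a small distance
    suggests similar visual fields."""
    return torch.norm(mu_a - mu_b)
```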
Alternatively, conventional autoencoders, i.e. non-variational autoencoders, may be used.
Notes
The autoencoders can be implemented in hardware, such as in neural network circuitry, or they may be implemented in software.
The autoencoders may be run concurrently, or, in particular if implemented in software, they can be executed sequentially.
Advantageously, the neural networks used for the encoders and decoders are fully connected neural networks, i.e. networks where each node in one layer is connected to all the nodes in the next layer. Experiments run on convolutional layers have been found to yield useful results.
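A minimal sketch of such a fully connected encoder/decoder pair in PyTorch; the layer widths, the latent size, and the 54 map values are illustrative assumptions:

```python
import torch
import torch.nn as nn


class FullyConnectedAutoencoder(nn.Module):
    """Illustrative fully connected autoencoder: every node of one
    layer is connected to every node of the next (nn.Linear), with no
    convolutional layers."""

    def __init__(self, n: int = 54, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n, 32), nn.ReLU(),
            nn.Linear(32, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, n),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # encode the map into latent space and reconstruct it
        return self.decoder(self.encoder(x))
```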
While there are shown and described presently preferred embodiments of the invention, it is to be distinctly understood that the invention is not limited thereto but may be otherwise variously embodied and practiced within the scope of the following claims.

Claims
1. A method for providing a processing tool for processing a visual field of a patient, wherein said tool comprises at least a first and a second autoencoder (30a, 30b, 30c), said method comprising the steps of:
- training said first autoencoder (30a) to reproduce maps of a training set of visual field maps using a first loss function (L1), and
- training said second autoencoder (30b) to reproduce the maps of the training set using at least a second loss function (L2, L3),
wherein said second loss function (L2, L3) is different from said first loss function (L1).
2. The method of claim 1 wherein at least one (L1) of the loss functions depends on a function estimating the difference between a given map and a reconstructed map and is invariant under a change of sign of said difference.
3. The method of any of the preceding claims wherein at least one (L2, L3) of the loss functions depends on a function estimating the difference between a given map and a reconstructed map and is variant under a change of sign of said difference.
4. The method of any of the preceding claims wherein
- one (L2) of said loss functions weighs positive differences between a given map and a reconstructed map less strongly than negative differences, and
- another one (L3) of said loss functions weighs positive differences between a given map and a reconstructed map more strongly than negative differences.
5. The method of any of the preceding claims wherein said tool further comprises a third autoencoder (30c), said method comprising the steps of training said third autoencoder (30c) to reproduce the training set of visual field maps using a third loss function (L3), wherein said third loss function (L3) is different from said first and said second loss functions (L1, L2).
6. The method of claim 5, wherein
- the loss function (L1) of the first autoencoder (30a) is invariant under a change of sign of said difference,
- the loss function (L2) of the second autoencoder (30b) weighs positive differences between a reconstructed map and a given map less strongly than negative deviations, and
- the loss function (L3) of the third autoencoder (30c) weighs positive differences between a reconstructed map and a given map more strongly than negative deviations.
7. The method of any of the preceding claims wherein said autoencoders (30a, 30b, 30c) are variational autoencoders.
8. The method of any of the preceding claims wherein said autoencoders (30a, 30b, 30c) share a common decoder (34).
9. The method of any of the preceding claims wherein said autoencoders have different encoders (32a, 32b, 32c).
10. A use of a processing tool as obtainable or obtained by the method of any of the preceding claims for processing visual field maps.
11. A perimetry device comprising a perimetry measurement unit (10) and a control unit (22) having a processing tool (26) with autoencoders (30a, 30b, 30c) as obtainable by the method of any of the claims 1 to 10.
12. The device of claim 11 further comprising a display device (28) wherein said control unit (22) is adapted
- to display reconstructed visual field maps derived from said autoencoders (30a, 30b, 30c) on said display device (28) and/or
- to display at least one difference between two reconstructed visual field maps from said autoencoders on said display device (28).
13. The device of any of the claims 11 or 12 wherein said control unit (22) is adapted to calculate at least one quality parameter (Q, Qi) depending on a deviation between the visual field maps reconstructed by at least two of the different autoencoders (30a, 30b, 30c).
14. The device of claim 13 wherein said control unit (22) is adapted to automatically repeat visual field measurements depending on a value of said quality parameter (Q, Qi).
15. The device of claim 14 wherein said control unit (22) is adapted to calculate several quality parameters (Qi) for different regions in a visual field and to selectively repeat the visual field measurements only in regions where said quality parameters (Qi) do not fulfill a quality criterion.