IL305329A - Method, system and computer program product for determining optometric parameters

Method, system and computer program product for determining optometric parameters

Info

Publication number
IL305329A
Authority
IL
Israel
Prior art keywords
test subject
eye
test
visual
refraction
Application number
IL305329A
Other languages
Hebrew (he)
Original Assignee
Rodenstock Gmbh
Application filed by Rodenstock Gmbh filed Critical Rodenstock Gmbh
Publication of IL305329A

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/103: Objective types for determining refraction, e.g. refractometers, skiascopes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028: Subjective types for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/04: Trial frames; Sets of lenses for use therewith
    • A61B 3/113: Objective types for determining or recording eye movement


Description

The invention relates to a method, a system and a computer program product for determining optometric parameters of a test subject.
Subjective refraction involves determining the refractive power of that optical correction with which an eye or both eyes of a test subject produce a sharp image of a distant object. Standardized procedures exist for performing subjective refraction. These procedures are carried out by a refractionist such as an optometrist or ophthalmologist, traditionally using measuring devices such as trial frames, test lenses and/or a phoropter. These measuring devices can be operated either manually or electrically by the refractionist.
In subjective refraction, the subjective visual impression of the test subject is decisive for determining the required optical correction. The test subject communicates with the refractionist by giving them acoustic feedback regarding a visual task set for them. The refractionist receives the feedback from the test subject and controls the refraction process depending on the feedback from the test subject.
The refractionist is needed during the refraction to evaluate the feedback the test subject gives on the visual tasks and to control the refraction process. On the one hand, this means that the refractionist must spend time determining the visual acuity of the test subject. On the other hand, the operation and execution of the refraction are error-prone because they are controlled by a human being, and further errors can occur when the acoustic feedback is given and/or received.
The invention is based on the object of enabling an improved method for performing a determination of optometric parameters, in particular to support a refractionist in performing a refraction and/or to enable an at least partially automatic performance of a refraction.
This object is achieved by the subject matters of the independent claims. Preferred embodiments are the subject matters of the dependent claims.
One aspect relates to a method for determining optometric parameters of a test subject, comprising the steps:
- setting an optical correction for at least one eye of the test subject by means of a refraction unit;
- displaying a test image for determining the subjective refraction of the test subject;
- detecting a viewing direction and/or an orientation of the at least one eye of the test subject by means of an eye tracking unit while the test subject looks at the displayed test image;
- generating an eye signal which contains information about the detected viewing direction and/or orientation of the at least one eye of the test subject; and
- determining optometric parameters of the test subject by evaluating the eye signal in accordance with the displayed test image.
In the method, the individual method steps do not necessarily have to be carried out in the order specified above. This means that the individual method steps can be carried out either in the specified order, in a different order and/or at least partially simultaneously.
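Purely as an illustration, the following Python sketch shows one way these steps could be orchestrated in a loop over candidate corrections and test images; the unit objects and their methods (refraction_unit.set_correction, display_unit.show, eye_tracker.capture) are hypothetical placeholders and not part of the disclosure.

```python
# Hypothetical orchestration of the claimed steps; all unit interfaces are placeholders.
def run_trials(refraction_unit, display_unit, eye_tracker, trials):
    """Run one gaze-based trial per (optical correction, test image) pair."""
    recorded = []
    for correction, test_image in trials:
        refraction_unit.set_correction(correction)   # set an optical correction for the eye
        display_unit.show(test_image)                # display the test image
        eye_signal = eye_tracker.capture()           # viewing direction/orientation while looking
        recorded.append((correction, test_image, eye_signal))
    return recorded   # evaluated afterwards to determine the optometric parameters
```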
The refraction unit can be configured and/or used to manipulate curvatures of wavefronts and, if necessary, additionally a mean propagation direction and/or wavelength and/or intensity and/or a polarization state of the light emanating from the test image and to guide the light thus manipulated into the at least one eye of the test subject. For this purpose, the optical correction is defined with which the light entering the eye is manipulated. The optical correction can either be introduced as an object into the propagation path of the light, or be simulated purely virtually within the framework of a projection, e.g. by means of the phoropter and/or by means of a light field display.
The refraction unit can be configured, for example, as a phoropter, in particular as an automated phoropter, which comprises refractive and/or diffractive elements such as lenses and/or gratings. Alternatively, refraction glasses can be used for this purpose. In this case, the refractive and/or diffractive elements serving as the optical correction can be adaptive, e.g. configured as a deformable mirror and/or a deformable lens.
In particular, a zero correction can also be used as the optical correction, e.g. a window glass without actual optical effect, or the omission of any correcting object. Thus, for example, the visual acuity of the uncorrected, "naked" eye can be determined. In general, the optical corrections may correspond to an optical effect held before the eye, in particular a spherical and/or cylindrical optical effect.
The test image can be displayed on a display unit. The display unit can be separate and/or independent from the refraction unit and e.g. be operated by the refractionist. Alternatively, the display unit may be electrically connected to and/or communicate with the refraction unit, wherein it is controlled partially or fully automatically. The display unit can display the test image on a display panel, for example on a screen.
A test image is displayed which shows different visual objects such as pictures and/or symbols which are to be recognized and/or identified by the test subject. The subjective refraction of the test subject can be determined depending on how well the test subject recognizes the visual objects displayed in the test image. At least one visual task can be assigned to each test image, which can be set for the test subject when the test image is displayed, in order, for example, to be able to determine the subjective visual acuity of the test subject depending on the optical correction.
In an embodiment, the refraction unit and the display unit may be combined in such a way that they comprise a projection unit which projects light of a virtual test image directly into the at least one eye of the test subject depending on the defined optical correction.
When looking at the test image, the eye tracking unit detects the viewing direction and/or the orientation of the at least one eye of the test subject. The viewing direction and the eye orientation depend on each other, so that it is sufficient to detect either the viewing direction or the orientation of the at least one eye. In particular, it can be detected on which part of the test image the test subject is looking, i.e. on which area of the display panel the test subject is currently looking. The eye tracking unit can either detect only the viewing direction and/or orientation of exactly one eye of the test subject, or that of both eyes. For this purpose, the eye tracking unit may comprise at least one camera that captures images and/or videos of the at least one eye while the test subject looks at the displayed test image. Furthermore, the eye tracking unit may comprise and cooperate with program parts that perform image recognition on the captured images and/or videos to determine the viewing direction and/or orientation of the at least one eye based on one or more image features. In any case, the eye tracking unit generates measurement data relating to the viewing direction and/or the orientation of the at least one eye of the test subject, which may contain images and/or videos. In particular, a viewing movement can also be detected, i.e. a temporally variable viewing direction of the at least one eye.
The eye signal is created from the measurement data acquired by the eye tracking unit. The eye signal contains information about the detected viewing direction and/or orientation. In this case, the eye signal can be digital and contain e.g. a direction vector together with a starting point of the direction vector in three-dimensional space, e.g. a center of rotation of the eyeball and/or a point on the retina of the test subject and/or a point of the pupil of the test subject such as a center of the pupil. The eye signal is sufficiently detailed, sufficiently accurate, and/or has sufficient resolution to be able to determine from the eye signal which area of the test image the respective eye of the test subject is currently looking at.
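For illustration only, such an eye signal could be held in a small data structure like the following sketch; the field names and units are assumptions rather than part of the disclosure.

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EyeSignal:
    origin: Tuple[float, float, float]     # starting point in mm, e.g. eyeball rotation center or pupil center
    direction: Tuple[float, float, float]  # viewing direction as a 3D direction vector
    timestamp: float                       # acquisition time in seconds (useful for time-dependent signals)

    def unit_direction(self) -> Tuple[float, float, float]:
        """Return the normalized viewing direction."""
        x, y, z = self.direction
        norm = math.sqrt(x * x + y * y + z * z)
        return (x / norm, y / norm, z / norm)

# Example: gaze slightly to the right, originating at an assumed eye rotation center
signal = EyeSignal(origin=(0.0, 0.0, -13.0), direction=(0.1, 0.0, 1.0), timestamp=0.04)
print(signal.unit_direction())
```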
In the method, the eye signal is generated and/or evaluated in accordance with that test image which is displayed exactly when the viewing direction and/or orientation associated with this eye signal is detected by the eye tracking unit.
The test image can be calibrated relative to the refraction unit, wherein in particular a distance of the test image from the refraction unit can be known. This distance can be taken into account, for example, in a calculation of the visual acuity. In general, the axial distance and/or a lateral relative position can be known in advance as a calibration. Then information can be accessed about the distance and/or direction in which individual areas of the test image are arranged relative to the refraction unit. Calibration data of this calibration can be used in the evaluation of the eye signal, for example to determine a visual acuity of the test subject from the distance and size of visual objects displayed on the test image. The evaluation of the eye signal can include information about how far away the pupil(s) of the test subject is/are from the refraction unit and in which direction; the reference point used can in particular be a pupil center point, an eye rotation point, a point on the retina of the test subject, and/or a reflex, in particular a Purkinje reflex.
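As a worked example of the relation between object size, distance and visual acuity mentioned above, the following sketch uses the common optometric convention that a decimal acuity of 1.0 corresponds to a critical detail (e.g. the gap of a Landolt ring) subtending one arcminute; the concrete numbers are illustrative only.

```python
import math

def visual_angle_arcmin(detail_size_mm: float, distance_mm: float) -> float:
    """Angle subtended by a critical detail of a visual object, in arcminutes."""
    return math.degrees(math.atan2(detail_size_mm, distance_mm)) * 60.0

def decimal_acuity(detail_size_mm: float, distance_mm: float) -> float:
    """Decimal visual acuity: 1.0 corresponds to a resolved detail of one arcminute."""
    return 1.0 / visual_angle_arcmin(detail_size_mm, distance_mm)

# A 1.45 mm Landolt-ring gap seen from 5 m subtends about 1 arcmin, i.e. an acuity of about 1.0
print(round(decimal_acuity(1.45, 5000.0), 2))
```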
Alternatively or additionally, a calibration can be used for which the exact positioning of the refraction unit relative to the test image need not be known. As described in more detail below, during a calibration procedure, the test subject may be asked to look at and/or fixate predetermined areas and/or points of the test image. An eye signal associated with these predetermined areas and/or points can be registered and used as calibration.
The evaluation of the eye signal comprises in any case a determination of the area of the test image which the test subject is currently looking at. The test image may comprise several areas that differ from one another, the fixation of which leads to respectively different eye signals. These different eye signals are distinguished from each other during the evaluation. For example, several different visual objects may be displayed on the test image and the test subject may be asked to look at a particular one of the visual symbols. The resolution and/or accuracy of the eye tracking unit is sufficient to recognize which visual object the test subject looks at, e.g., in response to a corresponding request.
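A minimal sketch of how this evaluation could decide which test image area is being fixated, assuming a calibrated geometry in which the display panel lies in a known plane and the areas are axis-aligned rectangles (all names and coordinates are hypothetical):

```python
def gaze_point_on_panel(origin, direction, panel_z_mm):
    """Intersect the gaze ray with a display panel lying in the plane z = panel_z_mm."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = (panel_z_mm - oz) / dz          # ray parameter at the panel plane
    return (ox + t * dx, oy + t * dy)   # (x, y) position on the panel, in mm

def fixated_area(gaze_xy, areas):
    """Return the name of the test image area whose rectangle contains the gaze point."""
    x, y = gaze_xy
    for name, (x0, y0, x1, y1) in areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical layout: two visual objects, left and right of the panel center
areas = {"left_symbol": (-200.0, -100.0, -20.0, 100.0),
         "right_symbol": (20.0, -100.0, 200.0, 100.0)}
hit = gaze_point_on_panel(origin=(0.0, 0.0, 0.0), direction=(0.1, 0.0, 1.0), panel_z_mm=600.0)
print(fixated_area(hit, areas))   # -> "right_symbol"
```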
In principle, a large number of different test images and associated visual tasks can be used for this purpose. Exemplary configurations of such visual tasks and associated test images are described in more detail below. One example of a test image with an associated visual task is the display of two visual objects, such as visual symbols, which differ with respect to at least one property, for example with respect to their contrast and/or size. When looking at this test image with the two different visual objects, the test subject can be asked to look at the visual object which appears sharper and/or more visible to them. The direction in which the test subject looks can be used to draw conclusions about the optometric parameters of the test subject, in particular their subjective visual acuity. This can be effected, for example, in the context of a subjective refraction. In this context, subjective means that the result of the measurement procedure depends on the subjective visual perception of the test subject.
The method supports the refractionist in such a way that possible sources of error, which are based e.g. on the human factor, can be reduced. Error sources that can be reduced by the method include an acoustic misunderstanding of the test subject's feedback, the giving of incorrect feedback by the test subject, and/or the incorrect registration of the feedback by the refractionist. For example, automatic detection of the viewing direction can prevent the test subject from giving an incorrect indication as to which of the displayed visual objects they can see more sharply and/or better. Furthermore, the detection of the viewing direction and/or the orientation of the eye enables at least a partial automation of a subjective refraction, in particular a full automation of a subjective refraction, in which a refractionist is no longer absolutely necessary. This makes it possible, for example, to perform a subjective refraction at a distance, e.g. at the home of the test subject, without the refractionist having to be on site with the test subject.
This can facilitate the implementation of a subjective refraction, at least for the refractionist, and human sources of error can at least be reduced when determining the optometric parameters. The optometric parameters may relate to information about the at least one eye of the test subject, in particular also about the visual system of the test subject as a whole. The optometric parameters may be subjective optical parameters which depend on the subjective visual perception of the test subject. In particular, these can be subjective refraction values, i.e. those refraction values at which the test subject subjectively perceives the best visual impression. In particular, the optometric parameters may include refraction values that are optimized for near and/or distance vision. The optometric parameters may also relate to and/or include physiological parameters. The optometric parameters may include subjective optical parameters such as a subjectively determined best optical correction, i.e., subjective refraction, and/or objective optical parameters such as aberrometric measurement data and/or pupillometric measurement data. The optometric parameters may include information on visual acuity and/or contrast vision.
According to an embodiment, at least one refraction value, at least one sensitivity, at least one contrast vision parameter, and/or at least one visual acuity value is determined as the optometric parameters. In particular, the optometric parameters may be a single parameter. Preferably, the optometric parameters comprise at least one refraction value, in particular at least one subjective refraction value, which is optimized for near vision and/or distance vision, for example. The refraction value may correspond to an optical correction, i.e. an applied optical effect. The optometric parameters can be determined monocularly and/or binocularly for the test subject.
According to an embodiment, the eye signal is generated such that it has a horizontal and a vertical component, which are dependent on the viewing direction and/or the orientation of the at least one eye. In particular, the eye signal may have at least one horizontal and/or at least one vertical component. These components may be proportional, to a linear approximation, to a horizontal and/or vertical component of the viewing direction and/or orientation. In particular, the eye signal can be created as a vector which has at least one horizontal and/or at least one vertical component. The vector may be oriented approximately parallel to the viewing direction and/or orientation of the at least one eye. By detecting both the horizontal and the vertical component of the viewing direction and/or orientation, sufficient specificity can be achieved to determine from the eye signal which area of the test image the test subject is currently looking at. In this context, the eye signal can in particular be vector-like and/or contain a vector, comprising both a vector direction and a base point of the vector. The use of a vector as eye signal is mathematically easy to evaluate by means of known vector calculation, if necessary taking into account a calibration, e.g. a calibration of the displayed test image relative to the eye and/or the refraction unit.
In general, the eye signal can be configured as a one-to-one (bijective) function of the viewing direction and/or orientation. Thus, a function value of the eye signal is assigned to each viewing direction and/or orientation, from which that viewing direction and/or orientation can in turn be inferred.
According to an embodiment, the eye signal is generated by means of a signal unit from measurement data generated by the eye tracking unit. These measurement data may in particular contain one or more images, videos or the like. The signal unit may be at least partially software-controlled and/or include image recognition software, in particular image recognition software trained by means of "machine learning". The signal unit may be formed as part of the eye tracking unit. Alternatively, the signal unit can be configured as part of a controller and/or control unit, in particular as part of a computer system with a processor, which controls and/or regulates one, several or all of the method steps.
According to an embodiment, the test image is displayed by a display unit which is controlled by a controller. The display unit can, for example, have a screen, a projection and/or a holographic display unit. The display unit may be electronically controlled by the controller such that it displays a test image selected by the controller on a display panel. The controller may optionally also be configured and/or used to control the refraction unit, the eye tracking unit, the signal unit, if present, and/or an evaluation unit, if present, for evaluating the eye signal. The degree of automation of the process depends on how many of these units the controller actually controls and/or regulates. If the controller controls all of the implemented units listed above, the method can be carried out fully automatically. The controller can ensure that detection of the viewing direction and/or orientation is performed exactly when the display unit displays a new test image that the test subject is looking at.
According to an embodiment, a calibration is performed during which calibration correction data is generated regarding a deviation of the viewing direction and/or orientation determined by means of the eye tracking unit from the actually assumed viewing direction and/or orientation of the at least one eye of the test subject. In this process, a calibrated eye signal is generated and evaluated, wherein the viewing direction and/or orientation detected by the eye tracking unit is or are corrected by the calibration correction data in the calibrated eye signal. Depending on the method used by the eye tracking unit to detect the viewing direction and/or orientation, the determined viewing direction may deviate from the actual viewing direction of the test subject. This deviation can have different causes and can be compensated by the calibration. For calibration, the test subject can be told to first look at a predetermined area of a display panel on which the test image can later be displayed. For example, the test subject may first fixate an upper area, a lower area, a left area, and/or a right area of the display panel. In doing so, the eye tracking unit detects the directly determined viewing direction and/or orientation. A few selected areas and/or points on the display panel can be enough for an adequate calibration. The calibration can be performed automatically, in particular controlled by an AI or a machine. After calibration, the actual viewing direction and/or orientation of the eye can be determined more reliably and evaluated as a calibrated eye signal.
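As an illustrative sketch only, such calibration correction data could be obtained by a least-squares fit that maps the raw gaze coordinates reported by the eye tracking unit onto the known positions of the fixated calibration targets; the affine model and all coordinates below are assumptions.

```python
import numpy as np

def fit_gaze_calibration(raw_points, true_points):
    """Fit an affine correction mapping raw gaze coordinates to known fixation targets.

    raw_points, true_points: (n, 2) arrays collected while the test subject fixates
    a few predetermined areas and/or points of the display panel.
    """
    raw = np.asarray(raw_points, dtype=float)
    true = np.asarray(true_points, dtype=float)
    design = np.hstack([raw, np.ones((raw.shape[0], 1))])    # one row [x, y, 1] per sample
    coeffs, *_ = np.linalg.lstsq(design, true, rcond=None)   # least-squares affine fit, shape (3, 2)
    return coeffs

def apply_calibration(coeffs, raw_xy):
    """Correct a single raw gaze point with the fitted calibration."""
    corrected = np.array([raw_xy[0], raw_xy[1], 1.0]) @ coeffs
    return float(corrected[0]), float(corrected[1])

# Hypothetical example: four calibration targets near the panel corners (values in mm)
raw = [(12.0, -8.0), (190.0, -5.0), (14.0, 140.0), (188.0, 142.0)]
true = [(0.0, 0.0), (200.0, 0.0), (0.0, 150.0), (200.0, 150.0)]
coeffs = fit_gaze_calibration(raw, true)
print(apply_calibration(coeffs, (100.0, 70.0)))
```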
In an embodiment, the eye signal is calibrated by detecting the viewing direction and/or orientation of the at least one eye of the test subject by means of an eye tracking unit while the test subject is looking at predetermined areas and/or points. This can be effected similarly to the calibration described above, wherein the test subject can be instructed to first look at a predetermined area of a display panel on which the test image can later be displayed. For example, the test subject may first fixate an upper area, a lower area, a left area, and/or a right area of the display panel. In doing so, the eye tracking unit detects the directly determined viewing direction and/or orientation. A few selected areas and/or points on the display panel can be enough for an adequate calibration. The calibration can take place automatically, in particular controlled by an AI or a machine. After this calibration, the viewing directions and/or orientations of the eye detected by the eye tracking unit can be assigned to points and/or areas of the display unit.
These calibrations can make a geometric and/or optical calibration superfluous, in which knowledge about distances of the test subject and/or the refraction unit from the test image and/or knowledge about optical beam paths and/or wavefronts would be necessary. Thus, direct calibration at the test display panel can make further geometric and/or optical calibration unnecessary and greatly simplify the calibration process.
According to an embodiment, a time-dependent eye signal is generated which contains information regarding directional changes of the viewing direction and/or orientation of the at least one eye of the test subject. A viewing movement and/or orientation movement can thus be derived and evaluated from the time-dependent eye signal. This makes it possible to detect several successive viewing directions and/or to use moving test images, such as a test video, during the viewing of which the test subject is prompted and/or requested to perform a viewing movement dependent on the test video.
According to an embodiment, the test image is used to set a viewing direction-dependent visual task for the test subject, in which conclusions are drawn about the optometric parameters of the test subject with the optical correction set in each case in accordance with the detected and evaluated viewing direction and/or orientation of at least one eye of the test subject. Examples of such viewing direction-dependent visual tasks are described in more detail below. Different types of viewing direction-dependent visual tasks can be set. Here, the viewing direction and/or eye orientation adopted by the test subject can depend on which of several displayed visual objects the test subject can recognize well or poorly. An area of the test image and/or display panel, which the test subject looks at in the context of the visual task, can be dependent on the visual acuity (e.g. dependent on the optical correction). By detecting the area of the test image which the test subject looks at during the visual task, an evaluation of the refraction used and/or the visual acuity of the test subject can be made and these can be determined as optometric parameters.
According to an embodiment, the test image is configured in such a way that it causes the test subject to unconsciously assume a predetermined viewing direction and/or perform a predetermined viewing movement depending on their visual acuity. By means of the eye tracking unit, the viewing direction and/or the viewing movement and/or the orientation of the at least one eye of the test subject is detected and registered as a passive feedback while the test subject is looking at this test image. In visual tasks with a passive feedback, the test subject is unintentionally prompted by the image itself to assume a predetermined viewing direction and/or to perform a predetermined viewing movement. This can be effected, for example, by presenting the test subject with two fields or panels as a test image. One of the two fields can be empty, the other field can show a visual object. If the vision of the test subject is sufficient to recognize the visual object, they will unconsciously and automatically look at the field with the visual object. If they cannot see it, they will show no preference for either field. Similarly, the test image can be configured as a moving test image that stimulates the test subject to perform a predetermined viewing movement depending on their visual acuity. In particular, this can be a viewing movement in which the test subject follows a visual stimulus, such as a visual symbol, with their view, provided that they can recognize the displayed visual stimulus well. The viewing movement can thus be configured in particular as a viewing movement following a visual stimulus. In this visual task with passive feedback, it is thus not necessary to instruct the test subject and/or to explain to the test subject how to solve the visual task. Thus, this type of visual task is particularly suitable for test subjects who cannot be expected to follow any instructions (e.g. children), test subjects who cannot express themselves otherwise, e.g. due to a disability, and/or test subjects who do not speak the language of the refractionist.
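One conceivable way to evaluate such passive feedback automatically is to compare how long the gaze dwells on each field, in the spirit of preferential-looking testing; the following sketch, including its field names and the 2:1 preference threshold, is purely illustrative.

```python
def dwell_times(samples):
    """Accumulate total fixation time per field from (timestamp_s, field_name) gaze samples."""
    totals = {}
    for (t0, field), (t1, _) in zip(samples, samples[1:]):
        if field is not None:
            totals[field] = totals.get(field, 0.0) + (t1 - t0)
    return totals

def preferred_field(samples, min_ratio=2.0):
    """Passive feedback: report a field only if it attracts clearly more gaze time than the others."""
    totals = dwell_times(samples)
    if len(totals) < 2:
        return max(totals, default=None)
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[0][0] if ranked[0][1] >= min_ratio * ranked[1][1] else None

# Hypothetical gaze samples: the subject keeps returning to the field showing the visual object
samples = [(0.0, "empty_field"), (0.2, "object_field"), (0.4, "object_field"),
           (0.6, "object_field"), (0.8, "object_field"), (1.0, "object_field"),
           (1.2, "object_field"), (1.4, "empty_field"), (1.5, None)]
print(preferred_field(samples))   # -> "object_field", i.e. the object was presumably recognized
```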
According to an embodiment, the test subject is requested to give a predetermined active feedback depending on their optometric parameters when looking at the test image, which is done at least partially by means of a viewing direction actively assumed by the test subject, from the detection of which information on the optometric parameters of the test subject is determined. As an alternative to the passive feedback described above, active feedback can thus be provided for solving the visual task. It should be noted that the use of a visual task with active feedback does not exclude the use of a visual task with passive feedback. For example, tasks with passive feedback can be performed after or before visual tasks with active feedback, or vice versa, so that different types of visual tasks can be part of the same measurement procedure. Usually, in visual tasks with active feedback, the visual task is explained to the test subject, in particular the direction in which they should look when performing the visual task. The viewing direction to be adopted can depend on the visual acuity and/or at least one other optometric parameter of the test subject. For example, the test subject may be asked to look at the one of a plurality of visual symbols that they subjectively see best and/or better and/or with greater contrast.
In a further development of the embodiment with the active feedback, the active feedback comprises, in addition to the actively taken viewing direction, an additional feedback, which is given by the test subject in at least one of the following ways:
- by manually pressing a trigger;
- by performing a gesture;
- by blinking or closing the at least one eye;
- by acoustic feedback.
Manual actuation of a trigger can be effected by means of a hand and/or foot, such as pressing a button or pedal. Alternatively, it can be effected by an actuation such as swiping over a touch-sensitive field such as a touchpad and/or touch display. Gesture control may be made possible by performing a gesture. For example, at the moment the test subject looks at an area of the test image that is dependent on the visual acuity of the test subject, the test subject can perform a confirmation gesture, such as with the hand and/or foot. A deliberate blinking and/or closing of the at least one eye for a predetermined period of a few seconds can be directly detected by means of the eye tracking unit. For example, the test subject may be instructed to first look at a specific area of the test image and blink immediately thereafter. Acoustic feedback can also be registered. Here, the acoustic feedback can preferably be as simple as possible, for example consisting only of a sound such as "mhm", for which the test subject does not have to open their mouth, so that their viewing direction remains as unchanged and/or unaffected as possible when giving the acoustic feedback. The combination of actively assumed viewing direction and additional feedback enables a particularly reliable and intentionally given feedback, which makes it possible to reliably register the viewing direction belonging to the test image.
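As a sketch of how a deliberate blink could serve as such an additional feedback, the following illustrative routine returns the test image area that was fixated immediately before each blink reported by the eye tracking unit; the sample format and timing threshold are assumptions.

```python
def blink_confirmed_areas(samples, blink_times, max_gap_s=0.5):
    """Return the areas fixated just before each deliberate blink.

    samples: list of (timestamp_s, area_name) gaze samples; blink_times: timestamps of
    blinks detected by the eye tracking unit and used as additional active feedback.
    """
    confirmations = []
    for blink_t in blink_times:
        before = [(t, a) for t, a in samples if t <= blink_t and a is not None]
        if before and blink_t - before[-1][0] <= max_gap_s:
            confirmations.append(before[-1][1])   # area fixated right before the blink
    return confirmations

samples = [(0.0, "left_symbol"), (0.4, "left_symbol"), (0.8, "right_symbol"), (1.2, "right_symbol")]
print(blink_confirmed_areas(samples, blink_times=[1.4]))   # -> ['right_symbol']
```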
In an embodiment of the method with active feedback, this active feedback includes that the test subject recognizes and fixates one of several visual objects displayed as a test image. When fixating the visual object, the test subject adopts the actively assumed viewing direction, which is detected and registered as part of the active feedback. For example, two different visual symbols can be displayed as a visual task and the test subject can be asked to fixate and look at the visual object that they recognize better.
In general, the test subject can be instructed to look at the visual object that has a stronger or weaker expression of a given property.
According to an embodiment of the method with a visual task with active feedback, this active feedback includes that the test subject actively adopts a viewing direction which depends on a property of at least one visual object displayed as a test image, which the test subject recognizes. The visual task can be configured in such a way that it can only be solved by the test subject if they recognize the displayed visual object. If, for example, a Landolt ring commonly used for visual tasks is used as a visual symbol and visual object, the test subject can be instructed to look actively in the direction and/or at the edge of the test image to which the Landolt ring is open. The viewing direction then taken is detected and evaluated in the form of the eye signal. In this type of visual task, for example, only a single visual symbol can be displayed. Test variants can also be used in which the viewing direction to be adopted can be written out. For example, "up", "down", "right" or "left" can be displayed as a visual object so that the test subject can follow this instruction written as a visual object with their viewing direction if they can solve the visual task.
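For the Landolt-ring example just described, the gaze response could be checked along the lines of the following sketch, which maps the horizontal and vertical gaze components (relative to the displayed optotype) to one of the four principal directions and compares it with the known opening of the ring; thresholds and names are assumptions.

```python
def gaze_response(gaze_dx: float, gaze_dy: float, threshold: float = 0.2):
    """Classify the gaze offset relative to the optotype as up, down, left or right."""
    if abs(gaze_dx) < threshold and abs(gaze_dy) < threshold:
        return None                                   # still fixating the optotype itself
    if abs(gaze_dx) >= abs(gaze_dy):
        return "right" if gaze_dx > 0 else "left"
    return "up" if gaze_dy > 0 else "down"

def task_solved(gaze_dx: float, gaze_dy: float, ring_opening: str) -> bool:
    """True if the test subject looked towards the side to which the Landolt ring is open."""
    return gaze_response(gaze_dx, gaze_dy) == ring_opening

print(task_solved(0.7, 0.1, "right"))   # -> True, opening recognized
print(task_solved(-0.6, 0.0, "up"))     # -> False, opening not recognized
```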
According to an embodiment with active feedback, this active feedback includes that the test subject first adopts a viewing direction relevant for the feedback, which depends on at least one visual object displayed as a test image and which is detected by the eye tracking unit, and then looks at an actuation field, which is also detected by the eye tracking unit. Here, for example, the test subject may first fixate a visual object displayed on the test image for a predetermined time and then fixate an actuation field. The actuation field may be a single field or a plurality of fields, which is displayed either as part of the test image or adjacent to the test image. In an embodiment for this purpose, it is first detected which of the displayed visual objects the test subject fixates. This detection of fixation can be confirmed by an automatic feedback, for example by a colored or other marking of the visual object thus fixated, for example by a circle or the like. Here, the test subject can thus select one of the displayed visual objects and activate it with their view, similar to clicking with a computer mouse. Subsequently, the test subject can confirm their selection by fixating the actuation field. Further actuation fields can be provided, the fixation of which leads to a cancellation, to an unmarking of the selected visual object, to a pause, or to similar functions. The test subject can steer a visible or invisible cursor through the test image with the help of their view.
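A dwell-time state machine is one conceivable way to implement this gaze-based select-and-confirm interaction; the following sketch, including the area names "confirm" and "cancel" and the one-second dwell threshold, is purely illustrative.

```python
def evaluate_gaze_sequence(samples, dwell_s=1.0):
    """Turn a sequence of (timestamp_s, area_name) gaze samples into a confirmed selection.

    The test subject first fixates one of the visual objects for at least dwell_s seconds
    (marking it as selected) and then fixates the 'confirm' actuation field to confirm;
    fixating 'cancel' clears the selection again.
    """
    selected, current, fixation_start = None, None, None
    for t, area in samples:
        if area != current:                  # gaze moved on to a new area
            current, fixation_start = area, t
        dwell = t - fixation_start
        if area == "confirm" and selected is not None and dwell >= dwell_s:
            return selected                  # selection confirmed via the actuation field
        if area == "cancel" and dwell >= dwell_s:
            selected = None                  # selection cleared
        elif area not in ("confirm", "cancel", None) and dwell >= dwell_s:
            selected = area                  # visual object marked as selected
    return None

samples = [(0.0, "symbol_B"), (0.6, "symbol_B"), (1.2, "symbol_B"),   # select symbol_B
           (1.5, "confirm"), (2.0, "confirm"), (2.6, "confirm")]      # then confirm it
print(evaluate_gaze_sequence(samples))   # -> "symbol_B"
```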
According to an embodiment with active feedback, this active feedback includes that the test subject follows at least one visual object displayed as a test image with their view and the viewing direction and/or orientation of their at least one eye is detected by the eye tracking unit as a viewing movement and/or eye movement in a time-dependent manner. In this process, it can be checked whether the test subject recognizes the displayed visual object to such an extent that they can reliably follow it. The evaluation then relates not only a single viewing direction and/or orientation to the test image, but a movement sequence of the viewing direction to a test video. In this case, the eye signal generated from the detected measurement data can also be time-dependent and evaluated in this time dependence.
Alternatively, it is also possible that the test subject follows at least one visual object displayed as a test image with their view as part of a passive and/or unconscious feedback and the viewing direction and/or orientation of their at least one eye is detected by the eye tracking unit as a time-dependent viewing movement and/or eye movement.
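Whether the test subject reliably follows the moving object could, for instance, be quantified by correlating the gaze trace with the object trajectory; this sketch computes a simple pursuit-quality score, with sampling, lag and noise values invented purely for illustration.

```python
import numpy as np

def pursuit_quality(target_xy, gaze_xy):
    """Mean Pearson correlation of the horizontal and vertical components of target and gaze.

    target_xy, gaze_xy: (n, 2) arrays sampled at the same time points; values near 1
    indicate that the test subject reliably follows the displayed visual object.
    """
    target = np.asarray(target_xy, dtype=float)
    gaze = np.asarray(gaze_xy, dtype=float)
    r_x = np.corrcoef(target[:, 0], gaze[:, 0])[0, 1]
    r_y = np.corrcoef(target[:, 1], gaze[:, 1])[0, 1]
    return (r_x + r_y) / 2.0

# Example: the gaze lags slightly behind a moving target and is noisy, but tracks it well
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 100)
target = np.stack([100.0 * np.sin(t), 20.0 * t], axis=1)
gaze = np.stack([100.0 * np.sin(t - 0.05), 20.0 * t + rng.normal(0.0, 2.0, t.size)], axis=1)
print(round(pursuit_quality(target, gaze), 2))   # close to 1.0
```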
According to an embodiment, a visual task associated with the displayed test image is explained to the test subject in at least one of the following ways:
- by displaying at least one text, image and/or video;
- by giving an acoustic explanation; and/or
- by a chatbot or agent, which is controlled by artificial intelligence and with which the test subject communicates acoustically and/or in text form.
The explanation can contain how the test subject can control and/or solve the visual task associated with the test image with their view. Furthermore, the explanation can contain how the test subject can give an active feedback which contains their solution of the visual task. Since the exact visual acuity of the test subject has not yet been determined when the task is explained, it may be difficult to explain the visual task in a text form that the test subject may not yet be able to read with sufficient acuity. Therefore, an acoustic explanation of the visual task is preferred, in particular an explanation by an AI-controlled chatbot.
According to an embodiment, several at least partially different test images are displayed to the test subject in succession and/or at least partially different optical corrections are presented to the at least one eye in succession and/or simultaneously. The viewing direction and/or orientation of the test subject for each of the displayed test images and/or for each optical correction presented is detected and evaluated in accordance with the respective test image displayed and/or the respective optical correction presented. For example, the test images can differ in that they display smaller and smaller visual objects and it is checked down to which size of visual symbols the test subject can still recognize them. The sequence of the displayed test images can follow a previously determined pattern and, in particular, be controlled fully automatically. Alternatively, the test images can be activated one after the other by a refractionist and only the gaze evaluation performed automatically. The selection of the respective following test image can depend on the extent to which the test subject was able to recognize the current test image. In the same way, the different optical corrections can be selected and presented to the at least one eye. The selection of the optical correction used in each case can depend on the gaze evaluation. The optical corrections can differ in particular in their spherical and/or cylindrical strength as well as their cylinder axis. In the course of the procedure, both the test images and the optical corrections provided and/or applied can be varied. It is possible that a plurality of different optical corrections are simultaneously presented to the at least one eye, through which the test subject can, for example, view different areas of the test image. For this purpose, a light field display can be used, for example, which is described in more detail below. Here, the test subject can be asked to look at the area which is subjectively best perceived by the test subject.
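As a minimal sketch of such a sequence with successively smaller visual objects, the following illustrative routine walks from large to small optotype sizes and returns the smallest size the test subject still recognized; in practice the recognition oracle would be the automatic gaze evaluation, and all numbers are invented.

```python
def smallest_recognized_size(recognized, sizes_mm):
    """Show successively smaller visual objects and return the smallest recognized size.

    recognized: callable size_mm -> bool, e.g. backed by the gaze-based evaluation of
    whether the test subject solved the visual task for an optotype of that size.
    """
    last_recognized = None
    for size in sorted(sizes_mm, reverse=True):   # from large to small optotypes
        if recognized(size):
            last_recognized = size
        else:
            break                                  # stop once recognition fails
    return last_recognized

# Hypothetical oracle: assume the subject resolves optotype details down to 2 mm
sizes = [10.0, 8.0, 6.0, 4.0, 3.0, 2.0, 1.5]
print(smallest_recognized_size(lambda s: s >= 2.0, sizes))   # -> 2.0
```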
According to a further development of this embodiment, the at least partially different test images are displayed to the test subject by means of a light field display. The light field display can at least partially generate the at least partially different optical corrections. This means that the light field display either generates the optical effects of the different optical corrections alone, or that the different optical corrections are composed of, for example, a physically provided optical correction and additional optical effects generated by the light field display. Details of this embodiment are given in more detail below.
According to an embodiment, the method is performed semi-automatically or fully automatically and a subjective refraction is performed, and/or the visual acuity and/or the visual sensitivity of the test subject is determined semi-automatically or fully automatically. During the refraction, one or more intermediate results can be stored in order to generate sufficient measurement data from various visual tasks when approaching the actually required optical correction, from which the sensitivity of the visual acuity can be determined. The visual acuity sensitivity can be used to adjust the addition and/or the refractive transition of ophthalmic lenses to the visual acuity sensitivity of the test subject.
One aspect relates to a method for determining optometric parameters of a test subject, in particular according to the preceding aspect, comprising the steps:
- providing a light field display;
- actuating the light field display such that the light field display shows a test image for determining the subjective refraction of the test subject, wherein
- the test image has a plurality of test image areas;
- the light field display simulates, for each of the test image areas, an assigned optical correction for at least one eye of the test subject in such a way that the impression is created that the at least one eye is looking at the respective test image area through the respective assigned optical correction;
- at least two of the simulated, assigned optical corrections differ from each other with respect to their optical effect; and
- the test image areas with the assigned optical corrections are displayed simultaneously; and
- determining the optometric parameters of the test subject in accordance with the displayed test image and the simulated optical corrections.
In this method too, the individual method steps do not necessarily have to be carried out in the sequence listed above. This means that the individual method steps can be carried out either in the specified order, in a different order and/or at least partially simultaneously.
The method may be either an embodiment of the preceding aspect or a method independent thereof. As an embodiment of the preceding aspect, the light field display can be used both (at least partially) as the refraction unit and for displaying the test image. By means of the eye tracking unit, the viewing direction and/or orientation of the at least one eye may be detected so as to determine which of the displayed test areas with the simulated associated optical correction the test subject is currently looking at.
However, even without the eye tracking unit, i.e. as an independent aspect, the simultaneous display of different test image areas with different assigned optical corrections offers technical advantages. Simultaneous display enables direct comparison of the different optical corrections with each other. In conventional refraction methods, test images are displayed one after the other. The test subject is not always sure which display actually gives a better visual impression. This often leads to repeated switching back and forth between test images and/or presented corrections. With the simultaneous display, the different optical corrections can be compared directly and immediately with each other, making it easier to determine and/or select the subjectively best and/or subjectively better visual impression. This can enable a more reliable determination of the optometric parameters.
Displaying the test image areas simultaneously, i.e. at the same time, can also speed up and/or shorten the parameter determination.
The test image areas can either all display the same image and/or visual object, or at least partially display different images and/or visual objects. The display of the same images and/or visual objects in the different test image areas can facilitate the selection of the best assigned optical correction. However, it is also possible to display a large contiguous test image that extends over all or at least several of the test image areas. Different optical corrections can be assigned to individual areas of the test image.
Combined with the eye tracking unit, automatic feedback detection can be effected, so that the process can be automated and/or human errors can be reduced.
When using the light field display, a phoropter as a refraction unit can be dispensed with, as can physically existing optical corrections. This makes it possible to reduce the components required and/or the material costs.
The light field display can generate a light field which is projected into the eye(s) of the test subject. The light field display can simulate different spherical and/or cylindrical optical effects without presenting them physically, or as an object, to the eye(s). The light field display can show the test image from several viewing angles simultaneously. This allows, for example, two images - one for each eye - with slightly different perspectives to be displayed simultaneously to create a 3D image. The displayed test image can be adapted to the position of the test subject. Thereby, the test subject and/or their eyes can be detected, e.g. by a camera and/or the eye tracking unit, if present. 3D structures behind the monitor plane can be simulated and/or light sources that illuminate individual areas differently.
The light field display can simulate the different spherical and/or cylindrical optical effects simultaneously and/or consecutively. The simultaneously simulated effects can be simulated on separate objects (e.g. optotypes), which are perceived at different positions, so that the observer can perceive them locally separated from each other. For example, different spherical optical effects can be simulated at the same time, and then (e.g. for an optimized spherical optical effect) several different cylindrical optical effects can be simulated at the same time.
A light field display can be configured as an e.g. planar light source in which the position and/or the direction and/or the intensity and/or the color of the light emission can be varied. For this purpose, the light field display can have, for example, a lens array and/or lenslet array, which can be arranged at the distance of the focal length of the individual lenses in front of a monitor which has a pixel array. Depending on the control, a 4D light field can be generated in this way. The spatial resolution can depend on the number of individual lenses of the lens array and/or lenslet array, while the angular resolution of the light field display may depend on the number of monitor pixels of the monitor behind each individual lens. By means of calibration, at least one individual lens and/or a radiation direction of the generated part of the light field can be or will be assigned to each monitor pixel.
With a conventional monitor, which comprises a pixel array, diffuse radiation occurs. By combining this monitor pixel array with a lens array, i.e. an array of lenslets and/or micro lenses, a directional radiation can be achieved from each superpixel. Thereby, each superpixel is formed by all pixels arranged behind each lens of the lens array (i.e. behind each lenslet) and/or pixels assigned to this lens.
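To make the trade-off between spatial and angular resolution described above concrete, the following sketch estimates both quantities for a lenslet-based light field display; the numerical values are invented examples, not parameters of the disclosed display.

```python
import math

def light_field_resolution(pixels_x: int, pixel_pitch_mm: float,
                           lenslet_pitch_mm: float, focal_length_mm: float):
    """Estimate spatial and angular resolution of a lenslet-based light field display.

    The spatial resolution is set by the number of superpixels (lenslets); the angular
    resolution by how many monitor pixels sit behind each lenslet and by the lenslet
    focal length (the pixel array lies in the focal plane of the lenslets).
    """
    pixels_per_lenslet = lenslet_pitch_mm / pixel_pitch_mm         # pixels behind one lenslet (1D)
    n_superpixels = pixels_x / pixels_per_lenslet                  # spatial samples across the display
    angular_step_deg = math.degrees(math.atan2(pixel_pitch_mm, focal_length_mm))
    angular_range_deg = math.degrees(math.atan2(lenslet_pitch_mm, focal_length_mm))
    return n_superpixels, angular_step_deg, angular_range_deg

# Hypothetical numbers: 3840-pixel-wide monitor, 0.1 mm pixels, 1 mm lenslets, 5 mm focal length
print(light_field_resolution(3840, 0.1, 1.0, 5.0))   # -> 384 superpixels, ~1.1 deg steps over ~11 deg
```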
Alternatively or in addition to the lens array, the light field display may comprise at least one array of micromirrors. Two or more arrays of micromirrors can also be arranged in series to generate the light field.
Furthermore, a light field display with at least one scanning mirror can be used, e.g. with a single scanning mirror or a combination of two or more scanning mirrors in series.
The lenses and/or mirrors of the light field display can each be configured to be monochromatic or multicolored, in particular red or green or red/green. This makes it possible to exploit chromatic aberration of the eye to determine optometric parameters, e.g. by bichromatic refraction methods, red-green tests and/or red-green matching. The simultaneous imaging with different corrections additionally allows a better comparison.
For example, the light field display can show at least one optotype and/or another visual object in each test image area, and additionally simulate a different optical effect of the correction for each test image area. At least two, preferably all, of the simultaneously simulated optical corrections differ from each other with respect to their spherical and/or cylindrical effect.
The light field display simultaneously displays at least two test image areas. In some embodiments, three, four or even more test image areas may be displayed simultaneously. Preferably, about two to a maximum of eight test image areas are displayed simultaneously. The number of test image areas displayed simultaneously may therefore be limited upwardly so as not to overwhelm and/or confuse the test subject. In individual embodiments, however, this maximum number can also be exceeded.
The number of displayed test image areas can be variable. For example, a large number of test image areas with different optical corrections can be displayed first, and the number can then be reduced to a few test image areas (e.g. those providing a particularly good subjective visual impression). At least in one step of the process, only two different test image areas with two different optical corrections can be displayed to the test subject, thus enabling a direct comparison of a specific change. This can be effected in particular in the final steps of a subjective refraction determination, cf. method step e) of an exemplary embodiment for performing a subjective refraction described below.
A 2D, a 3D, and/or a 4D test image can be displayed as the test image, which comprises several test image areas.
The test image areas can be approximately the same size and/or displayed uniformly on an inspection plane, in particular in rows and/or columns.
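One conceivable bookkeeping for such a row-and-column layout of equally sized test image areas, each tagged with its assigned simulated correction, is sketched below; the class and field names as well as the example corrections are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TestImageArea:
    bounds: Tuple[float, float, float, float]   # (x0, y0, x1, y1) on the inspection plane, in mm
    sphere_dpt: float                           # simulated spherical effect assigned to this area
    cylinder_dpt: float = 0.0                   # simulated cylindrical effect
    axis_deg: float = 0.0                       # cylinder axis

def layout_areas(corrections: List[Tuple[float, float, float]],
                 columns: int, cell_w: float, cell_h: float) -> List[TestImageArea]:
    """Arrange one equally sized test image area per candidate correction in rows and columns."""
    areas = []
    for i, (sph, cyl, axis) in enumerate(corrections):
        row, col = divmod(i, columns)
        x0, y0 = col * cell_w, row * cell_h
        areas.append(TestImageArea((x0, y0, x0 + cell_w, y0 + cell_h), sph, cyl, axis))
    return areas

# Example: six candidate corrections differing only in spherical effect, shown in a 3 x 2 grid
candidates = [(-1.50, 0.0, 0.0), (-1.25, 0.0, 0.0), (-1.00, 0.0, 0.0),
              (-0.75, 0.0, 0.0), (-0.50, 0.0, 0.0), (-0.25, 0.0, 0.0)]
for area in layout_areas(candidates, columns=3, cell_w=120.0, cell_h=120.0):
    print(area.bounds, area.sphere_dpt)
```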
In one method step, different corrections can be simulated and thus "displayed" simultaneously only with respect to the spherical effect, and in another method step, different corrections can be simulated simultaneously only with respect to the cylindrical effect. In one method step, for example, different corrections can be simulated and thus "displayed" simultaneously only with respect to the axis position of the cylindrical effect, and in another method step only corrections differing from one another with respect to the cylinder strength. In this or a similar way, the simulated, different optical corrections can be meaningfully grouped, in particular grouped according to method steps for performing a subjective refraction (cf. method steps a) to e) described below).
In an embodiment, different optical corrections are simulated simultaneously at least with respect to their cylindrical effect and/or axial position.
When determining the optometric parameters, optical corrections can be simulated by means of the light field display, which not only simulate spherical and cylindrical corrections, but with which a determination of J0 and J45 and/or Harris vectors and/or power vectors can be performed.
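For reference, J0 and J45 relate to a sphero-cylindrical correction via the commonly used power-vector convention, M = S + C/2, J0 = -(C/2) cos(2a), J45 = -(C/2) sin(2a) with cylinder axis a; the short sketch below performs this conversion and is illustrative only.

```python
import math

def power_vector(sphere_dpt: float, cylinder_dpt: float, axis_deg: float):
    """Convert a sphero-cylindrical correction (S, C, axis) to power-vector form (M, J0, J45)."""
    m = sphere_dpt + cylinder_dpt / 2.0                                   # spherical equivalent
    j0 = -(cylinder_dpt / 2.0) * math.cos(math.radians(2.0 * axis_deg))   # with/against-the-rule component
    j45 = -(cylinder_dpt / 2.0) * math.sin(math.radians(2.0 * axis_deg))  # oblique component
    return m, j0, j45

# Example: S = -2.00 dpt, C = -1.00 dpt, axis 30 degrees
print(tuple(round(v, 3) for v in power_vector(-2.00, -1.00, 30.0)))   # -> (-2.5, 0.25, 0.433)
```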
Test images each with a red and a green area can be displayed by the light field display. This allows a bi-chromatic refraction method to be performed, in which, for example, chromatic aberration of the eye can be exploited and/or determined. Thus, red-green tests and/or (binocular) red-green alignments can be performed. If these images are displayed simultaneously at different simulated optical effects by the light field display, this allows a better comparison of the different optical corrections.
The light field display can be used to generate light fields with a test image for a single eye, and/or to generate binocular light fields in which a dedicated optical effect is assigned to each eye and simulated.
With such binocular light fields, dedicated test images can be displayed to each eye, showing each eye different objects and/or the same object.
Binocular light fields can be generated to display test images to each eye to determine phoria and/or corrective prisms.
For example, a light field can be generated that displays a circle to one eye and a cross to the other, each with an assigned optical effect (corresponding to the assigned optical correction). If the test subject sees the cross centrally in the circle, binocular fine-tuning may have been achieved. Alternatively, other symbols and/or objects can be displayed, which together and when correctly aligned produce a coherent image.
Binocular light fields can be generated such that at least one dedicated object is displayed to each eye and additionally at least one common object for both eyes. This can generate a fusion stimulus which can be used to perform binocular refractions e.g. with binocular fine adjustment.
A light field control can be provided, via which light field control signals are sent to the light field display for displaying certain test image areas and/or for simulating certain optical corrections. These light field control signals may be automatically generated and/or processed and/or generated by a refractionist. The selection of at least some of the light field control signals may depend on feedback from the test subject, in particular feedback detected by means of an eye tracking unit and/or by the refractionist.
From the various test image areas displayed, the test subject can select the one or ones for which they have the best subjective visual impression. For this purpose, the test subject can give feedback, e.g. by telling a refractionist which of the test image areas produces the subjectively best visual impression for the test subject. The feedback can also be detected by an eye tracking unit, a speech recognition unit, and/or a button or the like.
As a result of the procedure, a (monocular or binocular) best subjective refraction result can be determined as the optometric parameters of the test subject. The optometric parameters result from the displayed test image and the respective assigned simulated optical corrections. The optometric parameters may also relate to and/or include physiological parameters. The optometric parameters may include subjective optical parameters such as a subjectively determined best optical correction, i.e., a subjective refraction, and/or objective optical parameters such as aberrometric measurement data and/or pupillometric measurement data. The optometric parameters may include information on visual acuity and/or contrast vision.
In an embodiment, the light field display is used to display optical corrections with higher order optical effects (i.e., at least third order), especially simultaneously. This makes it possible to also investigate the need for higher order optical effects during refraction. Here, corrections with second-order optical effects can be displayed, i.e. corrections with spherical and/or cylindrical optical effects, as well as additionally (or exclusively) corrections with third- and/or fourth-order optical effects, i.e. corrections for e.g. coma, asymmetry error, spherical aberration, trefoil and/or pentafoil. Optical effects with even higher orders can also be used, but the ones specified above are the most relevant in calculation and/or manufacture of glasses.
By using the light field display, the selection options of presented optical corrections can be increased and/or made more flexible. The desired optical effect can be created and/or simulated immediately without having to produce a physical correction (e.g. in advance).
Since this method may be embodied as an embodiment of the aspect described above, the embodiments relating to the aspect described above may also relate to this method and vice versa. In any case, this may also apply to those embodiments which do not concern the eye tracking unit and/or the eye signal, and which may thus be independent of the aspect described above.
According to an embodiment, the optical effects of the different optical corrections are generated from an interaction of an optical effect of at least one physical optical correction and an additional optical effect generated by the light field display. The physically provided correction can be, for example, at least one common lens with a known optical effect, or a dedicated lens for each eye, such as old, already existing glasses and/or test glasses.
According to an embodiment, the physically provided correction comprises at least one classical lens.
According to an embodiment, the physically provided correction comprises at least one adaptive lens and/or other adaptive element.
According to an embodiment, the physically provided correction comprises at least one displaceable lens and/or a lens system with at least two lenses, which are arranged displaceably relative to one another and/or to the pixel array in such a way that different spherical and/or cylindrical optical effects can be produced thereby.
According to an embodiment, the physically provided correction comprises at least one classical lens as well as at least one adaptive lens and/or another adaptive element.
According to an embodiment, a lens from a lens magazine is selected as the physically provided correction, such as in a classic phoropter. This allows the use of a plurality of lenses with different optical effects.
According to an embodiment, at least one Fresnel lens is used as a physically provided correction. This makes it possible to realize a lens with a large area, which can be arranged e.g. close and/or immediately adjacent to the light field display positioned e.g. several meters away from the test subject.
The optical effect of the total correction provided consists of the optical effect of the physically provided optical correction(s) and the additional optical effects generated by the light field display. The physically provided optical correction can be arranged in the beam path between the test subject and all test image areas and thus roughly preset the optical corrections. In addition, the light field display can be used to fine-tune the optical effect coarsely preset by the physically provided optical correction. In this way, the optical effect of the physically provided correction can be additionally changed by about +/-0.50 dpt or about +/-1.00 dpt in individual test image areas by means of the light field display.
If, for example, a physically provided correction (for one or both eyes) of +5.00dpt is used, this optical effect can be fine-adjusted between +4.50dpt and +5.50dpt by means of the light field display in the individual test image areas.
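Purely as an illustration of this combination (not a prescribed implementation), the total effect per test image area can be computed as in the following Python sketch; all names and the assumed dynamic range are chosen freely for the example:

# Illustrative sketch: total spherical effect per test image area when a
# physically provided correction is combined with a light field display
# that can add a limited fine adjustment (assumed dynamic range +/-0.50 dpt).

def total_effects(physical_sphere_dpt, fine_steps_dpt, dynamic_range_dpt=0.50):
    """Return the total spherical effect shown in each test image area.

    physical_sphere_dpt: effect of the physically provided correction (e.g. +5.00)
    fine_steps_dpt:      fine adjustments generated by the light field display
    dynamic_range_dpt:   assumed maximum adjustment the display can simulate
    """
    totals = []
    for step in fine_steps_dpt:
        if abs(step) > dynamic_range_dpt:
            raise ValueError(f"{step:+.2f} dpt exceeds the assumed dynamic range")
        totals.append(physical_sphere_dpt + step)
    return totals

# Example from the text: +5.00 dpt physical correction, fine-tuned in
# 0.25 dpt steps between +4.50 dpt and +5.50 dpt.
print(total_effects(5.00, [-0.50, -0.25, 0.0, +0.25, +0.50]))
# -> [4.5, 4.75, 5.0, 5.25, 5.5]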
Since some light field displays have a limited dynamic range, this combination with a physically provided optical correction can broaden the field of application of the light field display. Furthermore, the combination makes it possible to use a more favorable light field display with a smaller dynamic range.
In a further development, the physically provided correction generates only a spherical optical effect, while the light field display additionally generates different cylindrical optical effects.
Combined with an already existing pair of glasses, whose optical effect is known and/or measured, a change in the refraction values of the test subject can be determined, in particular by means of a light field display with a low dynamic range.
One aspect relates to a system for determining optometric parameters of a test subject, comprising:
- a refraction unit for setting an optical correction for at least one eye of the test subject;
- a display unit for displaying a test image for determining the subjective refraction of the test subject;
- an eye tracking unit which is configured to detect a viewing direction and/or an orientation of the at least one eye of the test subject while the test subject is looking at the displayed test image;
- a signal unit which generates an eye signal containing information on the detected viewing direction and/or orientation of the at least one eye of the test subject; and
- an evaluation unit which determines the optometric parameters of the test subject by evaluating the eye signal in accordance with the displayed test image.
The device can be used to perform the method according to the preceding aspect. Therefore, all explanations regarding the method also refer to the device and vice versa. The device can be formed in several parts. In particular, the display unit may be separate and/or distinct from a refraction device. The refraction device may comprise, for example, the refraction unit and the eye tracking unit, as well as, if applicable, a signal unit and an evaluation unit. In this context, an integrated or separate controller may be provided, which includes the signal unit and/or the evaluation unit and/or contributes to the control of the different units of the system or takes over this control completely, in order to be able to perform an automatic detection of the subjective refraction.
The device and/or the method can be used to perform subjective refraction. The optometric parameters detected and determined in this way can also be used to calculate a visual aid for the test subject which is individually adjusted and optimized for the test subject. Thus, during the determination of the optometric parameters such as a subjective visual acuity and/or refraction, measurement data can be generated which are relevant for the production of a visual aid such as a pair of glasses for the test subject. For this purpose, the device can determine and/or calculate optical parameters (such as pupil distance) of the test subject, which are required and used in the manufacture of the glasses.
One aspect relates to a system for determining optometric parameters of a test subject, in particular according to the aspect described above, comprising a light field display which is configured to:
- display a test image for determining the subjective refraction of the test subject, which comprises a plurality of test image areas; and
- simulate, for each of the test image areas, an assigned optical correction for at least one eye of the test subject in such a way that the impression is created that the at least one eye is looking at the respective test image area through the respective assigned optical correction;
- wherein at least two of the simulated associated optical corrections differ from each other with respect to their optical effect; and
- wherein the test image areas with the assigned optical corrections are displayed simultaneously.
The system may further comprise an evaluation unit which determines the optometric parameters of the test subject depending on the displayed test image and the displayed optical corrections. The evaluation can alternatively or additionally be performed by an operator such as an optometrist, possibly by means of auxiliary means not necessarily belonging to the system, such as a computer and/or a computer program product. This evaluation may be at least guided by a computer program product, which may be configured as part of the system. Alternatively, the computer program product can perform the evaluation fully automatically, in particular combined with an eye tracking unit.
The system may be either an embodiment of the preceding aspect or a stand-alone system that is independent of the eye tracking unit, the signal unit, and/or the eye signal. As an embodiment of the preceding aspect, the light field display may be used as the refraction unit and/or the display unit. By means of the eye tracking unit, the viewing direction and/or orientation of the at least one eye may be detected, so as to determine which of the displayed test areas with the simulated associated optical correction the test subject is looking at.
The device can be used to perform the method according to the preceding aspect with the light field display. Therefore, all explanations regarding the method likewise refer to the device and vice versa.
In an embodiment, the system is configured to be fully automatic in such a way that no specialist personnel and/or no personnel at all are required for use. The test subject can even operate the system alone as a layperson. The system can be configured as a kind of kiosk, similar to a machine for taking passport photos. The test subject can enter the kiosk and/or position themselves in front of the system and their optometric parameters are determined fully automatically, e.g. triggered by a start signal such as a button press and/or proof of payment.
In an embodiment, the system is integrated into a mobile terminal such as a smartphone and/or tablet and/or laptop. The mobile terminal device may have a camera that can be used, for example, to detect the distance of the test subject and/or the eyes of the test subject. The camera may be configured as a normal digital camera and/or as a depth camera. The mobile terminal may comprise an infrared camera for pupil detection and/or a distance sensor for distance detection and/or calibration. A display of the mobile terminal may be configured to display the test image(s). The mobile terminal may further comprise a light field display to (at least partially) generate the test image and optical corrections. The light field display may be integrated into the regular display, for example.
In an embodiment, the camera is further used for eye tracking in accordance with the aspect of the invention described at the outset.
One aspect relates to a computer program product comprising computer-readable program portions which, when loaded into a processor and executed, cause an apparatus according to any one of the preceding aspects to perform a method according to any one of the preceding aspects, wherein the computer program product at least partially controls and/or regulates any one of the following units:
- the refraction unit; and/or
- the display unit; and/or
- the eye tracking unit; and/or
- the signal unit; and/or
- the evaluation unit.
In this case, the computer program product can be executed on a controller and/or control unit, for example, and used to control all of the aforementioned units, so that the subjective refraction can be performed fully automatically. In particular, a light field display can be used both as a refraction unit and as a display unit.
The invention is described in more detail below with reference to exemplary embodiments and figures. These embodiments and figures are provided for a better understanding of some aspects of the invention. The figures show:
Figure 1A a schematic representation of a cross cylinder in a first position;
Figure 1B a schematic representation of a cross cylinder in a second position;
Figure 2 a schematic representation of a first embodiment of a system for determining optometric parameters of a test subject in a first display state;
Figure 3 a schematic representation of a second embodiment of a system for determining optometric parameters of a test subject in a second display state;
Figure 4 a schematic representation of a light field display of a system for determining optometric parameters of a test subject; and
Figure 5 a schematic representation of a light field display of a system for determining optometric parameters of a test subject.
Embodiments
According to the invention, a refraction unit is combined with an eye tracking unit with which a viewing movement of the test subject to be refracted can be detected at least as part of a feedback to a visual task.
This feedback can be displayed to the refractionist, e.g. on a control unit of the system, and thus be used for a subjective refraction by the refractionist. This can supplement or replace the conventional acoustic communication between refractionist and test subject as feedback, which can be tedious and error-prone. In refraction devices where a chin rest is used for the test subject, acoustic feedback usually involves an uncomfortable movement of the lower jaw for the test subject. Furthermore, such movement can lead to an unintended change in the position and/or orientation of the eyes with respect to the refraction device, and thus to measurement errors. This can be improved using the viewing movement as at least part of the feedback.
The feedback can be used for partially or fully automated refraction.
This feedback can be used for a remote refraction where the refractionist is not directly on site. The feedback of the test subject can be transmitted to the refractionist either completely and immediately as part of a partially automated refraction, or only as needed or delayed.
The invention can be used for the following types of subjective refractions: • In a manual subjective refraction, the subjective refraction is performed manually. In this case, the refractionist can specify the settings of the refraction unit and/or the contents of the display unit for each step themselves.
• In a fully automated subjective refraction, the subjective refraction is performed in a fully automated manner. In this case, the system can automatically determine the settings of the refraction unit and the content of the display unit based on the feedback it receives from the test subject. If needed and available, content displayed on a control unit can also be assigned automatically. Furthermore, the end of the refraction process and the result can be determined automatically. Intervention by a refractionist is generally not required here.
• In a partially automated subjective refraction, a subjective refraction process is performed in which the system performs some parts of the refraction process (e.g., a determination of the sphere, the cylinder, and/or the axis position) in an automated manner, and other parts of the refraction process (e.g., a binocular adjustment of the addition) are performed manually. Here, the system can at least partially make suggestions for settings of the refraction unit and/or the content of the display unit, which the refractionist can at least partially override and replace with their own settings or content. Preferably, the operating unit is used for this purpose.
The system comprises at least one refraction unit and one eye tracking unit. It can preferably have a display unit for displaying the visual task and/or visual object to be viewed by the test subject. Optionally, the system comprises a control unit for displaying at least the current and/or final state of the refraction set in the refraction unit and/or the detected feedback from the test subject, and/or for manually changing the set refraction.
As a visual task, the visual object can be displayed to the test subject, which can take the form of a symbol such as an optotype, a digit, a letter, a pictogram, a picture, or the like. The visual object can be displayed in one or more colors.
The system may additionally comprise a transmission unit that transmits the result of the refraction determination to a receiver system, such as a receiver system for ordering ophthalmic lenses, and/or a receiver system for consulting ophthalmic lenses.
Refraction unit
As a refraction unit, a device is used which is at least capable of manipulating curvatures of the wave fronts and, if necessary, additionally a mean propagation direction and/or wavelength and/or intensity and/or polarization state of the light emanating from a visual object (e.g., displayed on the display unit), and of directing this light into at least one eye of the test subject. For this purpose, optical corrections can be presented to the test subject.
The refraction unit can also be capable of directly generating light suitable for displaying visual objects on the retina of the test subject (e.g., using a light field display and/or a holographic display). Here, the display unit may be integrated in the refraction unit.
The refraction unit may be configured as an automated phoropter or as refraction glasses with at least one refractive and/or diffractive element (such as a lens and/or grating). The refractive and/or diffractive element(s) may be adaptively configured, such as at least one deformable mirror and/or at least one deformable lens.
In an embodiment, the refraction unit is configured as a phoropter unit and is used to present optical corrections to the test subject as trial lenses with different effects, i.e. e.g. with different sphere, cylinder and/or axis. Furthermore, color filters (especially red and green) as well as polarizers for image separation (vertical and horizontal and/or left and right circular), gray filters (different intensities), apertures and prisms (according to intensity and base top/bottom/right/left or according to intensity and direction) can be placed in front of the test subject.
The different effects can be classically achieved by lenses with different spherical and cylindrical effects, and the axial position can be adjusted in the cylindrical lenses.
In more modern designs, the different effects of optical corrections can be realized by variable-fill liquid lenses, lenses with variable interfaces, Alvarez lenses, or other elements with variable optical effects, such as adaptive mirrors. The latter have the advantage of having no chromatic aberration. Alternatively, the different effects can also be achieved with diffractive elements.
The aforementioned types of optical corrections and effective elements can be combined with each other, e.g. as a combination of a fluid-filled lens with a rotatable Alvarez cylindrical lens.
If a light field display and/or a holographic display is used as the display unit, with which the necessary light fields can be generated directly, an additional unit with optical corrections (such as a phoropter) can be dispensed with.
A pair of measuring glasses as a refraction unit can either be operated manually, helped by automated feedback, or be provided with the variable elements described above, which can be controlled by a computer and/or control unit.
Eye tracking unit
The eye tracking unit is used to detect the viewing direction and/or orientation of the respective eye. The viewing direction depends on the orientation and vice versa. This can be effected exemplarily in at least one of the ways described below:
• The eye tracking unit can capture an image of the pupil and/or the iris. An illumination unit can be used for this purpose. The eye position and/or viewing direction is derived from the position and/or perspective distortion of the pupil and/or iris.
• The eye tracking unit can capture an image of one or more Purkinje reflexes. For this purpose, one or more preferably point-shaped illumination units are used, which generate the Purkinje reflex(es). The viewing direction and/or eye position and/or orientation of the eye is derived from the positions of the one or more Purkinje reflexes in the captured image.
• The eye tracking unit can capture an image of the pupil and/or the iris and one or more Purkinje reflexes. For this purpose, one or more preferably point-shaped illumination unit(s) are used, which generate the Purkinje reflex(es). From the positions of the one or more Purkinje reflex(es) and the position and/or the perspective distortion of the pupil and/or the iris, the eye position or viewing direction is derived. The relative position of the one or more Purkinje reflexes relative to the position of the pupil and/or the iris can also be evaluated.
• The eye tracking unit can alternatively or additionally acquire an image of the edge of the cornea and/or the sclera, which can be used to determine a viewing direction of the eye. Prominent points on the sclera, such as visible veins, can also be suitable for this purpose.
• The eye tracking unit can acquire a fundus image, i.e. an image of the back of the eye. This can be used to determine a viewing direction of the eye, as the retina has an easily recognizable pattern of blood vessels.
• The eye tracking unit can capture an image of a pattern reflected from the cornea, e.g. a pattern of multiple point light sources or extended light sources with known position. This can be used to determine a viewing direction of the eye. For this purpose, a topographic model of the cornea can be created, which can be used to calculate a reflectance image dependent on the viewing direction.
• A coil (as a "coil" system) can be attached to the eye, e.g. by means of a contact lens without optical effect with an integrated coil on the cornea. Via measurements of inductance, the position and/or orientation of the coil can be determined and the eye position or viewing direction can be derived therefrom.
• For eye tracking, electrical recordings of the muscle tension of the eyes can also be used.
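As an illustration of the pupil/Purkinje-reflex variants listed above, the following simplified Python sketch derives horizontal and vertical gaze components from the offset between the pupil center and a single corneal reflex in the camera image. The linear gain model and all names are assumptions made for the sketch; a real eye tracking unit would use a calibrated, subject-specific model.

# Simplified pupil-to-corneal-reflex gaze estimate (illustrative only).
# Assumption: the offset between pupil center and Purkinje reflex in the
# camera image is approximately proportional to the gaze angle.

def gaze_from_pupil_and_reflex(pupil_px, reflex_px, gain_deg_per_px=(0.05, 0.05)):
    """Return (horizontal, vertical) gaze components in degrees.

    pupil_px, reflex_px: (x, y) image coordinates in pixels
    gain_deg_per_px:     assumed linear gain, to be refined by calibration
    """
    dx = pupil_px[0] - reflex_px[0]
    dy = pupil_px[1] - reflex_px[1]
    return gain_deg_per_px[0] * dx, gain_deg_per_px[1] * dy

h, v = gaze_from_pupil_and_reflex((312.0, 240.5), (300.0, 236.0))
print(f"gaze estimate: {h:+.2f} deg horizontal, {v:+.2f} deg vertical")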
The viewing direction can be determined from images. From these images, a course of the view over time can be determined, i.e. the viewing movement. This can be effected either live or by means of at least one video recording.
Using the methods for determining the viewing direction briefly described above, an eye signal is determined from the acquired data (which may include measurement data on the position/orientation/distortion of pupil/iris/Purkinje reflex(es)/retina/cornea/sclera as well as the voltages derived from the coils or electrodes). The eye signal may have at least one horizontal and/or one vertical component and may be proportional, to a linear approximation, to a component of the viewing direction, e.g., to the horizontal and/or vertical component of the unit vector pointing in the viewing direction. The eye signal may additionally or instead contain components with the help of which, in addition to or instead of the viewing direction, the orientation of the eye can be determined.
Regardless of the specific design of the system, it can be advantageous to calibrate the system for the respective test subject, especially if the eye signal behaves only approximately linearly with respect to the viewing direction or the orientation of the eye. Such a calibration results in a calibrated eye signal that matches the viewing direction and/or the orientation of the eye better than the original signal. The data required for calibration can be measured by setting the test subject visual tasks in which they have to look at specific points of a display panel of the display unit (e.g., the center of the display panel, the center of an edge of the display panel, a corner of the display panel). If the display panel of the display unit is used for calibration, the points which are to be fixated by the test subject during calibration can be shown with suitable markings on the display panel. Since it may be difficult for an ametropic test subject to recognize such a marking when the correction is not yet known, the marking can be configured in such a way that it is recognizable despite ametropia. For example, it can have a particularly striking color (e.g., bright blue) or it can additionally or alternatively flash. The eye signal corresponding to the viewing direction specified by the points is detected during calibration, and a correlation between the detected data and the eye position and/or viewing direction is derived from it. If there are indications of the ametropia, e.g. based upon a known previous refraction, known values of an existing visual aid and/or a previously performed autorefractive measurement, a corresponding correction can already be applied during the calibration. This can make calibration easier for the test subject and improve the quality of the calibration.
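The calibration described above can, for example, be sketched as a per-axis linear least-squares fit of the raw eye signal to the known target directions of the calibration markings. The fitting method and the example data below are assumptions; the invention does not prescribe a particular calibration model.

# Sketch of a per-axis linear calibration: raw eye-signal values recorded while
# the test subject fixates known calibration points are fitted to the known
# target angles; the fit is then applied to later raw signals.

from statistics import mean

def fit_linear(raw, target):
    """Least-squares fit target ~ a * raw + b for one signal component."""
    r_mean, t_mean = mean(raw), mean(target)
    a = sum((r - r_mean) * (t - t_mean) for r, t in zip(raw, target)) \
        / sum((r - r_mean) ** 2 for r in raw)
    b = t_mean - a * r_mean
    return a, b

# Known horizontal target angles (deg) of calibration markers and the raw
# horizontal eye-signal values measured while they were fixated (made-up data).
targets_deg = [-15.0, 0.0, 15.0]
raw_signal = [-0.42, 0.03, 0.49]

a, b = fit_linear(raw_signal, targets_deg)
print(f"calibrated gaze for raw value 0.25: {a * 0.25 + b:+.1f} deg")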
The eye tracking unit can either be configured as a stand-alone device or be integrated into the refraction unit and/or the display unit.
Display unit
While the refraction unit and the eye tracking unit are necessary components of the system, the display unit is an optional component.
Without the display unit, the refractionist can use an independent display unit to perform subjective refractions based on the feedback of the test subject indicated by the system in a partially automated method.
For performing an at least partially automated subjective refraction, it is advantageous if the system can control the display unit and thus trigger the display of different visual objects. Alternatively, for performing an at least partially automated subjective refraction, a display unit with fixed content can be used, which contains all necessary symbols (e.g. optotypes such as Landolt rings, Snellen E, letters and/or numbers) or images (e.g. classic vision panel with different sized symbols).
The display unit may comprise a display panel on which the display unit can present test images to the test subject. By means of the test images displayed in this way, the test subject can be presented with visual tasks, the solution of which depends on the visual acuity of the test subject with the optical correction currently used.
The display unit can be implemented in the following configurations, for example:
• The display unit can be configured as a display, e.g. as a display in TFT, LED, LCD, OLED or similar technologies, in which individual elements or pixels can be controlled individually. Here, separate screens can be used for each eye, e.g. to separate the visual impressions of both eyes or for 3-dimensional display.
• The display unit can be configured as a projection display, which has a projection unit and a screen. The projection unit can be used to project an image onto this screen. The projection unit can show individual predefined images, such as "slides", or individual elements, e.g. parts of an optotype, or pixels can be controlled individually.
• The display unit can be configured as a light field display and/or as a holographic display, which can display several visual objects such as symbols or optotypes simultaneously in the image field with different refractive effects. For example, three symbols with different spherical refractive effects, e.g. -0.5 dpt, 0 dpt and +0.5 dpt, can be displayed simultaneously. Together with a refractive effect sph set in the refraction unit, the test subject would see three symbols with the effects sph -0.5 dpt, sph, and sph +0.5 dpt.
In this case, the refraction unit can be combined with a light field display and/or holographic display in order to perform a part of the effect change, such as an astigmatic effect, with this display. In this case, it may be possible to dispense with components in the refraction unit.
In another possible embodiment, such a display unit may be integrated in the refraction unit and/or simultaneously fulfill the task of the refraction unit (cf. above).
The display unit can display single- or multi-color visual objects, regardless of the exact design, and can also support different polarizations, such as linear-vertical, linear-horizontal, linear-diagonal, right-circular and/or left-circular, and/or different brightnesses.
The display unit can be either stationary, e.g. permanently installed or mounted (e.g. on a wall), or mobile, e.g. as a tablet. For mobile embodiments in particular, it is useful to measure the distance between the test subject and/or the at least one eye of the test subject and the display unit. The measured distance can then be used for eye tracking and/or for determining the object distance during refraction, i.e., correction and/or visual acuity determination. This distance can be measured, for example, using a distance sensor, depth camera, ultrasound, stereo camera, image processing, and/or pattern projection.
Categories of visual tasks
According to the invention, the test subject can be presented with at least one visual task that can be solved by a viewing movement. Examples of visual tasks and categories of visual tasks are described below.
Most visual tasks are based on a classification of a visual impression perceived by the test subject, which is followed by a reaction of the test subject. The reaction may be intentional (hereafter referred to as active feedback, e.g., an active and conscious auditory feedback) or unintentional (hereafter referred to as passive feedback, e.g., unintentional viewing movement towards a seen object).
The basis of the response is the distinguishability of the visual impression from a reference that is either directly represented in the visual task, i.e., as an external reference, or exists in the mind of the test subject, i.e., as an internal reference.
Furthermore, a distinction can be made between visual tasks in which the psychometric and/or physiological quantity to be determined with the visual task, e.g. a contrast sensitivity threshold, is determined directly or indirectly, i.e. by means of direct or indirect determination.
In direct determination, the test subject is asked directly about property E1 of the visual impression that is to be measured. Direct determination includes visual tasks in which visual symbols or images are presented in different forms of a property E1. For example, different sizes, different contrasts, different brightnesses, different colors, different refractive effects, e.g. in light field displays, etc. can be used as property E1. The test subject is instructed to judge whether they can still recognize, better recognize and/or distinguish the property E1 of the visual objects to be tested, such as visual symbols or images (positive threshold). Alternatively or additionally, the test subject can be instructed to judge the opposite statement, i.e., whether they are just no longer able to recognize, better recognize, or distinguish the property E1 of the visual symbols or images to be tested (negative threshold). If both the positive and the negative threshold are determined, the expressions of the property E1 of the visual symbols or pictures reported back by the test subject for both thresholds can be averaged.
If several visual objects such as visual symbols or images with different intensity are presented simultaneously or in a temporal sequence in an order ordered on the basis of intensity (e.g., increasing contrast or increasing size of the visual symbols), it is possible that the test subject communicates the thresholds by a spatial or temporal ranking (e.g., a position of the visual symbol on the display denoted by a number or a time of presentation of the visual symbol denoted by a number).
In contrast, in indirect determination the test subject is asked about a first property E1 of the visual impression (e.g., the direction of the gap of one or more Landolt rings), which is not influenced by the psychometric and/or physiological quantity to be determined (e.g., the detectability of the opening of the Landolt rings in accordance with their size). For this purpose, one or more visual objects, such as visual symbols, with the same expression of a second property E2, such as size, contrast, brightness or color, associated with the psychometric and/or physiological quantity to be tested, but which differ in the first property E1, are presented. The second property E2 associated with the psychometric and/or physiological quantity to be tested is then varied and the visual task is repeated. By evaluating the frequency of a correct solution of the visual task, which changes parametrically with the varied property, the psychometric and/or physiological threshold is finally determined.
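As a minimal sketch of such an indirect determination, the threshold can be estimated from the recorded proportion of correct solutions per expression of the property E2, e.g. by interpolating where this proportion crosses a chosen criterion. The criterion value, the interpolation method and the example data below are assumptions; a full psychometric-function fit could be used instead.

# Indirect threshold estimation (illustrative): find the stimulus level at
# which the proportion of correct responses crosses a criterion level.

def threshold_by_interpolation(levels, proportion_correct, criterion=0.625):
    """levels must be sorted from hardest to easiest (proportion increasing)."""
    pairs = list(zip(levels, proportion_correct))
    for (l0, p0), (l1, p1) in zip(pairs, pairs[1:]):
        if p0 < criterion <= p1:
            # Linear interpolation between the two bracketing levels.
            return l0 + (criterion - p0) / (p1 - p0) * (l1 - l0)
    return None  # criterion never crossed within the tested range

# Example: Landolt-ring size in logMAR (smaller = harder) vs. fraction of
# correctly reported gap directions (made-up data).
sizes_logmar = [0.0, 0.1, 0.2, 0.3, 0.4]
p_correct = [0.30, 0.45, 0.70, 0.90, 1.00]
estimate = threshold_by_interpolation(sizes_logmar, p_correct)
print(f"estimated acuity threshold: {estimate:.2f} logMAR")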
In summary, visual tasks can be categorized by the required feedback (active or passive), by the type of reference (internal or external), and/or by the type of determination (direct or indirect).
Solving visual tasks using viewing movements
If a visual task is to be solved using a viewing movement, the feedback from the test subject must be resolvable by the eye tracking unit. Depending on the type of visual task, this results in different implementation forms.
The term "viewing direction" used in the following is synonymous with the viewing direction determined with the eye tracking unit (i.e. a possibly calibrated eye signal of the eye tracking unit).
In the case of passive feedback of the test subject, the solution of a visual task is achieved with the aid of viewing movements in that an unconscious, or at least not directly voluntarily controlled, viewing movement arises through the visual objects displayed, which can be resolved by the eye tracking unit, either by using an eye tracking unit with sufficiently high resolution, or alternatively or additionally by arranging the visual objects in such a way that a viewing direction directed towards them can be resolved on average by the eye tracking unit. Thus, for example, a visual object can be displayed at different locations of the display panel, and the viewing direction can be registered at the same time. On the basis of one or more thresholds, which may depend on the resolution of the eye tracking unit, it can be determined whether the viewing direction corresponds to the position of the displayed visual object. For example, exceeding and/or falling below a threshold of the vertical viewing direction such as 1°, 2°, 5°, 10°, 15° or 20° would be interpreted as fixation of a visual object represented in the upper or lower part of the display. Fixation accuracy can also be measured, which would be less accurate in the absence of a perceived visual object than in the presence of the perceived visual object.
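The threshold check described in this paragraph might, for example, look as follows; the 5° value is only one of the example thresholds mentioned above, and the function names are chosen freely for the sketch:

# Passive-feedback check (illustrative): does the measured viewing direction
# fall within an angular threshold of the direction towards the displayed
# visual object? The threshold should reflect the eye tracker's resolution.

import math

def fixates_object(gaze_deg, object_deg, threshold_deg=5.0):
    """gaze_deg, object_deg: (horizontal, vertical) angles in degrees."""
    dh = gaze_deg[0] - object_deg[0]
    dv = gaze_deg[1] - object_deg[1]
    return math.hypot(dh, dv) <= threshold_deg

# Object shown in the upper half of the display at (0 deg, +10 deg):
print(fixates_object(gaze_deg=(1.2, 8.7), object_deg=(0.0, 10.0)))   # True
print(fixates_object(gaze_deg=(0.4, -9.5), object_deg=(0.0, 10.0)))  # False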
In the case of active feedback, the test subject should communicate their solution to the visual task in a predetermined way with their view. As with passive feedback, the viewing movement is recognized either by using an eye tracking unit with sufficiently high resolution or, alternatively or additionally, by selecting the predetermined type of viewing movement or view deflection in such a way that it can be resolved by the eye tracking unit.
If, for example, the visual objects used have a clear direction, the direction of the visual object perceived by the test subject can be fed back by a corresponding viewing direction, e.g. starting from a displayed symbol or from the center of the display, which is detected by the eye tracking unit. This is possible even for visual tasks where the test subject uses an internal reference. The viewing directions can be displayed in a supportive manner for the test subject by colored and/or flashing markers on the display panel, e.g. near its edge.
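For directional active feedback of this kind, the detected viewing deflection can be mapped to the nearest admissible direction, e.g. one of eight Landolt-ring gap directions. The following sketch is illustrative only; the number of directions and the minimum deflection are assumptions:

# Map a gaze deflection (relative to the display center or the shown symbol)
# to the nearest of eight admissible gap directions (illustrative).

import math

DIRECTIONS = ["right", "upper right", "up", "upper left",
              "left", "lower left", "down", "lower right"]

def classify_direction(dx_deg, dy_deg, min_deflection_deg=2.0):
    """dx/dy: gaze deflection in degrees (x to the right, y upwards)."""
    if math.hypot(dx_deg, dy_deg) < min_deflection_deg:
        return None  # deflection too small to count as an answer
    angle = math.degrees(math.atan2(dy_deg, dx_deg)) % 360.0
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]

print(classify_direction(6.0, 5.5))   # -> "upper right"
print(classify_direction(0.5, -0.4))  # -> None (no clear answer yet)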
If a visual task with an external reference is used, further embodiments arise which differ in the possibilities of feedback:
1. Two visual objects can be represented, which serve as a reference for each other.
a) Here, the test subject can be instructed to fixate the visual object with the stronger or weaker expression of a given property, e.g., the visual object that appears sharper to them.
b) Another way is to communicate whether the symbols or images have the same expression of the property or not by a predetermined viewing direction.
2. Several visual objects can be displayed, which serve as a reference for each other.
a) If exactly one of the visual objects differs from the others, the test subject can be instructed to fixate the visual object that differs from the others.
b) More than two different visual objects can be presented. The test subject can be instructed acoustically and/or with the aid of a text displayed legibly for them (e.g. in a sufficiently large size) to fixate a visual object that occurs only once. The test subject can also be instructed by additionally enlarging the visual object to be fixated and displaying it legibly, e.g. at one edge of the display panel. In this visual task, the test subject can compare the presented visual objects against the previously described and therefore internal reference, but the other visual objects also serve as an external reference.
c) One of the visual objects can be displayed twice, in particular once separately from the other visual objects, but here not necessarily enlarged or particularly clearly recognizable. The test subject is instructed to find and fixate the separately displayed visual object in the set of remaining visual objects.
d) A single visual object to be classified by the test subject can be presented, as well as several visual objects that are clearly recognizable to the test subject, i.e., presented e.g. magnified and/or with increased contrast. The test subject is asked to fixate the clearly recognizable visual object which corresponds to or looks most similar to the visual object to be classified.
The preceding embodiments may be combined. For example, clearly recognizable visual objects with a clearly defined direction can be arranged around a visual object to be classified in such a way that the direction from the visual object to be classified to each clearly recognizable visual object coincides with the direction indicated by that clearly recognizable visual object.
The described embodiments can all be used in the sense of indirect determination. If a property of the visual impression is to be determined directly, the determination of the solution of the visual task can be achieved by eye tracking - similar to passive feedback - in that the viewing movement of the test subject can be resolved by the eye tracking unit, by using an eye tracking unit with sufficiently high resolution and/or by arranging the visual objects in such a way that their average position can be resolved by the eye tracking unit. In this case, the visual objects are arranged in such a way that the property of the visual impression to be determined changes monotonically over the position of the display, e.g. such that the displayed visual objects become increasingly blurred towards the right. The change of the property of the visual impression can thereby be carried out with the aid of the display unit, e.g. by means of displaying different contrasts or sizes of visual objects. Alternatively or additionally, an effect set in the refraction unit can be changed depending on the viewing direction detected with the eye tracking unit and/or the direction of the currently fixated visual object.
Further specific embodiments of visual tasks are described below.
Visual task: Preferential Looking (passive feedback, internal reference, indirect determination).
In a Preferential Looking visual task, the test subject is presented with two fields. One of them is empty, the other one shows a visual object. If the test subject can see the visual object, they will look preferentially in its direction. If they cannot see it, they will show no preference for a field. The test subject does not necessarily have to be asked to look at the visual object, which is why this method also works for test subjects who cannot otherwise express themselves, or who cannot be expected to understand or follow the instructions, such as children.
Visual task: orientation of a symbol (active feedback, internal reference, indirect determination).
In a visual task, a visual object such as a Landolt ring is shown as a test image and the test subject is asked to look, starting from the center of the display or the center of the visual object, in the direction corresponding to the orientation of that visual object, e.g., the opening of the Landolt ring, a tip of an arrow, or the like.
This visual task is a direct determination of the viewing direction with active feedback, where the test subject uses an internal reference, e.g., a presentation of Landolt rings with differently oriented openings.
Visual task: orientation of multiple symbols (active feedback, internal reference, indirect determination).
Several visual objects such as Landolt rings are shown and the test subject is asked to look in the direction corresponding to the orientation of a given number, most, or fewest of the visual objects, e.g., the openings of the Landolt rings.
Visual task: finding a specific symbol (active feedback, internal/external reference, indirect determination).
Different visual objects are shown and the test subject is asked to look at a special visual object. For example, Landolt rings with different directions of opening, letters of the Latin or other alphabet, (familiar) symbols, and/or abstracted sketches (e.g., house, tree, car) can be used.
The task, i.e. which visual object is to be found, can be communicated acoustically. Alternatively or additionally, the task can also be set by means of a magnified visual object that is used as an external reference.
Visual task: Moving symbol (active feedback, internal reference, indirect determination).
A visual object is shown moving across the display panel. The test subject is asked to follow the visual object with their view. The viewing movement can be used to determine whether the test subject actually recognizes the visual object and can follow it, if necessary.
Visual task: red-green images (active feedback, external reference, direct determination).
A test image is shown, such as symbols against a monochrome background. A first area of the test image is red and a second area is green. The areas can divide the test image roughly in half. The test subject is asked to look at the area that appears sharper.
Visual task: symbols or images at different positions and with different refractive values (active feedback, external reference, direct determination).
At least two visual objects are displayed at different positions on the display panel which can be resolved by the eye tracking unit, e.g. sequentially or simultaneously.
Each of these positions is associated with a different optical correction set by the refraction unit, i.e. the test subject is optically corrected depending on the position. The test subject is to look at the visual object that appears sharper and/or clearer and/or more legible to them.
Visual task: symbols or images in different manifestations of a property (active feedback, internal reference, direct determination).
Visual objects with different characteristics, e.g. different sizes, different contrasts, different brightnesses, different colors, are shown in the display panel. Each characteristic can occur once or several times. Examples are groups, e.g. rows of visual objects, each with the same characteristic of this property, wherein the characteristic of the property decreases from group to group. The test subject is then asked to look at a visual object or the group of visual objects that they just recognize.
In particular, a light field display and/or a holographic unit (as a refraction and/or display unit) can be used to simultaneously present the visual objects with different optical corrections. The different optical corrections can differ in particular with regard to sphere and/or cylinder strength and/or cylinder axis.
Visual task: moving symbol with changing expression of a property (active feedback, internal reference, direct determination).
A visual object is shown moving across the display panel. The test subject is asked to follow the visual object with their view. The characteristics of a property are changed, e.g. size, contrast, brightness or color of the visual object. The viewing movement can be used to determine at which intensity the test subject recognizes the visual object. Preferably, the best visibility is used at the start, e.g. the largest available size, and the size is changed in the direction of poorer visibility, e.g. reduced, until the test subject can no longer perceive the visual object and therefore can no longer follow it.
Interaction of the test subject with the system
The feedback of the test subject on visual tasks and/or for controlling the sequence of several successive visual tasks (referred to in the following as the measurement process) can be detected, for example, during a refraction as described below. Furthermore, in at least partially automated systems, the test subject can be guided through the measurement process as follows.
To identify how and/or when the test subject has solved a visual task, this visual task can be solved and/or completed in one of the following ways, for example (see also the sketch after this list):
• Detection of a fixation time: If the view of the test subject remains on a visual object and/or a position for at least a predefined period of time, e.g. at least 1, 2, 5 or 10 seconds, this visual object and/or this position is considered to be selected. This time period may depend on the difficulty of the visual task, i.e., it may be increased, for example, if in the course of the refraction the visual task becomes more difficult, e.g., because the visual objects represented become smaller. If several visual objects are presented, another criterion can be used additionally or alternatively to calculate the time after which one of the visual objects is considered to be selected by the test subject. Thus, the fixation time may be specified as a multiple of the mean fixation time of the remaining symbols, e.g., as 1.5, 2, 3, 5, or 10 times the mean fixation time of the remaining visual objects, or as a relative proportion of the fixation time of all symbols, e.g., at least 80%, 60%, 40%, 20%, or 10% of the sum of the fixation times of all symbols.
• Blink: The test subject first looks at a visual object and/or position for a given time, e.g., at least 1, 2, 5, or 10 seconds, and blinks intentionally, i.e., closes one or both eyes for a given time, e.g., 0.5, 1, or 2 seconds.
• Confirmation field: The test subject first looks at a visual object and/or a position for a predefined time, e.g. at least 1, 2, 5 or 10 seconds, and directly afterwards at an actuation field that is displayed on the display panel or next to it.
• Other confirmation: There is a button, knob, pedal, and/or the like attached to the refraction unit, or to a separate control unit accessible to or held by the test subject, that the test subject triggers with a hand or foot while looking at the selected visual object and/or position. This type of confirmation can be effected in the same way by wiping across a touch-sensitive surface, or by other tactile input methods, which are attached to the refraction unit or are located on a separate control unit held by the test subject. Yet another possibility is gesture recognition using cameras and/or depth cameras, e.g., recognition of the gestures of one of the hands of the test subject, or other sensors to determine the position or orientation of hands and/or feet, e.g., using distance sensors, tilt sensors, etc.
• Acoustic feedback: The test subject utters an acoustic signal while looking at the selected visual object and/or the selected position. Sounds that do not require the test subject to open their mouth, such as "Mhmm" or similar, are particularly advantageous as a signal.
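The fixation-time criteria from the first bullet point can be sketched as follows (the sketch announced above); the concrete threshold values are example values from the list and the object identifiers are arbitrary:

# Dwell-time based selection (illustrative). fixation_s maps each displayed
# visual object to its accumulated fixation time in seconds.

def selected_object(fixation_s, min_abs_s=2.0, rel_to_mean=2.0, share_of_total=0.4):
    total = sum(fixation_s.values())
    for obj, t in fixation_s.items():
        others = [v for o, v in fixation_s.items() if o != obj]
        mean_others = sum(others) / len(others) if others else 0.0
        if (t >= min_abs_s
                and (mean_others == 0.0 or t >= rel_to_mean * mean_others)
                and (total == 0.0 or t / total >= share_of_total)):
            return obj
    return None  # no object fulfils the selection criteria yet

print(selected_object({"A": 3.1, "B": 0.6, "C": 0.8}))  # -> "A"
print(selected_object({"A": 1.1, "B": 0.9, "C": 1.0}))  # -> None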
For example, in the case of at least partially automated measurement processes, it may be useful for the test subject to be able to provide the system with control messages in addition to the feedback for solving the visual tasks, i.e., so-called test subject communication. In this way, the course of the measurement process can be at least partially controlled or at least influenced by the test subject. This can be effected by control messages such as "Back", i.e. repetition of the last visual task, "Pause" or "Cancel".
These control messages can be given in at least one of the following ways:
• Viewing movements: Preferably at the right or lower edge of the display panel, control fields are placed that correspond to the respective control commands. The test subject can trigger these as described above, e.g. by fixating them for a given period of time (cf. the sketch following this list). Since the test subject is not yet fully corrected during the measurement and/or refraction process, these control fields can be made large enough and clearly recognizable. They can be differentiated by color, e.g. green: "Confirmation", yellow: "Back", blue: "Pause", red: "Cancel".
• Button, knob, pedal or gesture: There is a separate button, knob or pedal for each possible control message that the test subject can trigger with one hand or foot, e.g. "Confirm", "Back", "Pause" and "Cancel". Alternatively, a separate gesture can be used for each possible control message.
• Acoustic control message: The test subject utters an acoustic signal that is received and interpreted by the system, e.g., "Mhmm", "Ok" and/or "Confirm" for "Confirmation"; "Back" for "Back"; "Pause" for "Pause"; "Abort" for "Abort". If such an acoustic control message is used, the system may include and/or be associated with a speech recognition unit for recognizing and classifying the acoustic control message.
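Analogously, gaze-triggered control fields could be evaluated as in the following sketch; the field layout, dwell time and sampling interval are assumptions made only for the example:

# Illustrative gaze-triggered control fields. Each field is a rectangle on the
# display panel in degrees of visual angle; a command fires after a dwell time.

CONTROL_FIELDS = {                 # (x_min, x_max, y_min, y_max), example layout
    "confirm": (18, 24, -4, 0),    # green field at the right edge
    "back":    (18, 24, -9, -5),   # yellow field
    "pause":   (18, 24, -14, -10), # blue field
    "cancel":  (18, 24, -19, -15), # red field
}

def control_command(gaze_samples_deg, sample_interval_s=0.05, dwell_s=2.0):
    """Return the command whose field has been fixated for at least dwell_s."""
    dwell = {name: 0.0 for name in CONTROL_FIELDS}
    for gx, gy in gaze_samples_deg:
        for name, (x0, x1, y0, y1) in CONTROL_FIELDS.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                dwell[name] += sample_interval_s
                if dwell[name] >= dwell_s:
                    return name
    return None

samples = [(20.0, -2.0)] * 45        # 45 samples * 50 ms = 2.25 s on "confirm"
print(control_command(samples))      # -> "confirm"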
For example, in the case of at least partially automated measurement processes, which can be carried out without a refractionist, the system can communicate with the test subject, i.e. a so-called system communication. The system can at least explain the procedure and the individual visual tasks to the test subject, ask them to solve the visual tasks, and/or possibly give status feedback on the progress of the measurement procedure.
The system communication can happen acoustically or visually by a voice output or display of corresponding texts. As a means to provide system communication, a chatbot and/or agent (as "artificial intelligence") can be used, with which the test subject can communicate acoustically and/or in text form. A human such as a refractionist may be connected to the system (e.g., via a long-distance communication line) and thus communicate with the test subject.
In addition or alternatively, displays on the display unit used for refraction and/or an additional display unit can be used. It can be taken into account that the test subject is not fully corrected. Therefore, when displaying the system communication, a (e.g. pre-existing) correction can be used and/or a sufficient correction can be made by the refraction unit. The latter can be detected directly by the system, for example by an integrated aberrometric measurement unit.
Embodiments of refraction protocols
The following are examples of protocols for automatic subjective refraction.
A determination of the best subjective refraction can basically be done as described in the prior art, see e.g. D. Methling: Bestimmung von Sehhilfen, 2nd ed. Ferdinand Enke Verlag, Stuttgart (1996). Here, however, instead of the usual acoustic feedback of the test subject, the feedback determined according to the invention is used, which contains the detection of the viewing direction and/or orientation.
The subjective refraction can be performed on a device which can be remotely controlled or controlled by an algorithm which can depend on the test subject feedback and/or test subject communication described above.
In an embodiment, starting refraction values can be determined first. For this purpose, for example, an objective refraction can first be determined for a test subject, i.e. refraction values are determined on the basis of an objective measurement. For example, a method for determining such objective parameters is known from DE 2011 120 973 A1. The objective parameters may comprise aberrometric measurement data and/or pupillometric measurement data. The objectively determined measurement data, i.e. the aberrometric measurement data and/or the pupillometric measurement data, can be used to calculate an objectively optimized refraction.
As a starting point for determining the subjective refraction, the objectively determined refraction values can be used, for example, which are fogged by a predetermined amount, e.g., by an additional sphere of 0.50 to 1.00 dpt. The refraction values fogged in this way can be used as starting refraction values.
Alternatively, the refraction values of any existing optical correction can be used as starting refraction values, e.g. the refraction values of an older pair of glasses. Alternatively, any refraction values and thus any optical effects held in stock can be used as starting refraction values.
Subsequently, the following four main procedural steps a) to d), optionally supplemented by procedural step e), are carried out when determining the subjective refraction of the test subject. In the case of a monocular determination of the subjective refraction, only method step a) or b) can alternatively be performed, optionally supplemented by method step e):
a) Monocular determination of the most positive spherical-cylindrical refraction at which subjectively the best visual acuity is produced for a first eye of the test subject, e.g. for the right eye;
b) Monocular determination of the most positive spherical-cylindrical refraction at which subjectively the best visual acuity is produced for a second eye of the test subject, e.g. for the left eye;
c) Setting a binocular accommodative balance;
d) Binocular determination of the most positive spherical-cylindrical refraction at which subjectively the best visual acuity is produced for both eyes of the test subject; and
e) Subjective evaluation of the refraction values determined in this way in a test frame.
These method steps follow a logical pattern, during which visual acuity measurements can optionally also be performed. A change in the strength of the spherical correction can preferably be made if the addition of a sphere of 0.25 dpt subjectively improves the visual impression and/or optionally improves the visual acuity by one line, i.e. causes a change of -0.1 logMAR. This condition and/or a subjective improvement in visual impression can be used throughout as a criterion for change when adding a negative sphere.
In the case of a change towards a positive spherical effect, i.e. with the addition of a +0.25 dpt sphere, the visual acuity must improve or remain the same in order to change the presented optical effect by this +0.25 dpt in the sphere. This condition can be used throughout as a criterion for change with the addition of a positive sphere.
These different criteria result from the fact that one can be on a plateau when adding a positive sphere. For example, if one starts with +3.00 dpt and the refraction actually required is e.g. +3.25 dpt, the visual acuity will not change, regardless of whether +0.25 dpt or -0.25 dpt is added. Therefore, for the addition of a positive sphere, the other change criterion applies, which also holds when the visual acuity thereby remains constant.
A line of visual acuity can be considered to have been reached when the test subject can recognize at least 60% of the displayed optotypes of this line.
In the monocular determination of the refraction for the first eye, i.e. in method step a), an intensity of the required spherical correction of the subjective refraction can first be determined. The second eye can be covered, e.g. the left eye. The measured starting refraction for the right eye is presented to the first eye. If an optional visual acuity determination for the starting refraction results in a visual acuity value of 0.1 logMAR or better, a first positive lens for the first eye can be added to the starting refraction. Subsequently, the visual acuity can be measured again and/or checked subjectively. If the applicable change criterion is reached, another positive lens is added until the applicable change criterion is no longer reached. If the applicable change criterion is no longer achieved, the refractionist can add a negative lens instead. If the change criterion is reached, another negative lens can be added until the change criterion is no longer reached.
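The sphere search of method step a) can be summarized, in simplified form, as a loop over the two change criteria described above. acuity_logmar(...) stands for whatever subjective measurement or feedback the system actually collects and is purely hypothetical, as is the toy model of the test subject:

# Simplified sketch of the monocular sphere refinement (method step a).
# acuity_logmar(sphere) is a hypothetical callback that returns the measured
# visual acuity (in logMAR, smaller = better) for the given sphere value.

def refine_sphere(start_sphere, acuity_logmar, step=0.25):
    sphere = start_sphere
    acuity = acuity_logmar(sphere)

    # Add positive spheres as long as acuity improves or stays the same.
    while True:
        candidate = acuity_logmar(sphere + step)
        if candidate <= acuity:            # "improves or remains the same"
            sphere, acuity = sphere + step, candidate
        else:
            break

    # Then add negative spheres only while acuity improves by about one line.
    while True:
        candidate = acuity_logmar(sphere - step)
        if candidate <= acuity - 0.1:      # one line = -0.1 logMAR
            sphere, acuity = sphere - step, candidate
        else:
            break

    return sphere, acuity

# Toy model of a test subject whose best sphere is +3.25 dpt (illustrative).
best = 3.25
toy_acuity = lambda s: round(-0.1 + 0.4 * abs(s - best), 2)
print(refine_sphere(3.00, toy_acuity))     # -> (3.25, -0.1)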
This completes the determination of the strength of the required sphere correction and is followed by a determination of an axis and/or axis position of any required cylinder correction.
For this purpose, a refractionist can use a cross cylinder 1 to determine an axis of a possibly existing astigmatism of the first eye. Figures 1A and 1B show such a cross cylinder 1 in a schematic representation, which is also known under the name "Jackson cross cylinder". The cross cylinder 1 has a handle 5 through which a handle axis 2 extends. The cross cylinder 1 is an optical aid and has two cylinders crossed at 90°, namely a plus cylinder and a minus cylinder. The handle axis 2 is arranged at 45° to a cylinder axis 3 of the plus cylinder and at 45° to a cylinder axis 4 of the minus cylinder.
Optotypes indicating a worst visual acuity of at least 0.2 logMAR can be displayed to the test subject. The handle axis 2 of the cross cylinder 1 can be placed on a suspected and/or the objectively determined axis of an astigmatism of the first eye of the test subject. Subsequently, the cross cylinder 1 may be flipped between the two positions shown in Figures 1A and 1B, while the handle axis 2 remains in its position. The test subject can be asked which of these two rotational positions of the cross cylinder produces a better visual impression. If the test subject does not notice any difference, the axis for the refraction of the first eye has been found and the method continues with a determination of the required cylinder strength, cf. below. If the test subject prefers one of the two rotational positions shown in Figs. 1A and 1B, the handle axis 2 can be rotated clockwise exactly when the cylinder axis 4 of the minus cylinder is in the preferred rotational position clockwise from the handle axis 2, cf. the situation in Fig. 1B. The handle axis 2 can be rotated counterclockwise exactly when the cylinder axis 4 of the minus cylinder is in the preferred rotational position counterclockwise from the handle axis 2, cf. the situation in Fig. 1B.
This check with the two rotational positions and the twisting of the handle axis 2 can be repeated until the test subject no longer recognizes a difference between the two rotational positions, or until the handle axis 2 merely moves back and forth between two positions in the process. In the latter case, the axis can be selected from these most recently used axis positions which most closely matches an older axis, e.g., an axis for this first eye which was used in an older pair of eyeglasses of the test subject. Alternatively, the axis that is closer to a non-oblique astigmatism can be selected from these most recently used axis positions.
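The axis search with the cross cylinder 1 can be sketched as follows. preferred_flip(...) is a hypothetical callback encoding the test subject's preference between the two flip positions; the step size, the tie-breaking towards an older axis and the toy example are assumptions made only for the sketch:

# Illustrative sketch of the cross-cylinder axis search. preferred_flip(axis)
# returns +1 if the test subject prefers the flip position with the
# minus-cylinder axis clockwise of the handle axis, -1 for the
# counterclockwise position, and None if no difference is seen.

def find_cylinder_axis(start_axis_deg, preferred_flip, step_deg=5.0,
                       fallback_axis_deg=None, max_iterations=50):
    axis = start_axis_deg % 180.0
    visited = [axis]
    for _ in range(max_iterations):
        pref = preferred_flip(axis)
        if pref is None:                       # no difference -> axis found
            return axis
        axis = (axis + pref * step_deg) % 180.0
        if axis in visited:                    # merely oscillating back and forth
            candidates = visited[-2:] + [axis]
            if fallback_axis_deg is not None:  # pick axis closest to an older axis
                return min(candidates,         # (180 deg wrap ignored for brevity)
                           key=lambda a: abs(a - fallback_axis_deg))
            return candidates[-1]
        visited.append(axis)
    return axis

# Toy subject whose true axis is 10 degrees (illustrative).
true_axis = 10.0
flip = lambda a: None if abs(a - true_axis) < 2.5 else (+1 if a < true_axis else -1)
print(find_cylinder_axis(0.0, flip))   # -> 10.0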
This completes the determination of the axis of the required cylinder correction and is followed by a determination of a strength of the required cylinder correction.
For this purpose, the cross cylinder 1 can be arranged in such a way that its cylinder axis 3 of the plus cylinder and its cylinder axis 4 of the minus cylinder lie exactly on the corresponding cylinder axes of the objectively determined refraction which is already presented to the first eye of the test subject. The cross cylinder 1 can be flipped in the same way as in the determination of the axis position described above, i.e. with the handle axis 2 remaining in position. If the rotational position preferred by the test subject is the one in which the two minus-cylinder axes overlap, a negative cylinder power can be added, e.g. -0.25 dpt. If the rotational position preferred by the test subject is the one in which the cylinder axis 3 of the plus cylinder of the cross cylinder 1 overlaps the cylinder axis of the minus cylinder of the refraction held in front, negative cylinder power can be taken away, likewise e.g. in steps of quarter diopters. The refractionist can repeat this until the test subject no longer prefers either of the rotational positions, or until the strength of the cylinder correction changes back and forth in the process. In the latter case, the lowest strength of cylinder correction used should be selected.
When determining the strength of the required cylinder correction, care can be taken to ensure that the previously determined spherical equivalent (mean sphere) remains the same. This means that, for example, for every 0.50 dpt change in the strength of the cylinder correction, the strength of the sphere correction is changed by 0.25 dpt in the opposite direction.
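This compensation rule keeps the spherical equivalent M = S + C/2 constant: a cylinder change of ΔC is accompanied by a sphere change of -ΔC/2. A minimal helper, assuming minus-cylinder notation:

```python
def compensate_sphere(sphere, cylinder, delta_cyl):
    """Change the cylinder by delta_cyl (dpt) while keeping the spherical
    equivalent M = sphere + cylinder/2 constant, e.g. delta_cyl = -0.50 dpt
    is accompanied by a +0.25 dpt sphere change."""
    return sphere - delta_cyl / 2.0, cylinder + delta_cyl


# Example: adding -0.50 dpt of cylinder to sph -1.00 / cyl -0.75
# yields sph -0.75 / cyl -1.25; the mean sphere stays at -1.375 dpt.
print(compensate_sphere(-1.00, -0.75, -0.50))
```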
After determining both the strength and the axis of the required cylinder correction, the sphere correction can be checked again. This can be done in the same way as described above in connection with the determination of the strength of the required sphere correction. Optionally, if the strength changes considerably, the axis determination can be repeated to achieve a more reliable result.
This completes the monocular determination of the subjective refraction for the first eye, which is composed of the determined strength of the required sphere correction and the determined strength and axis of the required cylinder correction.
Subsequently, the refraction for the second eye is determined, i.e. method step b). This is performed in exactly the same way as method step a), only for the second eye and with the first eye covered.
As a result, the monocular subjective refraction for the second eye is determined, which is composed of a certain strength of a required sphere correction and a certain strength and axis of a required cylinder correction.
In method step c), the binocular accommodative balance is adjusted. For this purpose, both eyes are uncovered. A fog can be added to each of the two monocular subjective refractions for the first and second eye, e.g. a fog of +0.50 dpt in each case. In addition, a separator can be used, e.g. a polarizing filter and/or a red/green filter. One goal here may be to have both eyes enter the same accommodation state. The separator can allow the test subject to see different display parts of a target display with each of their eyes and to compare their sharpness. For this purpose, in addition to the display parts that can only be perceived by one eye at a time, a common display part of the target display should be visible to both eyes. Only then can a meaningful fusion take place and the visual acuity be checked. If one of the two eyes sees more sharply than the other, the refraction of that eye can be further fogged with a positive sphere. The refractionist can repeat this process until both display parts of the target display appear approximately equally sharp to the test subject, or until the smallest perceived difference in sharpness is reached in the process.
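The balancing step can be read as a small loop over the test subject's answers: whichever eye reports the sharper display part receives a little more fog until both parts appear equally sharp. A minimal sketch under stated assumptions; the callback sharper_eye(), the 0.25 dpt fog increment and the iteration bound are illustrative, and the "smallest perceived difference" stopping case mentioned above is not modelled.

```python
FOG_STEP = 0.25  # dpt; illustrative increment


def balance_binocular(sphere_r, sphere_l, sharper_eye, max_steps=8):
    """Sketch of the accommodative balance under fog.

    Both monocular refractions are assumed to already carry the initial
    +0.50 dpt fog. sharper_eye() is assumed to ask the test subject which
    of the two separated display parts appears sharper and to return
    "right", "left" or None (equally sharp).
    """
    for _ in range(max_steps):
        answer = sharper_eye()
        if answer is None:          # both parts appear equally sharp
            break
        if answer == "right":       # fog the sharper eye a little further
            sphere_r += FOG_STEP
        else:
            sphere_l += FOG_STEP
    return sphere_r, sphere_l
```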
For method step d), i.e. for the binocular determination of the most positive spherical-cylindrical refraction which subjectively produces the best visual acuity for both eyes of the test subject, the separator is first removed. Subsequently, an optional visual acuity determination can be performed. For the binocular determination of the strength of the required sphere correction, the same procedure can be followed as in method steps a) and b), only here for both eyes simultaneously. The two monocular cylindrical refractions already determined can remain unchanged here. The result can be used as the binocular subjective refraction. Alternatively, the binocular subjective refraction can be determined as the result of supplementary step e).
This is followed by method step e), in which the refraction values obtained in method step d) are subjectively evaluated in a test frame. For this purpose, the refraction values determined in method step d) are placed in a test frame, which is adapted to the face of the test subject. In particular, the pupils of the test subject can be centered in the middle of test lenses with these refraction values. A check of the test lenses can be performed in an open outdoor environment, with the test subject fixating a target in the distance.
The refractionist can binocularly add a sphere power of +0.25 dpt and ask the test subject whether the visual impression appears better or remains the same with or without this addition. If the visual impression appears better or remains the same with this addition, the refraction values determined in method step d) are used as the finally determined subjective refraction values, which are supplemented by this binocular addition of +0.25 dpt in the sphere.
If the addition does not lead to an improved or at least constant visual impression, the sphere power can be changed binocularly by -0.25 dpt and the test subject asked whether the visual impression appears better with or without this reduction by a quarter of a diopter. If this change toward minus results in a better visual impression, the sphere power can be reduced binocularly by a further -0.25 dpt. The test subject can then be asked whether the visual impression appears better with a change of -0.25 dpt or of -0.50 dpt. If the -0.25 dpt change results in a better visual impression, the refraction values determined in method step d) are used as the final subjective refraction values, to which the -0.25 dpt binocular change in the sphere is added. If the change of -0.50 dpt leads to a better visual impression, the refraction values determined in method step d) are used as the finally determined subjective refraction values, which are supplemented by the binocular change of -0.50 dpt in the sphere.
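This final plus/minus check is a short decision tree over three possible outcomes (+0.25 dpt, -0.25 dpt, -0.50 dpt). A minimal sketch, assuming two illustrative callbacks that stand in for the refractionist's questions; keeping the step-d) values unchanged when neither change helps is an assumption, since that case is not spelled out above.

```python
def finalize_sphere(base_sphere, better_or_equal, prefers_more_minus):
    """Sketch of the final binocular plus/minus check in the test frame.

    better_or_equal(delta) is assumed to ask whether the visual impression
    with a binocular sphere change of `delta` (dpt) is better than or equal
    to the impression without it; prefers_more_minus() is assumed to ask
    whether -0.50 dpt looks better than -0.25 dpt.
    """
    if better_or_equal(+0.25):
        return base_sphere + 0.25
    if not better_or_equal(-0.25):
        return base_sphere            # assumption: keep the step-d) value
    if prefers_more_minus():
        return base_sphere - 0.50
    return base_sphere - 0.25
```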
This concludes the determination of binocular subjective refraction for the test subject.
During this determination of the binocular subjective refraction, the visual acuity and/or a visual sensitivity can optionally be measured. Visual acuity can be measured with a cylindrical correction deviating from the determined subjective refraction. Jackson cross cylinders can be used for this purpose, e.g. with plus/minus 0.50 dpt, or a plus cylinder with e.g. +1.00 dpt compared to the determined subjective refraction.
The magnitude of the deviation of the correction from the optimal correction can also depend on the amount of the addition, or be roughly based on the maximum values of the unwanted astigmatism expected with a progressive lens. The visual acuity of a test subject with a lower addition would thus be measured with less spherical or cylindrical fog than the visual acuity of a test subject with a higher addition.
Visual acuity can be determined by means of optotypes, wherein visual acuity is considered achieved when at least 60% of the optotypes of a corresponding line have been recognized.
The visual acuity can be measured during and/or after method step(s) a), b), c) and/or d) and stored and/or written down for a subsequent calculation of the sensitivity. For example, at least two binocular visual acuity values can be determined during method step c) and/or d). One or more monocular visual acuity values can be determined during method steps a) and/or b).
The visual acuity measurement can be used as a control of the determined subjective refraction. This means that the test subjects may or may not achieve a given visual acuity. The visual acuity measurement can also be used to obtain information about the behavior of the visual system of the test subject.
There are a number of possibilities for visual acuity measurement. For example, the visual acuity can be measured using optotypes, i.e. letters, Landolt rings and/or similar. It can be tested whether the test subject can recognize the optotypes and/or their alignment completely or partially.
Since this may involve a threshold evaluation, it should be ensured that the optotypes are not recognized merely by chance. This can be achieved by repeating visual tasks. A correct recognition of 60% of the optotypes of a set can be evaluated as a successful recognition of this set.
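The 60% rule can be written as a one-line check over a repeated presentation of one optotype set; the names n_correct and n_shown are illustrative.

```python
def line_recognized(n_correct, n_shown, threshold=0.6):
    """A set (line) of optotypes counts as recognized when at least 60%
    of its optotypes were identified correctly."""
    return n_shown > 0 and n_correct / n_shown >= threshold


# Example: 3 of 5 Landolt rings identified -> the line counts as recognized.
print(line_recognized(3, 5))
```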
Furthermore, a psychophysical evaluation of the visual acuity can be performed. Such a psychophysical evaluation can be based on a display of a sequence of optotypes of different acuity. This sequence can be changed depending on responses of the test subject. The goal of the assessment may be to have the sequence of optotypes converge toward the visual acuity of the test subject, which is used as a threshold for the assessment. The variations of the sequences can be changed depending on the responses of the test subject and depending on the particular method used.
To support an algorithm used in this process in converging to the correct visual acuity, the test subject can be asked for the smallest line of optotypes that they can recognize. Then, depending on this answer, it can be checked whether the test subject can actually recognize the selected line and/or a smaller one.
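One way to implement such a converging sequence is a simple up/down staircase over logMAR lines, seeded with the line reported by the test subject. The sketch below is one possible realization; the step size, the number of reversals and the averaging rule are illustrative assumptions and not prescribed here.

```python
def staircase_acuity(reported_start, can_read_line, step=0.1,
                     max_trials=30, reversals_needed=6):
    """Minimal 1-up/1-down staircase over logMAR values.

    reported_start: logMAR of the smallest line the subject says they can read.
    can_read_line(logmar) is assumed to present a line of that size and return
    True if it is recognized (e.g. per the 60% criterion). The threshold
    estimate is the mean of the collected reversal points.
    """
    logmar = reported_start
    last_result = None
    reversal_points = []
    for _ in range(max_trials):
        result = can_read_line(logmar)
        if last_result is not None and result != last_result:
            reversal_points.append(logmar)
            if len(reversal_points) >= reversals_needed:
                break
        last_result = result
        # smaller logMAR = smaller optotypes; go down when read, up when missed
        logmar += -step if result else +step
    return (sum(reversal_points) / len(reversal_points)
            if reversal_points else logmar)
```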
Visual acuity, just like subjective refraction and/or optometric parameters, can generally be determined monocularly and/or binocularly. These are different visual parameters, all of which can be used to calculate lenses.
The methods described above for determining optometric parameters can be used to check for contrast sensitivity.
It can be provided that the refractionist can intervene in the execution of the algorithm at any time or at selected method steps. In particular, such an intervention and/or input can occur during binocular adjustment of the addition and/or determination of a cross cylinder.
In addition or as an alternative to determining the (e.g. best) subjective refraction, visual acuity values for one or more optical corrections or without optical correction can also be detected.
Instead of the usual acoustic feedback from the test subject, the feedback described above can be used. Here, not only the visual acuity with the best correction can be determined, but also corresponding visual acuity values with non-optimal correction.
This makes it possible to detect the effect of a changed optical power on visual acuity, such as occurs in progressive lenses (e.g. caused by an unwanted astigmatism or a not perfectly fitting spherical power), which can be used in the calculation of ophthalmic lenses.
When performing a measurement procedure with the size of a visual object as a varied property, visual acuity can be inferred directly from the smallest visual object detected and the distance of the display panel or display field.
The detection of visual acuity values made possible in this way allows the sensitivity of the visual acuity to be determined on the basis of this visual acuity determination. For this purpose, the visual acuity can be measured with at least two presented refractions, e.g. zero refractions.
Preferably, the visual acuity is determined at the best optical correction, i.e. at the best subjective or optimized refraction. In addition, the visual acuity can be determined at a correction deviating from the best optical correction, preferably at a plus correction, since a minus correction can possibly be compensated by accommodation by the test subject, particularly preferably in the range 0.50 dpt to 1.25 dpt.
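From an acuity value at the best correction and one at a deliberately fogged correction, such a sensitivity can be expressed, for example, as the acuity loss per diopter of fog. The linear estimate below is an illustrative assumption, not a formula given in this description.

```python
def acuity_sensitivity(acuity_best, acuity_fogged, fog_dpt):
    """Estimate the acuity loss per diopter of (plus) fog.

    acuity_best and acuity_fogged are logMAR values measured with the best
    correction and with an additional fog of fog_dpt (e.g. +0.75 dpt);
    the returned slope is in logMAR per diopter.
    """
    if fog_dpt == 0:
        raise ValueError("fog_dpt must be non-zero")
    return (acuity_fogged - acuity_best) / fog_dpt
```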
Individual or all of the above method steps a) to e) can be carried out using a light field display which does not actually display the optical corrections but merely simulates them. During at least some of the method steps a) to e), several different optical corrections can be displayed simultaneously on different test image areas in order to make it easier for the test subject to select the best subjective correction in each case.
Coupling with additional systems

The manual and the at least partially automatic subjective refraction may benefit from knowledge of additional data such as objective refraction values, data from an older prescription, and/or values from a possibly already existing correction device (e.g. glasses or contact lens). Therefore, the system may be coupled to and/or combined with at least one of the following.
Measuring unit for objective data of the eye: For performing an at least partially automated subjective refraction, objective measurement data of the eye, such as objective refraction data or aberrometric data, can be accessed, which can be determined by a measuring unit for objective data of the eye. For this purpose, either an external autorefractometer and/or aberrometer can be connected to the system, or an autorefractometer or aberrometry unit can be integrated into the refraction unit.
The measurement results of this measuring unit can then be used, for example, as initial values for manual or at least partially automated subjective refraction.
Further objective measurement data can be detected such as aberrometric measurement data for one or two illumination states, pupillometric measurement data for both illumination states, possibly aberrometric and/or pupillometric measurement data for a second distance.
Measurement unit for measuring an already existing corrective device: Values of a possibly already existing corrective device, such as glasses or a contact lens, can be taken into account. Since the test subject often does not know the correction values of the existing correction means, a measurement unit can measure the possibly already existing correction means and determine its correction values; such a measurement unit can be, for example, a vertex refraction meter with a single measurement point or a wavefront measurement unit for whole-area analysis of a lens, possibly with lens type and measurement point identification.
The results of the measurement can then be used, for example, as initial values for manual or at least partially automated subjective refraction.
The use of the measuring unit for measuring a possibly already existing correction means, such as a vertex refraction meter, can be carried out independently of the use of a measuring unit described above for measuring objective data of the eye, or combined with it.
Specialized software: Furthermore, prescription values of a possibly already existing prescription can be taken into account. The prescription values can be entered manually or provided digitally by another software system. This means that these prescription values can be imported from a specialized software.
For example, the prescription values can be used as baseline values for manual or at least partially automated subjective refraction.
If specialized software is used, measurement data determined by the system, such as the determined refraction and/or other visual parameters, e.g. recognition or discrimination thresholds, monocular and/or binocular visual acuity with best correction, visual acuity with spherical and/or astigmatic fogging, etc., can be transmitted to the specialized software by means of a transmission unit. The measurement data determined by the system can be used, for example, for ordering, manufacturing and/or consulting on ophthalmic lenses and/or other optical correction devices.
Figure 2 shows a schematic representation of a first embodiment of a system for determining optometric parameters of a test subject 10 in a first display state.
The system comprises a refraction unit 14 for setting an optical correction for at least one eye 12 of the test subject 10. The refraction unit 14 may, for example, be configured as a phoropter and/or comprise a phoropter. The system further comprises an eye tracking unit 16, which may be arranged, for example, on the refraction unit 14, and which is formed and/or configured to detect a viewing direction R and/or an orientation of the at least one eye 12 of the test subject 10, in particular while the test subject 10 is viewing a displayed test image. This test image may be displayed on a display unit 24 and may include a plurality of visual symbols 26 and 28 as visual objects. The visual symbols 26 and 28 may be displayed in test image areas, e.g. one visual symbol 26, 28 in each test image area. In addition, each visual symbol 26, 28 may be displayed with an associated optical correction and/or applied refraction.
In the exemplary embodiment shown, the test subject 10 looks with their eye through the refraction unit 14 along the viewing direction R at the visual symbol 26, which is shown as an "A" in Figure 2. In doing so, the eye tracking unit 16 can detect the viewing direction R. Based on the viewing direction R detected in this way, it can be checked that the test subject 10 is looking at the viewed visual symbol 26 and not at one of the un-viewed visual symbols 28, which are shown as "B", "C" and "D" in Figure 2. Thus, it can be distinguished which of the visual symbols 26, 28 the test subject is looking at.
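Deciding which symbol is being viewed can be reduced to mapping the detected gaze point onto the nearest symbol region on the display. A minimal sketch; the pixel-based data layout, the tolerance radius and the function name fixated_symbol() are assumptions for illustration.

```python
import math


def fixated_symbol(gaze_xy, symbol_centers, max_dist=50.0):
    """Return the label of the symbol region closest to the detected gaze
    point, or None if no region lies within max_dist (display pixels).

    gaze_xy: (x, y) gaze point on the display, as reported by the eye tracker.
    symbol_centers: mapping like {"A": (120, 300), "B": (360, 300), ...}.
    """
    best_label, best_dist = None, max_dist
    for label, (cx, cy) in symbol_centers.items():
        dist = math.hypot(gaze_xy[0] - cx, gaze_xy[1] - cy)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```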
The system may further comprise a control unit 18, which may comprise a controller 20 of the refraction unit 14 and/or of the display unit 24. The control unit 18 may further be adapted and/or configured to read and/or receive the viewing direction R detected by the eye tracking unit 16.
The system may further comprise a trigger 22, which may be formed, for example, on the control unit 18, for example as a push button. The control unit 18 may be formed and/or configured to read and/or receive signals generated by the trigger 22.
The control unit 18 may be configured and/or adapted to evaluate signals generated by the refraction unit 14 and/or the display unit 24 and/or the eye tracking unit 16 and/or the trigger 22.
The control unit 18 may further be configured as a signal unit that generates an eye signal containing information about the detected viewing direction R and/or orientation of the at least one eye 12 of the test subject 10. The control unit 18 can be configured as an evaluation unit which determines the optometric parameters of the test subject by evaluating the eye signal in accordance with the displayed test image.
Figure 3 shows a schematic representation of a second embodiment of a system for determining optometric parameters of a test subject 10 in a second display state. This system is similar or identical to the system shown in Fig. 2, wherein identical reference characters indicate identical or similar features. In this regard, the control unit 18 may be formed and/or configured to additionally evaluate signals generated by a display unit 30, and the controller 20 may comprise a controller of the display unit 30. The display unit 30 of this system may be identical to the display unit 24 of the system shown in Fig. 2.
The system comprises the display unit 30, on which at least one confirmation field 32 and/or at least one cancellation field 34 may be displayed and/or provided. Such confirmation and/or cancellation fields 32 and 34 may also be additionally displayed and/or provided in addition to the visual symbols 26, 28 shown in Figure 2. The confirmation field 32 and/or the cancellation field 34 can be configured as an actuation field by means of which the test subject 10 can give feedback to the system.
For example, the test subject 10 can be asked whether their viewing direction R has been correctly detected. This can be done via an audio signal or, for example, via a corresponding display on the display unit 30 and/or 24. If the viewing direction R has been correctly detected, e.g. if the test subject 10 has just looked at the visual symbol 26 (e.g. "A"), the test subject 10 can fixate the confirmation field 32. If the viewing direction R was not correctly detected, the test subject 10 can fixate one of the cancellation fields 34. A fixation of the confirmation and/or cancellation field 32, 34 can be detected by the eye tracking unit 16 and evaluated by the control unit 18.
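A fixation of the confirmation field 32 or of a cancellation field 34 can be recognized, for example, with a simple dwell-time rule over the gaze samples delivered by the eye tracking unit 16. In the sketch below, the field geometry, the sampling rate and the 0.8 s dwell time are illustrative assumptions.

```python
def detect_field_fixation(gaze_samples, fields, dwell_s=0.8, sample_rate=60):
    """Return the name of the first field the gaze dwells on long enough.

    gaze_samples: iterable of (x, y) points sampled at `sample_rate` Hz.
    fields: mapping like {"confirm": (x0, y0, x1, y1), "cancel": (...)}.
    """
    needed = int(dwell_s * sample_rate)
    counts = {name: 0 for name in fields}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in fields.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                if counts[name] >= needed:
                    return name
            else:
                counts[name] = 0          # the dwell must be uninterrupted
    return None
```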
Figure 4 shows a schematic representation of a light field display 36 of a system for determining optometric parameters of a test subject. The light field display 36 can be used as a refraction unit and/or a display unit. The system comprises an eye tracking unit 16, which may be integrated with and/or connected to the light field display 36, for example. Similarly to the embodiments shown in Figures 2 and 3, the system comprises a control unit 18, which may be configured to control the light field display 36 and/or the eye tracking unit 16. The control unit 18 may be formed and/or configured to acquire and/or evaluate signals generated by the light field display 36 and/or the eye tracking unit 16.
A test image is displayed on the light field display 36, which is used to determine the subjective refraction of the test subject 10 and comprises a plurality of test image areas. Here, for example, at least one visual symbol 38, 40 may be displayed in each test image area; the visual symbols are projected in rows with the same optical correction and/or effect within a row. For each of the test image areas, an associated optical correction can be simulated for the at least one eye 12 of the test subject 10 in such a way that the impression is created that the at least one eye 12 is looking at the respective visual symbol 38, 40 through the respective associated optical correction. Here, at least two of the simulated, assigned optical corrections may differ from each other with respect to their optical effect. The test image areas are displayed simultaneously with the assigned optical corrections.
The visual symbols 38, 40 of each row can be projected with the same optical correction (within the row). The optical corrections with which the visual symbols 38, 40 of the individual rows are projected differ from one another, e.g. in the defocus component used. By means of these different defocus components, the subjectively required mean sphere can be determined.
Alternatively, the optical corrections of the individual rows can also differ from one another in sphere power, cylinder power and/or axis. For example, the optical corrections may differ from one another by a fixed or variable amount, e.g. by ¼ diopter each in sphere and/or cylinder. The corrections of the individual rows can also be projected with cylinder axes rotated relative to one another, e.g. by 45° in each case.
This allows the selection of the subjectively best optical correction to be made with certainty, since several visual symbols 38, 40 are available for each optical correction, namely a whole series of visual symbols with the same optical correction. The test subject 10 can thus select the row which is projected with the subjectively best optical correction for the test subject 10. In this way, a more reliable determination of the visual acuity for each correction used is possible, if desired.
Similar to Figure 4, Figure 5 shows a schematic representation of a light field display 36 of a system for determining optometric parameters of a test subject. Here, the same reference characters indicate the same or similar features.
On the light field display 36, visual symbols 38, 40 are displayed in rows and columns, which can each be projected with different optical corrections. Within each row (or alternatively column), the visual symbols 38, 40 can be projected with the same astigmatism component J0 of the optical correction in each case. However, the astigmatism components J0 of the optical correction of the individual rows (or alternatively columns) differ from one another. On the other hand, in each column (or alternatively row), the visual symbols 38, 40 are projected with the same astigmatism component J45 of the optical correction in each case. However, the astigmatism components J45 of the optical correction of the individual columns (or alternatively rows) differ from one another.
For example, the approach shown schematically in Fig. 4 can first be used to determine the defocus component and thus the mean sphere. Subsequently, for example, the astigmatism components J0 and J45 can be determined using the approach shown schematically in Fig. 5. Together, this results in the sphere, cylinder and axis of the subjectively required optical correction.
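The defocus component M and the astigmatism components J0 and J45 form the usual power vector of an optical correction; once all three have been determined, sphere, cylinder and axis follow from the standard relations M = S + C/2, J0 = -(C/2)·cos(2α), J45 = -(C/2)·sin(2α) (minus-cylinder convention). A small conversion helper as a sketch:

```python
import math


def power_vector_to_scl(m, j0, j45):
    """Convert a power vector (M, J0, J45) back to sphere, cylinder and axis
    in minus-cylinder notation, using M = S + C/2, J0 = -(C/2)cos(2a),
    J45 = -(C/2)sin(2a)."""
    cyl = -2.0 * math.hypot(j0, j45)
    sph = m - cyl / 2.0
    axis = math.degrees(math.atan2(j45, j0)) / 2.0 % 180.0
    return sph, cyl, axis


# Example: M = -1.25 dpt, J0 = -0.25 dpt, J45 = 0 corresponds to
# sph -1.00 dpt, cyl -0.50 dpt, axis 90 degrees.
print(power_vector_to_scl(-1.25, -0.25, 0.0))
```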

Claims (23)

1. Method for determining optometric parameters of a test subject comprising the steps of:
- setting an optical correction for at least one eye (12) of the test subject (10) by means of a refraction unit (14; 36);
- displaying a test image for determining the subjective refraction of the test subject (10);
- detecting a viewing direction (R) and/or an orientation of the at least one eye (12) of the test subject (10) by means of an eye tracking unit (16) while the test subject (10) looks at the displayed test image;
- generating an eye signal containing information on the detected viewing direction (R) and/or orientation of the at least one eye (12) of the test subject (10); and
- determining the optometric parameters of the test subject (10) by evaluating the eye signal in accordance with the displayed test image.
2. Method according to claim 1, wherein at least one refraction value, at least one sensitivity, at least one parameter for a contrast vision, and/or at least one acuteness of vision are determined as the optometric parameters.
3. Method according to claim 1 or 2, wherein the eye signal is created so as to have a horizontal and a vertical component, which depend on the viewing direction (R) and/or the orientation of the at least one eye (12).
4. Method according to any one of the preceding claims, wherein the eye signal is generated by means of a signal unit (18) from measurement data generated by the eye tracking unit (16).
5. Method according to any one of the preceding claims, wherein the test image is displayed by a display unit (24; 30; 36) which is controlled by a controller (20).
6. Method according to any one of the preceding claims, wherein
- a calibration is performed during which calibration correction data are generated with respect to a deviation of the viewing direction (R) and/or orientation determined by means of the eye tracking unit (16) from an actually adopted viewing direction (R) and/or orientation of the at least one eye (12) of the test subject (10), and
- a calibrated eye signal is created and evaluated in which the viewing direction (R) and/or orientation detected by the eye tracking unit (16) is or are corrected by the calibration correction data.
7. Method according to any one of claims 1 to 5, wherein the eye signal is calibrated by detecting the viewing direction (R) and/or the orientation of the at least one eye (12) of the test subject (10) by means of an eye tracking unit (16) while the test subject (10) looks at predefined areas and/or points.
8. Method according to any one of the preceding claims, wherein a time-dependent eye signal is generated which contains information relating to directional changes in the viewing direction (R) and/or orientation of the at least one eye (12) of the test subject (10).
9. Method according to any one of the preceding claims, wherein the test image is used to set a viewing-direction-dependent visual task for the test subject (10), in which, in accordance with the detected and evaluated viewing direction (R) and/or orientation of the at least one eye (12) of the test subject (10), conclusions are drawn about the optometric parameters of the test subject (10) at the corresponding optical correction set.
10. Method according to any one of the preceding claims, wherein:
- the test image is configured in such a way that it causes the test subject (10) to unconsciously adopt a predetermined viewing direction (R) and/or to perform a predetermined viewing movement in accordance with his/her acuteness of vision, and
- in doing so, the viewing direction (R) and/or the viewing movement and/or the orientation of the at least one eye (12) of the test subject (10) is detected by means of the eye tracking unit (16) and is registered as a passive feedback while the test subject (10) looks at this test image.
11. Method according to any one of the preceding claims, wherein the test subject (10) is requested to give a predetermined active feedback dependent on his/her optometric parameters while looking at the test image, which is done at least partly by means of a viewing direction (R) actively adopted by the test subject (10), from the detection of which information on these optometric parameters of the test subject (10) is ascertained.
12. Method according to claim 11, wherein the active feedback comprises, in addition to the actively adopted viewing direction (R), an additional feedback which is given by the test subject (10) in at least one of the following ways:
- by manually operating a trigger (22);
- by performing a gesture;
- by blinking and/or closing the at least one eye (12);
- by an acoustic feedback.
13. Method according to claim 11 or 12, wherein the active feedback includes the test subject (10) recognizing and fixating one of multiple visual objects (26, 28; 38, 40) displayed as a test image.
14. Method according to any one of claims 11 to 13, wherein the active feedback includes the test subject (10) adopting a viewing direction (R) that depends on a property of at least one visual object (26, 28; 38, 40) displayed as a test image, which the test subject (10) recognizes.
15. Method according to any one of claims 11 to 14, wherein the active feedback includes the test subject (10)
- first adopting a viewing direction (R) relevant for the feedback, which depends on at least one visual object (26, 28; 38, 40) displayed as a test image and which is detected by the eye tracking unit (16), and
- subsequently looking at an operation field (32, 34), which is also detected by the eye tracking unit (16).
16. Method according to any one of claims 11 to 15, wherein the active feedback includes the test subject (10) following with his/her view at least one visual object (26, 28; 38, 40) displayed as a test image, and the viewing direction (R) and/or orientation of his/her at least one eye (12) is detected in this process by the eye tracking unit (16) in a time-dependent manner as a viewing movement and/or eye movement.
17. Method according to any one of the preceding claims, wherein a visual task associated with the test image is explained to the test subject (10) in at least one of the following ways:
- by displaying at least one text, image and/or video;
- by reproducing an acoustic explanation; and/or
- by a chat-bot and/or agent which is controlled by an artificial intelligence and with which the test subject communicates acoustically and/or in text form.
18. Method according to any one of the preceding claims, wherein
- multiple at least partially different test images are successively displayed to the test subject (10) and/or at least partially different optical corrections are successively and/or simultaneously presented to the at least one eye (12), and
- the viewing direction (R) and/or orientation of the test subject (10) is detected for each of the displayed test images and/or each of the presented optical corrections and is evaluated in accordance with the respectively displayed test image and/or the respectively presented optical correction.
19. Method according to any one of the preceding claims, wherein the method is carried out semi-automatically or fully automatically and, in this process, a subjective refraction is carried out semi-automatically or fully automatically and/or the visual acuity and/or the sensitivity of visual acuity of the test subject is ascertained semi-automatically or fully automatically.
20. Method for determining optometric parameters of a test subject (10), in particular according to any one of the preceding claims, comprising the steps of:
- providing a light field display (36);
- actuating the light field display (36) in such a way that the light field display (36) displays a test image for determining the subjective refraction of the test subject (10), wherein
- the test image has a plurality of test image areas;
- the light field display (36) simulates, for each of the test image areas, an associated optical correction for at least one eye of the test subject in such a way that the impression is created that the at least one eye (12) looks at the respective test image area through the respectively associated optical correction;
- at least two of the simulated, associated optical corrections differ from one another with respect to their optical effect; and
- the test image areas with the associated optical corrections are displayed simultaneously; and
- determining the optometric parameters of the test subject (10) in accordance with the displayed test image and the simulated optical corrections.
21. System for determining optometric parameters of a test subject (10) comprising:
- a refraction unit (14; 36) for setting an optical correction for at least one eye (12) of the test subject (10);
- a display unit (24; 30; 36) for displaying a test image for determining the subjective refraction of the test subject (10);
- an eye tracking unit (16) which is formed and configured to detect a viewing direction (R) and/or an orientation of the at least one eye (12) of the test subject (10) while the test subject (10) looks at the displayed test image; and
- a signal unit (18) which creates an eye signal containing information on the detected viewing direction (R) and/or orientation of the at least one eye (12) of the test subject (10); and
- an evaluation unit (18) which determines the optometric parameters of the test subject (10) by evaluating the eye signal in accordance with the displayed test image.
22. System for determining optometric parameters of a test subject, in particular according to claim 21, comprising a light field display (36) configured to
- display a test image for determining the subjective refraction of the test subject (10), which image has a plurality of test image areas; and
- simulate, for each of the test image areas, an associated optical correction for at least one eye (12) of the test subject (10) in such a way that the impression is created that the at least one eye (12) looks at the respective test image area through the respectively associated optical correction;
- wherein at least two of the simulated, associated optical corrections differ from one another with respect to their optical effect; and
- wherein the test image areas with the associated optical corrections are displayed simultaneously.
23. Computer program product comprising computer-readable program parts which, when loaded and executed, cause a device according to claim 21 or 22 to perform a method according to any one of claims 1 to 20, wherein the computer program product at least partially controls and/or regulates at least one of the following units:
- the refraction unit (14; 36); and/or
- the display unit (24; 30; 36); and/or
- the eye tracking unit (16); and/or
- the signal unit (18); and/or
- the evaluation unit (18).
IL305329A 2021-03-12 2022-03-11 Method, system and computer program product for determining optometric parameters IL305329A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021202451.3A DE102021202451A1 (en) 2021-03-12 2021-03-12 Method, system and computer program product for determining optometric parameters
PCT/EP2022/056354 WO2022189640A1 (en) 2021-03-12 2022-03-11 Method, system and computer program product for determining optometric parameters

Publications (1)

Publication Number Publication Date
IL305329A true IL305329A (en) 2023-10-01

Family

ID=81074291

Family Applications (1)

Application Number Title Priority Date Filing Date
IL305329A IL305329A (en) 2021-03-12 2022-03-11 Method, system and computer program product for determining optometric parameters

Country Status (4)

Country Link
EP (1) EP4304448A1 (en)
DE (1) DE102021202451A1 (en)
IL (1) IL305329A (en)
WO (1) WO2022189640A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011120973B4 (en) 2011-12-13 2015-07-30 Rodenstock Gmbh A method, apparatus and computer program product for acquiring objective refraction data for fitting and optimizing spectacles
US9706910B1 (en) 2014-05-29 2017-07-18 Vivid Vision, Inc. Interactive system for vision assessment and correction
US9517008B1 (en) * 2014-11-06 2016-12-13 Bertec Corporation System and method for testing the vision of a subject
EP3592204B1 (en) 2017-03-05 2024-05-08 Virtuoptica Ltd. Eye examination method and apparatus therefor
KR101922343B1 (en) * 2017-07-13 2018-11-26 광운대학교 산학협력단 Method and system for testing diynamic visual acuity
US11253149B2 (en) 2018-02-26 2022-02-22 Veyezer, Llc Holographic real space refractive sequence
WO2019167899A1 (en) * 2018-03-01 2019-09-06 株式会社Jvcケンウッド Visual function detection device, visual function detection method, and program

Also Published As

Publication number Publication date
DE102021202451A1 (en) 2022-09-15
EP4304448A1 (en) 2024-01-17
WO2022189640A1 (en) 2022-09-15

Similar Documents

Publication Publication Date Title
CN110573061B (en) Ophthalmologic examination method and apparatus
JP4503354B2 (en) Optometry equipment
JP6098061B2 (en) Fundus photographing device
KR20220116159A (en) Systems and methods for determining refractive characteristics of both first and second eyes of a subject
RU2634682C1 (en) Portable device for visual functions examination
JPWO2003041572A1 (en) Optometry apparatus and optometry chart
JP6537843B2 (en) Optometry device, awareness measurement method using chart for optometry
US20220007929A1 (en) Holographic Real Space Refractive System
US11445904B2 (en) Joint determination of accommodation and vergence
JP7024295B2 (en) Awareness-based optometry device
JP6822798B2 (en) Ophthalmic examination equipment
KR102295587B1 (en) Method and system for virtual reality-based visual field inspection
JP4494075B2 (en) Optometry equipment
IL305329A (en) Method, system and computer program product for determining optometric parameters
US20220369921A1 (en) Ophthalmologic apparatus and measurement method using the same
Goyal et al. Estimation of spherical refractive errors using virtual reality headset
KR102570392B1 (en) Method and apparatus for measuring refractive error
JP7298134B2 (en) Optometry system
JP6968644B2 (en) Ophthalmic equipment
JP2022038942A (en) Optometer and control program of optometer
JP2022073154A (en) Optometer
AU2021347290A1 (en) Holographic real space refractive system
JP2023146675A (en) Ophthalmologic apparatus