WO2022189640A1 - Method, system and computer program product for determining optometric parameters - Google Patents

Method, system and computer program product for determining optometric parameters

Info

Publication number
WO2022189640A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
eye
test image
visual
refraction
Application number
PCT/EP2022/056354
Other languages
German (de)
English (en)
Inventor
Stephan Trumm
Adam MUSCHIELOK
Yohann Bénard
Wolfgang Becken
Anne Seidemann
Original Assignee
Rodenstock Gmbh
Application filed by Rodenstock Gmbh
Priority to EP22714166.0A (EP4304448A1)
Priority to IL305329A
Publication of WO2022189640A1


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/103 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining refraction, e.g. refractometers, skiascopes
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining or recording eye movement

Definitions

  • the invention relates to a method, a system and a computer program product for determining optometric parameters of a subject.
  • In a subjective refraction, the refractive power of that optical correction is determined with which the eye or eyes of a subject produce a sharp image of an object located in the distance.
  • For this, measuring devices such as trial frames, trial lenses and/or a phoropter are conventionally used. These measuring devices can be operated either manually or electrically by the refractionist.
  • the subject's subjective visual impression is decisive for determining the required optical correction.
  • the test person communicates with the refractionist by giving him acoustic feedback regarding a visual task that has been set for him.
  • The refractionist receives the subject's feedback and controls the refraction process depending on this feedback.
  • Conventionally, a refractionist is required to carry out the refraction, to evaluate the feedback given by the subject on the visual tasks and to control the course of the refraction. On the one hand, this means that refractionists have to spend time determining the visual acuity of the subject. On the other hand, the operation and implementation of the refraction are error-prone in that they are controlled by a human, and errors can also occur when giving and/or receiving the acoustic feedback.
  • the object of the invention is to enable an improved method for carrying out a determination of optometric parameters, in particular to support a refractionist in carrying out a refraction and/or to enable a refraction to be carried out at least partially automatically.
  • One aspect relates to a method for determining optometric parameters of a subject with the steps:
  • an eye signal which contains information about the detected viewing direction and/or orientation of the at least one eye of the subject
  • The individual method steps do not necessarily have to be carried out in the order listed above. This means that the individual method steps can be carried out either in the order listed, in a different order, and/or at least partially simultaneously.
  • The refraction unit can be configured and/or used to manipulate curvatures of the wave fronts and, if necessary, additionally a mean propagation direction and/or wavelength and/or intensity and/or a polarization state of the light emanating from the test image, and to guide the light manipulated in this way into the at least one eye of the subject.
  • the optical correction is defined, with which the light entering the eye is manipulated.
  • the optical correction can either be incorporated into the propagation path of the light, or it can be simulated purely virtually in the context of a projection, e.g. using the phoropter and/or using a light field display.
  • the refraction unit can be designed, for example, as a phoropter, in particular as an automated phoropter, which has refractive and/or diffractive elements such as lenses and/or gratings.
  • refractive glasses can be used for this.
  • The refractive and/or diffractive elements used as the optical correction can be adaptive, e.g. designed as a deformable mirror and/or a deformable lens.
  • A zero correction can also be used as an optical correction, e.g. a plano lens without actual optical effect or the omission of a physical correction. This can be used, for example, to determine the visual acuity of the uncorrected, "naked" eye.
  • The optical corrections can correspond to a presented optical effect, in particular a spherical and/or cylindrical optical effect.
  • the test image can be displayed on a display unit.
  • The display unit can be separate and/or independent of the refraction unit and can be operated by the refractionist, for example. Alternatively, the display unit can also be electrically connected to and/or communicate with the refraction unit, whereby both are partially or fully automatically controlled.
  • the display unit can display the test image on a display panel, for example on a screen.
  • a test image is displayed which shows different visual objects such as images and/or symbols which are to be recognized and/or identified by the subject.
  • the subjective refraction of the test person can be determined depending on which visual objects displayed in the test image the test person recognizes and how well.
  • At least one visual task can be assigned to each test image, which can be set for the test person when displaying the test image in order to be able to determine, for example, their subjective eyesight depending on the optical correction that has been set.
  • the refraction unit and the display unit can be combined with one another in such a way that they have a projection unit which projects light from a virtual test image directly into the at least one eye of the subject depending on the defined optical correction.
  • the direction of view and/or the orientation of the at least one eye of the subject is recorded by the eye tracking unit.
  • the line of sight and the orientation of the eyes depend on one another, so that it is sufficient to record either the line of sight or the orientation of the at least one eye. In particular, it can be recorded at which part of the test image the subject is looking, ie at which area of the display field the subject is currently looking.
  • the eye-tracking unit can either only record the viewing direction and/or orientation of exactly one eye of the subject, or both eyes.
  • the eye-tracking unit can have at least one camera, which takes pictures and/or videos of the at least one eye while the subject looks at the displayed test image.
  • The eye-tracking unit can include, or interact with, program parts which carry out image recognition on the recorded images and/or videos in order to determine the viewing direction and/or orientation of the at least one eye from one or more image features.
  • the eye tracking unit generates measurement data relating to the viewing direction and/or the orientation of the at least one eye of the subject, which can contain images and/or videos.
  • a gaze movement can also be detected, ie a temporally variable viewing direction of the at least one eye.
  • the eye signal is created from the measurement data recorded by the eye tracking unit.
  • the eye signal contains information about the recorded viewing direction and/or orientation.
  • The eye signal can be digital and e.g. contain a directional vector together with a starting point of the directional vector in three-dimensional space.
  • the eye signal is sufficiently detailed, sufficiently precise and/or has sufficient resolution to be able to use the eye signal to determine which area of the test image the respective eye of the subject is currently looking at.
  • the eye signal is generated and/or evaluated as a function of that test image which is displayed precisely when the viewing direction and/or orientation associated with this eye signal is detected by the eye tracking unit.
  • The test image can be calibrated relative to the refraction unit, and in particular a distance between the test image and the refraction unit can be known. This distance can be taken into account, for example, in a calculation of the visus, i.e. the visual acuity.
  • the axial distance and/or a lateral relative position can be previously known as calibration.
  • Information can then be accessed about the distance and/or direction in which individual areas of the test image are arranged relative to the refraction unit. Calibration data from this calibration can be used when evaluating the eye signal.
  • The calibration data of this calibration can be used to calculate the visual acuity from the distance and the size of the visual objects displayed on the test image which the subject identifies.
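  • As an illustrative sketch, the visual acuity value mentioned above could be derived from the known viewing distance and the size of the smallest optotype the subject still identifies, for example as follows; the function name and the convention that the critical detail is one fifth of the optotype height are assumptions of this example.

```python
import math

def decimal_acuity(optotype_height_mm: float, distance_mm: float) -> float:
    """Decimal visual acuity (visus) for the smallest optotype still identified.

    Assumes Landolt/Snellen-style optotypes whose critical detail
    (e.g. the ring gap) is one fifth of the optotype height.
    """
    detail_mm = optotype_height_mm / 5.0
    # Angular size of the critical detail in arc minutes.
    mar_arcmin = math.degrees(math.atan2(detail_mm, distance_mm)) * 60.0
    return 1.0 / mar_arcmin  # visus = 1 / MAR

# Example: a 7.3 mm optotype at 5 m corresponds roughly to visus 1.0.
print(round(decimal_acuity(7.3, 5000.0), 2))
```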
  • The evaluation of the eye signal can include information about how far the subject's pupil(s) are from the refraction unit and in which direction, in particular relative to a pupil center point, a center of rotation of the eye, a point on the subject's retina, and/or a reflex, especially a Purkinje reflex.
  • a calibration can be used for which the precise positioning of the refraction unit relative to the test image does not have to be known.
  • the subject can be requested during a calibration process to view and/or fixate on predetermined areas and/or points of the test image. An eye signal associated with these predetermined areas and/or points can be registered and used as a calibration.
  • the evaluation of the eye signal includes a determination of that area of the test image which the subject is currently looking at.
  • the test image can have a number of areas that are different from one another and whose fixation leads to eye signals that are different from one another. These different eye signals are distinguished from one another within the framework of the evaluation. For example, several different visual objects can be displayed on the test image and the subject can be prompted to look at a specific optotype. The resolution and/or accuracy of the eye-tracking unit is sufficient to recognize which visual object the subject is looking at, e.g. when prompted to do so.
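  • As an illustration of the evaluation just described, a minimal sketch of how an evaluated gaze point could be mapped to the test image area, and thus to the visual object, that the subject is looking at; the region names and coordinates are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned area of the display panel containing one visual object."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float  # panel coordinates, e.g. in mm

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def fixated_object(gaze_xy, regions):
    """Return the name of the region the gaze point falls into, or None."""
    x, y = gaze_xy
    for region in regions:
        if region.contains(x, y):
            return region.name
    return None

# Example: two optotypes side by side; the gaze point lands on the right one.
regions = [Region("left optotype", 0, 0, 200, 200),
           Region("right optotype", 300, 0, 500, 200)]
print(fixated_object((410.0, 95.0), regions))  # -> "right optotype"
```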
  • One test image with an associated visual task provides for the display of two visual objects, such as optotypes, which differ in terms of at least one property, for example in terms of their contrast and/or their size.
  • the subject can be asked to look at the visual object that appears sharper and/or better visible to him. From the viewing direction then taken by the subject, conclusions can be drawn about the subject's optometric parameters, in particular about his or her subjective visual acuity. This can be done, for example, as part of a subjective refraction.
  • subjective means that the result of the measurement method depends on the subject's subjective perception of vision.
  • the procedure supports the refractionist in such a way that possible sources of error, e.g. based on the human factor, can be reduced.
  • Sources of error which can be reduced by the method can be based on an acoustic misunderstanding of a response from the subject, on the subject giving an incorrect response, and/or on the incorrect registration of the response by the refractionist.
  • The detection of the viewing direction and/or the orientation of the eye enables at least a partial automation of a subjective refraction, in particular a full automation of a subjective refraction in which a refractionist is no longer absolutely necessary. This makes it possible, for example, to carry out a subjective refraction remotely, e.g. at the test person's home, without the refractionist having to be on site with the test person.
  • the optometric parameters can relate to information about the at least one eye of the test person, in particular also about the visual system of the test person as a whole.
  • the optometric parameters can in particular be subjective optical parameters which depend on a subjective visual perception of the subject. This can in particular be subjective refraction values, ie those refraction values at which the subject subjectively perceives the best visual impression.
  • the optometric parameters can include, in particular, refraction values that are optimized for near vision and/or far vision.
  • the optometric parameters can also relate to and/or contain physiological parameters.
  • the optometric parameters can include subjective optical parameters such as a subjectively determined best optical correction, ie a subjective refraction, and/or objective optical parameters such as aberrometric measurement data and/or pupillometric measurement data.
  • the optometric parameters can include information about visual acuity and/or contrast vision.
  • At least one refraction value, at least one sensitivity, at least one parameter for contrast vision, and/or at least one visual acuity are determined as the optometric parameters.
  • the optometric parameters can in particular be a single parameter.
  • the optometric parameters preferably include at least one refraction value, in particular at least one subjective refraction value, which is optimized for near vision and/or distance vision, for example.
  • the refraction value can correspond to an optical correction, ie an applied optical effect.
  • the optometric parameters can be determined monocularly and/or binocularly for the subject.
  • the eye signal is created in such a way that it has a horizontal and a vertical component which are dependent on the viewing direction and/or the orientation of the at least one eye.
  • the eye signal can in particular have at least one horizontal and/or at least one vertical component. In a linear approximation, these components can be proportional to a horizontal and/or vertical component of the viewing direction and/or orientation.
  • the eye signal can be created as a vector which has at least one horizontal and/or at least one vertical component. The vector can be aligned approximately parallel to the viewing direction and/or orientation of the at least one eye.
  • The eye signal can in particular be vector-like and/or have a vector, in which case it can contain both a vector direction and a vector base point.
  • A vector used as an eye signal can be evaluated mathematically in a simple manner using standard vector algebra, possibly taking into account a calibration, e.g. a calibration of the displayed test image relative to the eye and/or the refraction unit.
  • the eye signal can be designed as a one-to-one function of the viewing direction and/or orientation.
  • a function value of the eye signal is thus assigned to each line of sight and/or orientation, from which it is possible to infer this line of sight and/or orientation.
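  • To illustrate the vector evaluation described above, a minimal sketch intersecting a vector-like eye signal (base point plus direction) with the calibrated display plane; the coordinate convention is an assumption of this example.

```python
import numpy as np

def gaze_point_on_panel(base, direction, panel_z):
    """Intersect a gaze ray (base point + direction vector) with the display
    plane z = panel_z and return the (x, y) point hit on the panel.

    All coordinates are in a common, calibrated coordinate system in which
    the display panel is perpendicular to the z axis.
    """
    base = np.asarray(base, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < 1e-9:
        raise ValueError("gaze ray is parallel to the display plane")
    t = (panel_z - base[2]) / direction[2]
    hit = base + t * direction
    return hit[0], hit[1]

# Example: eye at the origin looking slightly up and to the right,
# display panel 600 mm in front of the eye.
print(gaze_point_on_panel((0, 0, 0), (0.10, 0.05, 1.0), 600.0))  # -> (60.0, 30.0)
```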
  • the eye signal is generated by a signal unit from measurement data generated by the eye tracking unit.
  • this measurement data can contain one or more images, videos or the like.
  • the signal unit can be at least partially software-controlled and/or contain image recognition software, in particular image recognition software that is trained using “machine learning”.
  • the signal unit can be designed as part of the eye tracking unit.
  • the signal unit can be embodied as part of a controller and/or control unit, in particular as part of a computer system with a processor which controls and/or regulates one, several or all of the method steps.
  • the test image is displayed by a display unit that is controlled by a controller.
  • the display unit can have a screen, a projection and/or a holographic display unit, for example.
  • The display unit can be controlled electronically by the controller in such a way that it displays a test image selected by the controller on a display panel.
  • the controller can also be configured and/or used to control the refraction unit, the eye tracking unit, the optionally present signal unit and/or an optionally present evaluation unit for evaluating the eye signal.
  • the degree of automation of the process depends on how many of these units the controller actually controls and/or regulates. If the controller actually controls all of the units listed above that are implemented, then the method can be carried out fully automatically.
  • the controller can make it possible to record the viewing direction and/or orientation precisely when the display unit displays a new test image which the subject is looking at.
  • a calibration is carried out, during which calibration correction data are generated with regard to a deviation of the viewing direction and/or orientation determined by the eye tracking unit from an actually assumed viewing direction and/or orientation of the at least one eye of the subject.
  • a calibrated eye signal is created and evaluated, with the direction of vision and/or orientation detected by the eye tracking unit being corrected by the calibration correction data in the case of the calibrated eye signal.
  • the determined viewing direction can deviate from the subject's actual viewing direction. This deviation can have different causes and can be compensated by the calibration.
  • the subject can be told that he should first look at a predetermined area of a display field on which the test image can later be displayed. For example, the subject can first fix an upper area, a lower area, a left area and/or a right area in the display field.
  • the eye tracking unit can record the directly determined viewing direction and/or orientation. A few selected areas and/or points on the display panel can be sufficient for adequate calibration.
  • the calibration can take place automatically, in particular controlled by an AI or a machine. After the calibration, the actual viewing direction and/or orientation of the eye can be determined more reliably and evaluated as a calibrated eye signal.
  • the eye signal is calibrated by detecting the viewing direction and/or the orientation of the at least one eye of the subject using an eye tracking unit while the subject is looking at predetermined areas and/or points.
  • the subject can be instructed to first look at a predetermined area of a display field on which the test image can later be displayed.
  • the subject can first fix an upper area, a lower area, a left area and/or a right area in the display field.
  • the eye tracking unit can record the directly determined viewing direction and/or orientation. A few selected areas and/or points on the display panel can be sufficient for adequate calibration.
  • the calibration can take place automatically, in particular controlled by an AI or a machine. After this calibration, points and/or areas of the display unit can be assigned to the viewing directions and/or orientations of the eye detected by the eye tracking unit.
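  • A minimal sketch of how such calibration correction data could be obtained: the raw gaze points reported while the subject fixates predetermined panel points are fitted to the known point positions, here with a simple affine least-squares model; all names and values are illustrative assumptions.

```python
import numpy as np

def fit_gaze_calibration(measured, targets):
    """Fit an affine correction mapping raw gaze points (as reported by the
    eye tracking unit) onto the known panel points the subject was asked to
    fixate. Returns a function that applies the correction to new points."""
    measured = np.asarray(measured, dtype=float)
    targets = np.asarray(targets, dtype=float)
    # Augment with a constant column so that offset errors are corrected too.
    A = np.hstack([measured, np.ones((len(measured), 1))])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)

    def corrected(point):
        x, y = point
        return tuple(np.array([x, y, 1.0]) @ coeffs)

    return corrected

# Example: the tracker reports points shifted by (+5, -3) mm; four
# calibration fixations are enough to recover and remove the offset.
raw = [(5, -3), (205, -3), (5, 197), (205, 197)]
true = [(0, 0), (200, 0), (0, 200), (200, 200)]
correct = fit_gaze_calibration(raw, true)
print(correct((105.0, 97.0)))  # -> approximately (100.0, 100.0)
```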
  • A time-dependent eye signal is generated, which contains information regarding changes in the viewing direction and/or orientation of the at least one eye of the subject. An eye movement and/or orientation movement can thus be derived from the time-dependent eye signal and evaluated. This makes it possible to record a plurality of viewing directions taken one after the other and/or to use moving test images, such as a test video, the viewing of which prompts and/or requires the test subject to perform a gaze movement dependent on the test video.
  • The test image is used to set a viewing-direction-dependent visual task for the test subject, in which, depending on the recorded and evaluated viewing direction and/or orientation of the at least one eye of the test subject, conclusions can be drawn about the optometric parameters of the test subject at the respectively set optical correction.
  • viewing direction-dependent visual tasks are described in more detail below.
  • Different types of viewing direction-dependent visual tasks can be set.
  • the viewing direction and/or eye orientation taken by the test person can depend on which of the several displayed visual objects the test person can see well or poorly.
  • An area of the test image and/or display field which the subject looks at as part of the visual task can be dependent on visual acuity (e.g. dependent on the optical correction). By capturing the area of the test image which the subject is looking at during the visual task, the refraction used and/or the visual acuity of the subject can be evaluated and these can be determined as optometric parameters.
  • the test image is designed in such a way that it prompts the test person to unconsciously adopt a predetermined viewing direction and/or to carry out a predetermined viewing movement, depending on his or her visual acuity.
  • the direction of gaze and/or the movement of the gaze and/or the orientation of the at least one eye of the test person is recorded by means of the eye tracking unit and registered as passive feedback while the test person looks at this test image.
  • The subject is unconsciously prompted by the image itself to take a predetermined viewing direction and/or to perform a predetermined eye movement. This can be done, for example, by presenting the test person with two fields as a test image. One of the two fields can be empty, the other field can show a visual object.
  • the test image can be designed as a moving test image that encourages the subject to perform a predetermined eye movement depending on his visual acuity.
  • This can in particular be an eye movement in which the subject follows a visual stimulus such as an optotype with his gaze, provided he can clearly recognize the displayed visual stimulus.
  • the eye movement can thus be designed in particular as an eye movement following a visual stimulus.
  • This type of visual task is therefore particularly suitable for subjects who cannot be expected to follow instructions (e.g. children), subjects who cannot express themselves otherwise due to a disability, and/or subjects who do not speak the language of the refractionist.
  • the test subject is asked to provide predetermined active feedback when viewing the test image, which is dependent on his or her optometric parameters.
  • active feedback for solving the visual task can thus be provided.
  • the use of a visual task with active feedback does not exclude the use of a visual task with passive feedback.
  • Tasks with passive feedback can be performed after or before visual tasks with active feedback, or vice versa, so that different types of visual tasks can be part of the same measurement procedure.
  • the visual task is usually explained to the test person, with it being explained in particular in which direction he should look when performing the visual task.
  • the viewing direction to be taken here can depend on the visual acuity and/or at least one other optometric parameter of the subject.
  • the subject can be requested to look at that one of a plurality of optotypes which he/she subjectively sees best and/or better and/or with higher contrast.
  • the active feedback includes additional feedback in addition to the actively assumed line of sight, which is given by the subject in at least one of the following ways:
  • Manual actuation of a trigger can be done with a hand and/or foot, for example by pressing a button or a pedal. Alternatively, it can be done by an activation gesture such as swiping over a touch-sensitive field such as a touchpad and/or touch display.
  • Performing a gesture can enable gesture control.
  • the subject can perform a confirmation gesture, for example with his hand and/or his foot, at the moment he is looking at a region of the test image that depends on his visual acuity. Deliberate blinking and/or closing of the at least one eye for a predetermined period of a few seconds can be detected directly by the eye tracking unit.
  • the subject can be instructed to first look at a specific area of the test image and then to blink immediately.
  • An acoustic feedback can also be registered.
  • The acoustic feedback is preferably as simple as possible, for example consisting only of a sound like "mhm" for which the test person does not have to open his mouth, so that his line of vision remains largely undisturbed.
  • Additional feedback enables particularly reliable and conscious feedback, which makes it possible to reliably register the viewing direction associated with the test image.
  • this active feedback includes the subject recognizing and fixating on one of a plurality of visual objects displayed as a test image.
  • When fixating on the visual object, the test person assumes the actively taken viewing direction, which is recorded and registered as part of the active feedback. For example, two different optotypes can be displayed as a visual task and the subject can be asked to fixate on and look at the visual object that he recognizes better.
  • the subject can be instructed to look at the visual object that has a stronger or weaker expression of a given property.
  • this active feedback includes the subject actively taking a viewing direction that depends on a property of at least one visual object displayed as a test image, which the subject recognizes.
  • the visual task can be designed in such a way that it can only be solved by the subject if he recognizes the displayed visual object. If, for example, a Landolt ring that is customary for visual tasks is used as the optotype and visual object, the subject can be instructed to look actively in that direction and/or at the edge of the test image to which the Landolt ring is open. The viewing direction then taken is recorded and evaluated in the form of the eye signal. With this type of visual task, for example, only a single optotype can be displayed.
  • Test variants can also be used in which the line of sight to be taken is indicated in writing. For example, "top", "bottom", "right" or "left" can be displayed as a visual object, so that the subject can follow the instruction written as a visual object with his line of sight if he can solve the visual task.
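  • As an illustration, a minimal sketch of how such direction-type feedback (Landolt ring opening or written instruction) could be evaluated by comparing the recorded gaze shift with the expected direction; the tolerance angle and names are assumptions of this example.

```python
import numpy as np

# Expected gaze directions (display-plane vectors) for the four possible
# Landolt-ring gap positions or written instructions.
EXPECTED = {
    "top": (0.0, 1.0),
    "bottom": (0.0, -1.0),
    "left": (-1.0, 0.0),
    "right": (1.0, 0.0),
}

def task_solved(gap_direction, gaze_shift, max_angle_deg=30.0):
    """Check whether the recorded gaze shift points towards the edge of the
    test image indicated by the optotype, within an angular tolerance."""
    expected = np.asarray(EXPECTED[gap_direction], dtype=float)
    shift = np.asarray(gaze_shift, dtype=float)
    cos_angle = shift @ expected / (np.linalg.norm(shift) * np.linalg.norm(expected))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle_deg <= max_angle_deg

# Example: Landolt ring open to the right, gaze moved mostly to the right.
print(task_solved("right", (0.9, 0.1)))  # -> True
```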
  • This active feedback means that the subject first takes a line of sight that is relevant for the feedback, which depends on at least one visual object displayed as a test image and which is detected by the eye tracking unit, and then looks at an actuation field, which is likewise detected by the eye tracking unit.
  • the subject can, for example, first fixate on a visual object displayed on the test image for a predetermined time and then on an actuation field.
  • A field or a plurality of fields can be provided as the actuation field, which is/are displayed either as part of the test image or adjacent to the test image. In one embodiment of this, it is first detected which of the displayed visual objects the test person fixates on.
  • This detection of the fixation can be confirmed by automatic feedback, for example by a colored or other marking of the visual object fixated in this way, for example by a circle or the like.
  • the subject can thus select one of the displayed visual objects and activate it with his gaze, similar to clicking on it with a computer mouse.
  • The subject can then confirm his selection by fixating on the actuation field. Additional control fields can be provided, the fixation of which leads to a termination, to deselecting the selected visual object, to a pause, or to similar functions.
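  • A minimal sketch of the gaze-based select-and-confirm interaction described above, using a dwell-time criterion; the region names, dwell time and sampling rate are illustrative assumptions.

```python
def select_and_confirm(gaze_samples, dwell_ms=800, sample_ms=20):
    """Tiny state machine: the subject first selects a visual object by
    dwelling on its region, then confirms by dwelling on the region named
    'confirm'. gaze_samples is a sequence of region names, one per
    eye-tracker sample (None when no region is fixated)."""
    needed = dwell_ms // sample_ms
    selected, run_region, run_length = None, None, 0
    for region in gaze_samples:
        run_length = run_length + 1 if region == run_region else 1
        run_region = region
        if run_length >= needed and region is not None:
            if region == "confirm" and selected is not None:
                return selected        # selection confirmed
            if region != "confirm":
                selected = region      # (re)select, await confirmation
            run_length = 0
    return None                        # nothing confirmed yet

# Example: about 1 s on the left optotype, then about 1 s on the
# confirmation field.
samples = ["left optotype"] * 50 + ["confirm"] * 50
print(select_and_confirm(samples))     # -> "left optotype"
```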
  • the test person can control a visible or invisible cursor through the test image with his/her gaze.
  • This active feedback includes the subject following with his gaze at least one visual object displayed as a test image, while the direction and/or orientation of his at least one eye is detected by the eye tracking unit as a time-dependent gaze movement and/or eye movement. It can be checked here whether the subject recognizes the displayed visual object to the extent that he can reliably follow it.
  • the evaluation not only places a single viewing direction and/or orientation in connection with the test image, but rather a movement sequence of the viewing direction in connection with a test video.
  • the eye signal created from the recorded measurement data can also be time-dependent and can be evaluated in this time-dependency.
  • Alternatively, the test person can follow at least one visual object displayed as a test image with his/her gaze as part of passive and/or unconscious feedback, while the eye tracking unit detects the direction and/or orientation of his/her at least one eye as a time-dependent gaze movement and/or eye movement.
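  • As an illustration, a minimal sketch of how such a time-dependent eye signal could be compared with the trajectory of the moving visual object to decide whether the subject follows it reliably; the RMS criterion and tolerance are assumptions of this example.

```python
import numpy as np

def follows_target(gaze_trace, target_trace, max_rms_error_mm=25.0):
    """Decide whether the time-dependent eye signal follows the moving visual
    object: the RMS distance between gaze points and target points (sampled
    at the same instants, in mm on the panel) must stay below a tolerance."""
    gaze = np.asarray(gaze_trace, dtype=float)
    target = np.asarray(target_trace, dtype=float)
    rms = np.sqrt(np.mean(np.sum((gaze - target) ** 2, axis=1)))
    return rms <= max_rms_error_mm

# Example: the target moves from left to right; the gaze lags slightly behind.
t = np.linspace(0.0, 2.0, 100)
target = np.column_stack([200.0 * t, np.zeros_like(t)])
gaze = np.column_stack([200.0 * np.clip(t - 0.05, 0.0, None), np.zeros_like(t)])
print(follows_target(gaze, target))  # -> True (average lag of about 10 mm)
```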
  • a visual task associated with the displayed test image is explained to the subject in at least one of the following ways:
  • chatbot or agent which is controlled by artificial intelligence and with which the subject communicates acoustically and/or in text form.
  • the explanation can contain how the subject can control and/or solve the visual task associated with the test image with his eyes.
  • The explanation can also contain how the subject can give active feedback which contains his solution to the visual task. Since the subject's exact visual acuity has not yet been determined when the task is explained, it may be difficult to explain the visual task in a text form which the subject may not be able to read clearly enough. Therefore, explaining the visual task acoustically is preferred, in particular an explanation by an AI-controlled chatbot.
  • several at least partially different test images are displayed to the subject one after the other and/or at least partially different optical corrections are presented to the at least one eye one after the other and/or simultaneously.
  • the viewing direction and/or orientation of the subject is recorded for each of the displayed test images and/or for each provided optical correction and evaluated depending on the respectively displayed test image and/or the respective provided optical correction.
  • the test images can differ in that they display ever smaller visual objects and it is examined up to which size of optotypes the test person can still recognize them.
  • the sequence of the test images displayed can follow a previously determined pattern and in particular can be controlled fully automatically.
  • the test images can be activated one after the other by a refractionist and only the image evaluation can take place automatically.
  • the selection of the following test image can depend on the extent to which the subject was able to recognize the current test image.
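  • A minimal sketch of one way the following test image could be selected depending on whether the current one was recognized, here as a simple one-up/one-down staircase over the available optotype sizes; the sizes are illustrative.

```python
def next_optotype_size(current_size, recognized, sizes):
    """Pick the optotype size for the next test image from the available
    sizes (sorted largest to smallest): step to a smaller optotype if the
    current one was recognized, otherwise step back to a larger one."""
    i = sizes.index(current_size)
    if recognized and i + 1 < len(sizes):
        return sizes[i + 1]   # harder: smaller optotype
    if not recognized and i > 0:
        return sizes[i - 1]   # easier: larger optotype
    return current_size

# Example run over a sequence of responses; sizes given in mm.
sizes = [29.0, 14.5, 7.3, 5.8, 3.6]
size = sizes[0]
for recognized in [True, True, True, False, True]:
    size = next_optotype_size(size, recognized, sizes)
print(size)  # the staircase settles near the threshold size
```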
  • the different optical corrections can be selected and held up to at least one eye.
  • the selection of the optical correction used in each case can depend on the gaze evaluation.
  • the optical corrections can differ from one another in particular in terms of their spherical and/or cylindrical strength and their cylinder axis.
  • Both the test images and the provided and/or presented optical corrections can be varied.
  • the at least one eye can be presented with a plurality of different optical corrections at the same time, through which the subject can, for example, view different areas of the test image.
  • a light field display can be used for this, for example, which is explained in more detail below.
  • the test person can be asked to look at the area that is subjectively best perceived by the test person.
  • the at least partially different test images are displayed to the test person by means of a light field display.
  • The light field display can at least partially generate the at least partially different optical corrections. This means that the light field display generates the optical effects of the different optical corrections either alone, or that the different optical corrections are composed of, for example, a physically present optical correction and additional optical effects generated by the light field display. Details of this embodiment are given below.
  • the method is carried out semi-automatically or fully automatically and a subjective refraction is carried out and/or the visual acuity and/or visual sensitivity of the subject is determined semi-automatically or fully automatically.
  • One or more intermediate results can be stored during the refraction in order to generate sufficient measurement data from various visual tasks when approaching the actually required optical correction, from which the sensitivity of the visual acuity can be determined.
  • the visual sensitivity can be used to adapt the addition and/or the refractive power transition to the visual sensitivity of the subject in the case of ophthalmic spectacle lenses.
  • One aspect relates to a method for determining optometric parameters of a subject, in particular according to the above aspect, with the steps:
  • the light field display displays a test image for determining the subjective refraction of the subject, the test image having a plurality of test image areas; the light field display for each of the test image areas simulates an assigned optical correction for at least one eye of the subject in such a way that the impression is created that the at least one eye is viewing the respective test image area through the respectively assigned optical correction; at least two of the simulated, assigned optical corrections differ from one another with regard to their optical effect; and the test image areas with the assigned optical corrections simultaneously are displayed; and
  • the individual method steps do not necessarily have to be carried out in the order listed above. This means that the individual method steps can be carried out either in the order listed, in a different order and/or also at least partially simultaneously.
  • the method can be designed either as an embodiment of the aspect described above or as an independent method.
  • the light field display can be used both (at least partially) as the refraction unit and for displaying the test image.
  • the viewing direction and/or orientation of the at least one eye can be detected by means of the eye tracking unit in order to determine which of the displayed test areas the subject is currently viewing with the simulated, assigned optical correction.
  • the simultaneous display of different test image areas with different assigned optical corrections offers technical advantages.
  • the simultaneous display enables a direct comparison of the different optical corrections with each other.
  • In conventional methods, in contrast, test images are displayed one after the other.
  • The test person is then not always sure which display actually gives the better visual impression. This often leads to repeated switching back and forth between the test images and/or corrections made available.
  • the different optical corrections can be directly and immediately compared with one another in order to be able to more easily determine and/or select the subjectively best and/or subjectively better visual impression. This can allow for a more reliable determination of the optometric parameters.
  • The simultaneous display can thus speed up and/or shorten the parameter determination.
  • the test image areas can either all display the same image and/or visual object, or at least partially different images and/or visual objects.
  • the display of the same images and/or visual objects in the different test image areas can make it easier to select the best associated optical correction.
  • a large coherent test image can also be displayed, which extends over all or at least several of the test image areas. Different optical corrections can be assigned to individual areas of the test image.
  • a phoropter as a refraction unit can be dispensed with, as can optical corrections that are physically present. This makes it possible to reduce the required components and/or the material costs.
  • the light field display can generate a light field which is projected into the subject's eye or eyes.
  • The light field display can simulate different spherical and/or cylindrical optical effects without these actually having to be held up in front of the eye or eyes.
  • the light field display can display the test image from several viewing angles at the same time. This allows for example two images - one for each eye - to be displayed at the same time from slightly different perspectives to create a 3D image.
  • the test image displayed can be adapted to the position of the subject.
  • the subject and/or his eyes can be detected, for example by a camera and/or the eye tracking unit that may be present.
  • For example, 3D structures behind the monitor plane can be simulated, and/or light sources that illuminate individual areas differently.
  • the light field display can simulate the different spherical and/or cylindrical optical effects simultaneously and/or sequentially.
  • The simultaneously simulated effects can be applied to staggered objects (e.g. optotypes), which are perceived at different positions, so that the viewer can perceive them separately from each other.
  • different spherical optical effects can first be simulated simultaneously, and then (e.g. for a specific optimized spherical optical effect) several different cylindrical optical effects can be simulated simultaneously.
  • a light field display can be in the form of, for example, a planar light source, in which the position and/or the direction and/or the intensity and/or the color of the light emission can be varied.
  • the light field display can have, for example, a field of lenses and/or a lens array, which can be arranged at a distance equal to the focal length of the individual lenses in front of a monitor, which has a pixel array.
  • a 4D light field can be generated.
  • the spatial resolution can depend on the number of individual lenses in the lens field and/or lens array, while the angular resolution of the light field display can depend on a number of monitor pixels in the monitor behind each individual lens. At least one individual lens and/or one emission direction of the generated part of the light field can be or will be assigned to each monitor pixel by means of a calibration.
  • each superpixel is formed by all pixels arranged behind a respective lens of the lens field (ie behind each lenslet) and/or pixels assigned to this lens.
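  • To illustrate the resolution trade-off described above, a rough back-of-the-envelope sketch of the spatial and angular resolution of a lenslet-based light field display; the monitor and lenslet counts are hypothetical example values.

```python
def light_field_resolution(monitor_px_x, monitor_px_y, lenslets_x, lenslets_y):
    """Rough resolution budget of a lenslet-based light field display: the
    spatial resolution equals the number of lenslets, while the angular
    resolution per lenslet is set by the monitor pixels forming the
    superpixel behind each lenslet."""
    views_x = monitor_px_x // lenslets_x  # addressable directions, horizontal
    views_y = monitor_px_y // lenslets_y  # addressable directions, vertical
    return {
        "spatial (lenslets)": (lenslets_x, lenslets_y),
        "angular (views per lenslet)": (views_x, views_y),
        "pixels per superpixel": views_x * views_y,
    }

# Example: a 3840 x 2160 monitor behind a 240 x 135 lenslet array yields
# 16 x 16 addressable directions per spatial sample.
print(light_field_resolution(3840, 2160, 240, 135))
```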
  • the light field display can have at least one array of micromirrors. In this case, two or more arrays of micromirrors can also be arranged one behind the other in order to generate the light field.
  • a light field display with at least one scanning mirror can be used, e.g. with a single scanning mirror or a combination of two or more scanning mirrors in a row.
  • the lenses and/or mirrors of the light field display can each be designed in one color or in multiple colors, in particular red or green or red/green. This makes it possible to use a chromatic aberration of the eye to determine optometric parameters, e.g. using bichromatic refraction methods, red-green tests and/or red-green matching. This offers a better comparison with simultaneous display with different corrections.
  • the light field display can display at least one optotype and/or another visual object in each test image area, and also simulate a different optical effect of the correction for each test image area. At least two, preferably all, of the simultaneously simulated optical corrections differ from one another with regard to their spherical and/or cylindrical effect.
  • the light field display simultaneously displays at least two test image areas. In some embodiments, three, four or even more test image areas can also be displayed simultaneously. Approximately two to a maximum of eight test image areas are preferably displayed at the same time. The number of test image areas displayed at the same time can therefore have an upper limit in order not to overwhelm and/or confuse the subject. In individual embodiments, however, this maximum number can also be exceeded.
  • The number of test image areas displayed can vary. For example, first a large number of test image areas can be displayed, each with different optical corrections, and then the number can be reduced to a few test image areas (e.g. those conveying a particularly good subjective visual impression). At least in one method step, only two different test image areas with two different optical corrections can be displayed to the subject in order to enable a direct comparison of a specific change. This can take place in particular in the last steps of a subjective refraction determination, cf. method step e) of an exemplary embodiment for carrying out a subjective refraction described below.
  • a 2D, a 3D, and/or a 4D test image can be displayed as the test image, which has a number of test image areas.
  • test image areas can be approximately the same size and/or can be displayed uniformly on a test plane, in particular in rows and/or columns.
  • In one method step, corrections that differ only with regard to the spherical power can be simulated and thereby "displayed" simultaneously.
  • In another method step, corrections that differ only with regard to the cylindrical power can be simulated simultaneously.
  • In a further method step, corrections that differ only with regard to the axis position of the cylindrical effect can be simulated and thus "displayed" simultaneously.
  • In yet another method step, only corrections that differ from one another with regard to the cylinder strength can be simulated.
  • the simulated, different optical corrections can be sensibly grouped, in particular grouped according to method steps for carrying out a subjective refraction (cf. method steps a) to e) described below).
  • different optical corrections are simulated simultaneously, at least with regard to their cylindrical effect and/or axis position.
  • Test images, each with a red and a green area, can be displayed by the light field display.
  • a bichromatic refraction method can thus be carried out in which, for example, a chromatic aberration of the eye can be used and/or determined.
  • red-green tests and/or (binocular) red-green comparisons can be carried out. If these images are displayed simultaneously with different simulated optical effects from the light field display, this allows for a better comparison of the different optical corrections.
  • light fields can be generated with a test image for an individual eye, and/or binocular light fields can be generated in which each eye is assigned and simulated a dedicated optical effect.
  • each eye can be shown dedicated test images which show different objects and/or the same object to each eye.
  • Binocular light fields can be generated in such a way that test images for determining phoria and/or correction prisms are displayed for each eye.
  • a light field can be generated which shows a circle to one eye and a cross to the other eye, each with an assigned optical effect (corresponding to the assigned optical correction). If the subject sees the cross in the center of the circle, a binocular fine adjustment can be achieved.
  • other symbols and/or objects may be displayed which together and properly matched create a coherent picture.
  • Binocular light fields can be generated in such a way that each eye is shown at least one dedicated object and, in addition, at least one common object for both eyes. In this way, a fusion incentive can be generated, which can be used to carry out binocular refractions, for example with binocular fine adjustment.
  • a light field control can be provided, via which light field control signals for displaying specific test image areas and/or for simulating specific optical corrections are sent to the light field display.
  • These light field control signals can be generated automatically and/or processed and/or generated by a refractionist.
  • the selection of at least some of the light field control signals can depend on feedback from the subject, in particular feedback recorded by means of an eye tracking unit and/or by the refractionist.
  • the subject can select that or those areas for which he or she has the best subjective visual impression. He can give feedback, e.g. telling a refractionist in the classic way which of the test image area(s) gives the subject the best visual impression.
  • the feedback can also be recorded by an eye tracking unit, a voice recognition unit and/or a button or the like.
  • a (monocular or binocular) best subjective refraction result can be determined as the subject's optometric parameters.
  • the optometric parameters result from the displayed test image and the respectively assigned, simulated optical corrections.
  • the optometric parameters can also relate to and/or contain physiological parameters.
  • The optometric parameters can include subjective optical parameters such as a subjectively determined best optical correction, i.e. a subjective refraction, and/or objective optical parameters such as aberrometric measurement data and/or pupillometric measurement data.
  • the optometric parameters can include information about visual acuity and/or contrast vision.
  • the light field display is used to display optical corrections with higher order (i.e. at least third order) optical effects, in particular simultaneously.
  • Both corrections with an optical effect of the second order can be displayed, i.e. corrections with a spherical and/or cylindrical optical effect, as well as additionally (or exclusively) corrections with an optical effect of the third and/or fourth order, i.e. corrections for e.g. coma, asymmetry errors, spherical aberration, trefoil and/or pentafoil.
  • Optical effects with even higher orders can also be used, but the ones listed above are the most relevant in eyewear calculation and/or manufacture.
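  • As an illustration, a minimal sketch of how an optical effect combining second-order terms (sphere/cylinder) with higher-order terms such as coma and trefoil could be described, using unnormalized Zernike polynomials as one common representation; the coefficients are purely illustrative.

```python
import numpy as np

def wavefront(rho, theta, coeffs):
    """Evaluate a wavefront as a weighted sum of a few (unnormalized) Zernike
    terms over normalized pupil coordinates (rho in [0, 1], angle theta).
    'defocus' and 'astig' correspond to spherical/cylindrical corrections;
    'coma' and 'trefoil' are third-order effects that a light field display
    could additionally simulate."""
    terms = {
        "defocus": 2.0 * rho**2 - 1.0,                       # Z(2, 0)
        "astig": rho**2 * np.cos(2.0 * theta),               # Z(2, 2)
        "coma": (3.0 * rho**3 - 2.0 * rho) * np.cos(theta),  # Z(3, 1)
        "trefoil": rho**3 * np.cos(3.0 * theta),             # Z(3, 3)
    }
    return sum(c * terms[name] for name, c in coeffs.items())

# Example: mostly defocus with a small amount of coma.
print(wavefront(0.8, 0.0, {"defocus": 0.5, "coma": 0.1}))
```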
  • the choice of available optical corrections can be increased and/or made more flexible.
  • the desired optical effect can be generated and/or simulated directly without having to make a real correction (e.g. in advance).
  • The optical effects of the different optical corrections are generated from an interaction of an optical effect of at least one optical correction provided in rem and an additional optical effect generated by the light field display.
  • the correction provided in rem can be, for example, at least one common lens with a previously known optical effect, or, for example, a dedicated lens for each eye, such as old, existing glasses and/or test glasses.
  • the correction provided in rem has at least one classic lens.
  • the correction provided in rem has at least one adaptive lens and/or another adaptive element.
  • the correction provided in rem has at least one displaceable lens and/or a lens system with at least two lenses, which are arranged displaceably relative to one another and/or to the pixel array in such a way that different spherical and/or cylindrical optical effects can be brought about as a result.
  • the correction provided in rem has both at least one classic lens and at least one adaptive lens and/or another adaptive element.
  • a lens is selected from a lens magazine as the correction held in rem, such as in a classic phoropter. This enables the use of a plurality of lenses with different optical powers.
  • At least one Fresnel lens is used as the correction provided in rem.
  • the optical effect of the overall correction provided is made up of the optical effect of the optical correction(s) provided in rem and the additional optical effects generated by the light field display.
  • the optical correction provided in rem can be arranged in the beam path between the subject and all test image areas and thus roughly preset the optical corrections.
  • the light field display can be used to fine-tune the optical effect roughly preset by the optical correction provided. In this way, the optical effect of the real correction can be changed by means of the light field display in individual test image areas, for example, by approximately +/-0.50 dpt or approximately +/-1.00 dpt.
  • For example, if the correction provided in rem has a spherical power of +5.00 dpt, this optical effect can be fine-tuned by means of the light field display in the individual test image areas between +4.50 dpt and +5.50 dpt.
  • the correction provided in rem only generates a spherical optical effect, while the light field display additionally generates different cylindrical optical effects.
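  • A minimal sketch of how the overall corrections per test image area could be composed from a single spherical correction provided in rem and the spherical fine tuning and cylindrical effects generated by the light field display; thin-lens powers are simply added here, and all dioptre values are illustrative.

```python
def total_corrections(in_rem_sphere_dpt, lightfield_sphere_offsets_dpt, lightfield_cyl_dpt):
    """Combine one physically provided ('in rem') spherical correction with
    per-test-image-area spherical fine tuning and cylindrical effects from
    the light field display. Powers simply add in this simplified sketch."""
    return [
        {"area": i, "sphere": in_rem_sphere_dpt + d_sph, "cylinder": d_cyl}
        for i, (d_sph, d_cyl) in enumerate(zip(lightfield_sphere_offsets_dpt,
                                               lightfield_cyl_dpt))
    ]

# Example: a +5.00 dpt lens provided in rem, fine-tuned in 0.25 dpt steps
# between +4.50 dpt and +5.50 dpt, with one area additionally showing
# -0.50 dpt of cylinder.
print(total_corrections(5.00,
                        [-0.50, -0.25, 0.00, 0.25, 0.50],
                        [0.00, 0.00, -0.50, 0.00, 0.00]))
```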
  • a change in the refraction values of the subject can be determined, in particular by means of a light field display with a low dynamic range.
  • One aspect relates to a system for determining optometric parameters of a subject with: - A refraction unit for setting an optical correction for at least one eye of the subject;
  • a display unit for displaying a test image for determining the subjective refraction of the subject
  • an eye tracking unit which is designed and configured to detect a line of sight and/or an orientation of the at least one eye of the subject while the subject is looking at the displayed test image;
  • a signal unit which creates an eye signal which contains information about the recorded viewing direction and/or orientation of the at least one eye of the subject;
  • An evaluation unit which determines the optometric parameters of the subject, evaluating the eye signal as a function of the displayed test image.
  • the device can be used to carry out the method according to the preceding aspect. For this reason, all statements relating to the method also relate to the device and vice versa.
  • the device can be designed in several parts.
  • the display unit can be separate and/or designed separately from a refraction device.
  • the refraction device can include, for example, the refraction unit and the eye tracking unit and optionally a signal unit and an evaluation unit.
  • An integrated or separate control can be provided, which contains the signal unit and/or the evaluation unit and/or contributes to the control of the different units of the system or takes over these completely in order to be able to carry out an automatic detection of the subjective refraction.
  • the device and/or method can be used in performing subjective refraction.
  • the optometric parameters recorded and determined in this way can also be used to be able to calculate a visual aid for the test person, which is individually adjusted to the test person and is optimized.
  • measurement data can be generated which are relevant for the production of a visual aid such as glasses for the subject.
  • the device can determine and/or calculate optical parameters (such as, for example, the pupillary distance) of the subject, which are required and used in the manufacture of the glasses.
  • One aspect relates to a system for determining optometric parameters of a subject, in particular according to the aspect described above, with a light field display which is configured to: display a test image for determining the subjective refraction of the subject, which has a plurality of test image areas; and for each of the test image areas to simulate an assigned optical correction for at least one eye of the subject in such a way that the impression is created that the at least one eye is viewing the respective test image area through the respectively assigned optical correction; wherein at least two of the simulated, assigned optical corrections differ from one another with regard to their optical effect; and wherein the test image areas with the associated optical corrections are displayed simultaneously.
  • the system can also have an evaluation unit, which determines the optometric parameters of the subject depending on the displayed test image and the displayed optical corrections.
  • the evaluation can be carried out by an operator such as an optician, possibly using aids such as a computer and/or a computer program product that do not necessarily belong to the system.
  • This evaluation can at least be performed by a computer program product, which can be configured as part of the system.
  • the computer program product can carry out the evaluation fully automatically, especially in combination with an eye tracking unit.
  • the system can either be designed as an embodiment of the aspect described above or as a stand-alone system which is independent of the eye tracking unit, the signal unit and/or the eye signal.
  • the light field display can be used as the refraction unit and/or as the display unit.
  • the viewing direction and/or orientation of the at least one eye can be detected by means of the eye tracking unit in order to determine which of the displayed test areas the subject is viewing with the simulated, assigned optical correction.
  • the device can be used to carry out the method according to the above aspect with the light field display. For this reason, all statements relating to the method also relate to the device and vice versa.
  • the system is designed fully automatically in such a way that no specialist personnel and/or no personnel at all are required for use.
  • the test person can even operate the system alone as a layperson.
  • the system can be designed as a kind of kiosk, similar to a machine for making passport photos.
  • the subject can enter the kiosk and/or position themselves in front of the system and their optometric parameters are determined fully automatically, e.g. triggered by a start signal such as a push of a button and/or proof of payment.
  • the system is integrated into a mobile terminal such as a smartphone and/or tablet and/or laptop.
  • the mobile terminal device can have a camera that can be used, for example, to detect the distance of the subject and/or the subject's eyes.
  • the camera can be designed as a normal digital camera and/or as a depth camera.
  • the mobile end device can have an infrared camera for pupil detection and/or a distance sensor for distance detection and/or calibration.
  • A display of the mobile terminal can be designed to display the test image or images.
  • the mobile end device can also have a light field display in order to (at least partially) generate the test image and the optical corrections.
  • the light field display can be integrated into the regular display, for example.
  • the camera is also used for eye tracking according to the aspect of the invention described at the outset.
  • One aspect relates to a computer program product comprising computer-readable program parts which, when loaded into a processor and executed, cause a device according to one of the above aspects to carry out a method according to one of the aspects described above, the computer program product at least partially controlling and/or regulating at least one of the following units:
  • the computer program product can be executed, for example, on a controller and/or control unit, and can be used, for example, to control all of the units mentioned above, so that the subjective refraction can be carried out fully automatically.
  • a light field display can be used both as a refraction unit and as a display unit.
  • FIG. 1A shows a cross cylinder in a first position in a schematic representation
  • FIG. 1B shows a cross cylinder in a second position in a schematic representation
  • FIG. 2 shows a schematic representation of a first embodiment of a system for determining optometric parameters of a subject in a first display state
  • FIG. 3 shows a schematic representation of a second embodiment of a system for determining optometric parameters of a subject in a second display state
  • FIG. 4 shows a schematic representation of a light field display of a system for determining optometric parameters of a subject
  • FIG. 5 shows a schematic representation of a light field display of a system for determining optometric parameters of a subject.
  • a refraction unit is combined with an eye-tracking unit, with which an eye movement of the subject to be refractioned can be recorded at least as part of feedback on a visual task.
  • This feedback can be displayed to the person doing the refraction, ie the refractionist, for example on an operating unit of the system, and can thus be used for a subjective refraction by the refractionist.
  • the conventional acoustic communication between the refractionist and the subject, which can be tedious and error-prone, can be supplemented or replaced as feedback.
  • acoustic feedback for the subject is usually associated with uncomfortable movement of the lower jaw. Furthermore, such a movement can lead to an unwanted change in the position and/or orientation of the eyes to the refraction device, and thus to measurement errors. This can be improved by using eye movement as at least part of the feedback.
  • the feedback can be used for a partially or fully automated refraction.
  • This feedback can be used for a distance refraction where the refractionist is not directly on site.
  • the test person's feedback can be transmitted to the refractionist either completely and immediately as part of a partially automated refraction, or only as required or with a delay.
  • the invention can be used for the following types of subjective refractions:
  • the subjective refraction is performed manually.
  • the refractionist himself is able to specify the settings of the refraction unit and/or the contents of the display unit for each step.
  • the subjective refraction is performed in a fully automated manner.
  • the system can automatically determine the settings of the refraction unit and the content of the display unit based on the feedback it has received from the subject. If required and available, content displayed on a control unit can also be automatically assigned. Furthermore, the end of the refraction process and the result can be determined automatically. The intervention of a refractionist is not usually required here.
  • a partially automated subjective refraction process is performed in which the system performs some parts of the refraction process (e.g. a determination of the sphere, cylinder and/or the axis position) in an automated manner, while other parts of the refraction process (e.g. a binocular adjustment of the addition) are performed manually.
  • the system can at least partially make suggestions for settings of the refraction unit and/or the content of the display unit, which the refractionist can at least partially override and replace with his own settings or content.
  • the operating unit is preferably used for this purpose.
  • the system has at least one refraction unit and one eye tracking unit. It can preferably have a display unit for displaying the visual task and/or visual object to be viewed by the subject.
  • the system has an operating unit for displaying at least the current and/or final state of the refraction set in the refraction unit and/or for changing it manually and/or the recorded feedback from the subject.
  • the visual object can be displayed to the test person as a visual task, which can be in the form of a symbol such as an optotype, a number, a letter, a pictogram, an image or the like.
  • the visual object can be displayed in one or more colors.
  • the system can additionally have a transmission unit, which transmits the result of the refraction determination to a recipient system, such as a recipient system for ordering ophthalmic lenses, and/or to a recipient system for advising on ophthalmic lenses.
  • a device is used as the refraction unit that is at least capable of manipulating the curvature of the wave fronts and, if necessary, also a mean propagation direction and/or wavelength and/or intensity and/or state of polarization of the light emanating from a visual object (e.g. displayed on the display unit), and of guiding this light into at least one eye of the subject. For this purpose, optical corrections can be placed in front of the test person.
  • the refraction unit can also be able to directly generate suitable light for displaying visual objects on the retina of the subject (e.g. with the help of a light field display and/or a holographic display).
  • the display unit can be integrated in the refraction unit.
  • the refraction unit can be designed as an automated phoropter or as refraction glasses with at least one refractive and/or diffractive element (such as a lens and/or grating).
  • the refractive and/or diffractive element(s) can be configured adaptively, such as at least one deformable mirror and/or at least one deformable lens.
  • the refraction unit is designed as a phoropter unit and is used to present the subject with optical corrections as test lenses with different effects, i.e., for example, with different spheres, cylinders and/or axes.
  • the subject can be given, among other things, color filters (especially red and green) as well as polarizers for image separation (vertical and horizontal and/or left and right circular), gray filters (different intensities), apertures and prisms (specified by amount and base up/down/right/left or by amount and direction).
  • the different effects can traditionally be achieved using lenses with different spherical and cylindrical effects, whereby the axis position of the cylindrical lenses can be adjusted.
  • the different effects of the optical corrections can be realized by liquid lenses that can be filled variably, lenses with variable boundary surfaces, Alvarez lenses or other elements with variable optical effects, such as adaptive mirrors.
  • the latter have the advantage of not having any chromatic aberration.
  • the different effects can also be achieved with diffractive elements.
  • optical corrections mentioned as active elements can be combined with one another, for example as a combination of a liquid-filled lens with a rotatable Alvarez cylinder lens. If a light field display and/or a holographic display is used as the display unit, with which the necessary light fields can be generated directly, an additional unit with optical corrections (such as a phoropter) can be dispensed with.
  • Trial glasses as a refraction unit can either be operated manually, with the help of automated feedback, or be provided with the variable elements described above, which can be controlled by a computer and/or control unit.
  • the eye tracking unit is used to record the viewing direction and/or orientation of the respective eye.
  • the viewing direction depends on the orientation and vice versa. This can be done, for example, in at least one of the ways described below:
  • the eye tracking unit can capture an image of the pupil and/or the iris.
  • a lighting unit can be used for this purpose.
  • the eye position and/or viewing direction is derived from the position and/or the perspective distortion of the pupil and/or iris.
  • the eye tracking unit can capture an image of one or more Purkinje reflexes.
  • one or more preferably punctiform illumination units are used, which generate the Purkinje reflection or reflections.
  • the viewing direction and/or eye position and/or orientation of the eye is derived from the positions of the one or more Purkinje reflexes in the recorded image.
  • the eye tracking unit can capture an image of the pupil and/or the iris and one or more Purkinje reflexes.
  • one or more preferably punctiform illumination unit(s) are used, which generate the Purkinje reflection or reflections.
  • the eye position or viewing direction is derived from the positions of the one or more Purkinje reflexes and from the position and/or the perspective distortion of the pupil and/or the iris.
  • the relative position of the one or more Purkinje reflexes relative to the position of the pupil and/or the iris can also be evaluated.
  • the eye tracking unit can record an image of the edge of the cornea and/or the sclera, which can be used to determine a viewing direction of the eye. Distinctive points on the sclera, such as visible veins, can also be suitable for this.
  • the eye tracking unit can record a fundus image, i.e. an image of the back of the eye. This can be used to determine a direction of the eye's gaze, since the retina has a well-recognized pattern of blood vessels.
  • the eye tracking unit can take an image of a pattern reflected from the cornea, for example a pattern of multiple point light sources or extended light sources with known position. This can be used to determine a direction of gaze of the eye. For this purpose, a topographical model of the cornea can be created, with the help of which a reflex image dependent on the viewing direction can be calculated.
  • a coil (as a "coil” system) can be attached to the eye, e.g. using a contact lens without an optical effect with an integrated coil on the cornea.
  • the position and/or orientation of the coil can be determined by measuring the inductance and the eye position or viewing direction can be derived from this.
  • the viewing direction can be determined using images. From these images, a time course of the gaze can be determined, i.e. the eye movement. This can be done either live or by means of at least one video recording.
  • the recorded data determines an eye signal.
  • the eye signal can have at least one horizontal and/or one vertical component and be proportional to a component of the viewing direction in a linear approximation, eg to the horizontal and/or vertical component of the unit vector pointing in the viewing direction.
  • the eye signal can additionally or instead contain components with the help of which not only the viewing direction or instead of the viewing direction the orientation of the eye can be determined.
  • the eye signal may be only approximately linear with respect to the viewing direction or the orientation of the eye, so that a calibration can be carried out.
  • Such a calibration gives a calibrated eye signal which matches the viewing direction and/or the orientation of the eye better than the original signal.
  • the data required for calibration can be measured by setting the test person visual tasks in which he should look at specific points of a display field of the display unit (eg center of the display field, center of an edge of the display field, corner of the display field). If the display field of the display unit is used for calibration, the points that are to be fixed by the subject during the calibration can be displayed on the display field with suitable markings.
  • the marking can be designed in such a way that it can be recognized despite the ametropia. For example, it can have a particularly striking color (eg bright blue) or it can flash additionally or alternatively.
  • the signal corresponding to the viewing direction specified by the points is recorded during the calibration and a relationship between the recorded data and the eye position and/or viewing direction is derived therefrom. If there are indications of the ametropia, for example through a known previous refraction, known values of an existing visual aid and/or an autorefractometric measurement carried out beforehand, a corresponding correction can also be applied before the calibration. This can make the calibration easier for the subject and improve the quality of the calibration.
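  • As an illustration only (not part of the original description), the following Python sketch shows how such a calibration could be implemented under the assumption of a purely linear (affine) relationship between the raw eye signal and the viewing direction; the function names, calibration points and numerical values are hypothetical.

```python
import numpy as np

def calibrate_eye_signal(raw_signals, target_directions):
    """Fit an affine map from raw eye signals to known viewing directions.

    raw_signals:       (N, 2) array of horizontal/vertical eye-signal components
                       recorded while the subject fixates known calibration points.
    target_directions: (N, 2) array of the corresponding horizontal/vertical
                       components of the unit vector pointing at each point.
    Returns a function that converts a raw signal into a calibrated direction.
    """
    # Augment with a constant column so an offset (affine term) is fitted as well.
    design = np.hstack([raw_signals, np.ones((raw_signals.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(design, target_directions, rcond=None)

    def to_direction(signal):
        signal = np.asarray(signal, dtype=float)
        return np.append(signal, 1.0) @ coeffs

    return to_direction

# Hypothetical calibration points, e.g. centre, edge centres and a corner of the display.
raw = np.array([[0.0, 0.0], [0.9, 0.0], [-0.9, 0.0], [0.0, 0.8], [0.85, 0.75]])
targets = np.array([[0.0, 0.0], [0.3, 0.0], [-0.3, 0.0], [0.0, 0.2], [0.3, 0.2]])
to_dir = calibrate_eye_signal(raw, targets)
print(to_dir([0.45, 0.4]))  # calibrated horizontal/vertical direction components
```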
  • the eye tracking unit can either be designed as an independent device or integrated into the refraction unit and/or the display unit.
  • the display unit is an optional component.
  • the refractionist can use an independent display unit to carry out subjective refractions based on the test subject's feedback displayed by the system in a semi-automated process.
  • a display unit with a fixed content can be used, which contains all the necessary symbols (e.g. optotypes such as Landolt rings, Snellen E, letters and/or numbers) or images (e.g. a classic visual chart with differently sized symbols).
  • the display unit can have a display field on which the display unit can present test images to the subject.
  • the test person can be given visual tasks, the solution of which depends on the test person's eyesight with the optical correction currently being used.
  • the display unit can be implemented in the following designs, for example:
  • the display unit can be designed as a display, e.g. as a display in TFT, LED, LCD, OLED or similar technologies in which individual elements or pixels can be controlled individually.
  • separate screens can be used for each eye, for example to separate the visual impressions of both eyes or for a 3-dimensional display.
  • the display unit can be designed as a projection display, which has a projection unit and a screen. With the projection unit, an image can be projected onto this screen.
  • the projection unit can show individual predefined images, such as “slides”, or individual elements, eg parts of an optotype, or pixels can be controlled individually.
  • the display unit can be designed as a light field display and/or as a holographic display, which can display several visual objects such as symbols or optotypes simultaneously in the image field with different refractive effects. For example, three symbols with different spherical refractive powers, e.g. -0.5 dpt, 0 dpt and +0.5 dpt, can be displayed at the same time. Along with a refractive effect sph of the refraction unit, the subject would see three symbols with the effects sph - 0.5 dpt, sph, and sph + 0.5 dpt.
  • the refraction unit can be combined with a light field display and/or holographic display in order to carry out part of the effect change, such as an astigmatic effect, with this display. In this way, components in the refraction unit can be dispensed with.
  • such a display unit can be integrated in the refraction unit and/or at the same time fulfill the task of the refraction unit (cf. above).
  • the display unit can display single-color or multicolor visual objects as well as different polarizations, such as linear-vertical, linear-horizontal, linear-diagonal, right-circular and/or left-circular and/or brightness levels.
  • the display unit can either be stationary, e.g. set up or mounted (e.g. on a wall), or mobile, e.g. as a tablet. In particular for mobile versions, it makes sense to measure the distance between the subject and/or his at least one eye and the display unit. The measured distance can then be used for eye tracking and/or for determining the object distance during refraction, ie the correction and/or visual acuity determination. This distance can be detected, for example, using a distance sensor, depth camera, ultrasound, stereo camera, image processing, and/or pattern projection.
  • the test person can be given at least one visual task that can be solved by an eye movement. Examples of visual tasks and categories of visual tasks are described below.
  • Most visual tasks are based on a classification of a visual impression perceived by the test person, which is followed by a reaction from the test person.
  • the reaction can be intentional (hereinafter referred to as active feedback, e.g. active and conscious acoustic feedback) or unintentional (hereinafter referred to as passive feedback, e.g. unintentional eye movement towards an object seen).
  • the basis of the reaction is the ability to distinguish the visual impression from a reference that is either presented directly in the visual task, i.e. as an external reference, or that exists in the subject’s imagination, i.e. as an internal reference.
  • Direct determination includes visual tasks in which optotypes or images are presented in different forms of a property E1. For example, different sizes, different contrasts, different brightnesses, different colors, different refractive effects, e.g. in light field displays, etc. can be used as property E1.
  • the subject is instructed to assess whether he can still recognize, better recognize and/or differentiate (positive threshold) the property E1 of the visual objects to be tested, such as optotypes or images.
  • the subject can be instructed to assess the opposite statement, i.e. whether he can no longer recognize and/or differentiate the property E1 (negative threshold).
  • the test person can define the thresholds by means of a spatial or chronological ranking (e.g. a numbered position of the optotype on the display or a numbered time of the display of the optotype).
  • a first property E1 of the visual impression, e.g. the direction of the gap in one or more Landolt rings
  • the psychometric and/or physiological variable to be determined, e.g. the recognizability of the opening of the Landolt rings depending on their size
  • one or more visual objects such as optotypes are presented with the same expression of a second property E2 associated with the psychometric and/or physiological variable to be tested, such as size, contrast, brightness, color, but which differ in the first property E1.
  • the second property E2 associated with the psychometric and/or physiological variable to be tested is then varied and the visual task is repeated.
  • the psychometric and/or physiological threshold is finally determined by evaluating the frequency of a correct solution to the visual task, which changes parametrically with the varied property.
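  • As a hedged illustration of determining such a threshold from the frequency of correct solutions, the following sketch fits a logistic psychometric function to hypothetical proportion-correct data; the guess and lapse rates and all numerical values are assumptions, not values from the original description.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope, guess_rate=0.25, lapse_rate=0.02):
    """Logistic psychometric function: probability of a correct answer as a
    function of property E2 (e.g. optotype size), bounded by a guess rate
    (e.g. 0.25 for a four-alternative Landolt-ring task) and a lapse rate."""
    p = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess_rate + (1.0 - guess_rate - lapse_rate) * p

# Hypothetical data: optotype size (arbitrary units) vs. fraction of correct answers.
sizes = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 4.0])
fraction_correct = np.array([0.28, 0.35, 0.55, 0.80, 0.93, 0.99])

# Fit only threshold and slope; guess and lapse rates stay at their assumed defaults.
params, _ = curve_fit(lambda x, t, s: psychometric(x, t, s),
                      sizes, fraction_correct, p0=[2.0, 2.0])
print(f"estimated threshold: {params[0]:.2f}, slope: {params[1]:.2f}")
```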
  • visual tasks can be categorized by the type of feedback required (active or passive), by the type of reference (internal or external), and/or by the type of determination (direct or indirect).
  • line of sight is synonymous with the line of sight determined with the eye tracking unit (i.e. a possibly calibrated eye signal of the eye tracking unit).
  • the solution of a visual task can be detected with the help of eye tracking in that the displayed visual objects result in an unwanted or at least not directly consciously controlled eye movement, which can be resolved by the eye tracking unit either because an eye tracking unit with a sufficiently high resolution is used, or alternatively or additionally because the visual objects are arranged in such a way that a viewing direction directed at them can on average be resolved by the eye tracking unit.
  • a visual object can be displayed in different places on the display field and the viewing direction can be registered at the same time.
  • Using one or more thresholds that may depend on the resolution of the eye tracking unit, it can be determined whether the viewing direction matches the position of the displayed visual object.
  • a threshold such as 1°, 2°, 5°, 10°, 15° or 20° can be applied to the horizontal and/or vertical viewing direction
  • Fixation accuracy can also be measured, which would be less accurate in the absence of a perceived visual object than in the presence of the perceived visual object.
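  • A minimal sketch of the threshold comparison between the detected viewing direction and the position of a displayed visual object could look as follows; the 3D direction representation and the threshold value are assumptions for illustration only.

```python
import numpy as np

def gaze_matches_object(gaze_direction, object_direction, threshold_deg=5.0):
    """Return True if the (possibly calibrated) viewing direction points at the
    displayed visual object, i.e. the angle between both directions is below a
    threshold that may depend on the resolution of the eye tracking unit."""
    g = np.asarray(gaze_direction, dtype=float)
    o = np.asarray(object_direction, dtype=float)
    cos_angle = np.dot(g, o) / (np.linalg.norm(g) * np.linalg.norm(o))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle_deg <= threshold_deg

# Example: gaze and object directions as 3D vectors (display plane assumed at z = 1).
print(gaze_matches_object([0.02, 0.01, 1.0], [0.0, 0.0, 1.0], threshold_deg=2.0))
```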
  • the test person should communicate their solution to the visual task in a previously defined way with their eyes.
  • the detectability of the eye movement that takes place is ensured either by using an eye tracking unit with a sufficiently high resolution, or alternatively or additionally by choosing the previously defined type of eye movement or eye deflection in such a way that it can be resolved by the eye tracking unit.
  • the direction of the visual object perceived by the subject can be reported back by a corresponding line of sight, e.g. starting from a displayed symbol or from the center of the display, which is detected by the eye tracking unit. This is even possible with visual tasks in which the subject uses an internal reference.
  • the lines of sight can be shown to support the test person by colored and/or flashing markings on the display field, e.g. near their edge.
  • Two visual objects can be displayed which serve as a mutual reference for each other.
  • the subject can be instructed to fixate on the visual object with the stronger or weaker expression of a given property, e.g. the visual object that appears sharper to him.
  • Another possibility is to use a previously determined line of sight to communicate whether the symbols or images have the same expression of the property or not.
  • the subject can be instructed to fixate on the visual object that differs from the others.
  • More than two different visual objects can be displayed.
  • the subject can be instructed acoustically and/or with the help of a text that is presented in a legible manner (e.g. in a sufficiently large size) to fixate on a visual object that only occurs once.
  • the subject can also be instructed by the fact that the visual object to be fixated is additionally displayed enlarged and legibly, e.g. at an edge of the display field.
  • the subject can compare the displayed visual objects against the previously described and therefore internal reference, but the other visual objects also serve as an external reference.
  • One of the visual objects can be displayed twice, in particular once separately from the other visual objects, but not necessarily enlarged or particularly clearly recognizable here. The subject is instructed to find and fixate the separately presented visual object in the set of remaining visual objects.
  • a single visual object can be displayed that is to be classified by the subject, as well as several visual objects that are clearly recognizable for the subject, i.e., for example, are displayed enlarged and/or with increased contrast. The subject is instructed to fixate on the clearly recognizable visual object that corresponds or looks most similar to the visual object to be classified.
  • an arrangement of clearly recognizable visual objects with a clearly defined direction can be placed around a visual object to be classified in such a way that the direction defined by the visual object to be classified coincides with the direction towards the corresponding clearly recognizable visual object.
  • the described embodiments can all be used in the sense of indirect determination. If a property of the visual impression is to be determined directly, the solution to the visual task can be detected by eye tracking - similar to passive feedback - in that the eye movement of the test person can be resolved by the eye tracking unit, either because an eye tracking unit with a sufficiently high resolution is used and/or because the visual objects are arranged in such a way that their mean position can be resolved by the eye tracking unit.
  • the visual objects are arranged in such a way that the property of the visual impression to be determined changes monotonically over the position of the display, e.g. that the visual objects displayed become increasingly blurred to the right.
  • the change in the property of the visual impression can be carried out using the display unit, e.g. by displaying different contrasts or sizes of visual objects.
  • an effect set in the refraction unit can be changed depending on the viewing direction detected with the eye tracking unit and/or the direction of the currently fixed visual object.
  • Visual task Preferential Looking (passive feedback, internal reference, indirect determination)
  • In a preferential-looking visual task, the subject is presented with two fields. One of them is empty, the other shows a visual object. If the subject can see the visual object, he will prefer to look in its direction. If he cannot see it, he will not show a preference for either field. The subject does not necessarily have to be instructed to look at the visual object, which is why this method also works for subjects who cannot express themselves in any other way, or who cannot be assumed to be able to understand or follow the instructions, e.g. children.
  • a visual object such as a Landolt ring is shown as a test image and the subject is asked to look from the center of the display or the center of the visual object in the direction indicated by the orientation of this visual object, e.g. the opening of the Landolt ring, the tip of an arrow or something similar.
  • This visual task is a direct determination of the viewing direction with active feedback, whereby the subject uses an internal reference, e.g. an idea of Landolt rings with differently oriented openings.
  • Landolt rings with different opening directions, letters of the Latin or another alphabet, (known) symbols and/or abstract sketches (e.g. house, tree, car) can be used.
  • the task of determining which visual object is to be identified can be carried out acoustically. Alternatively or additionally, the task can also be set using an enlarged visual object that is used as an external reference.
  • a visual object is shown moving across the display panel.
  • the subject is asked to follow the visual object with their gaze. Based on the eye movement, it can be determined whether the subject actually perceives the visual object and can follow it if necessary.
  • a test image is shown, such as symbols on a solid color background.
  • a first area of the test image is red and a second area is green.
  • the areas can roughly divide the test image in half. The subject is asked to look at the area that appears sharper.
  • Visual task Symbols or images in different positions and with different refractive powers (active feedback, external reference, direct determination)
  • At least two visual objects are displayed at different positions on the display field that can be resolved by the eye tracking unit, e.g. sequentially or simultaneously. Each of these positions is linked to a different optical correction set by the refraction unit, i.e. the subject is optically corrected depending on the position.
  • the subject should look at the visual object that appears sharper and/or clearer and/or easier to read.
  • Visual task Symbols or images in different forms of a property (active feedback, internal reference, direct determination)
  • Visual objects are shown in different characteristics of a property, e.g. different sizes, different contrasts, different brightnesses, different colors, in the display field. Each expression can occur one or more times. Examples are groups such as rows of visual objects, each with the same expression of this property, with the expression of the property decreasing from group to group. The subject should then look at a visual object or the group of visual objects that he just recognizes.
  • the visual objects can be presented simultaneously with different optical corrections.
  • the different optical corrections can differ in particular with regard to sphere and/or cylinder power and/or cylinder axis.
  • Visual task moving symbol with changing characteristics of a property (active feedback, internal reference, direct determination)
  • a visual object is shown moving across the display panel.
  • the subject is asked to follow the visual object with their gaze.
  • the characteristics of a property are changed, e.g. size, contrast, brightness, color of the visual object.
  • Based on the eye movement it can be determined at what level the subject recognizes the visual object. Preference is given to starting with the form of best visibility, e.g. the largest available size, and changing the form in the direction of poor visibility, e.g. reducing it, until the subject no longer perceives the visual object and can therefore no longer follow it.
  • test person's feedback on visual tasks and/or on controlling the process for several consecutive visual tasks can be recorded, for example, during a refraction as described below.
  • the subject can be guided through the measurement process as follows:
  • this visual task can be solved and/or ended in one of the following ways, for example:
  • this visual object and/or this position is considered to be selected.
  • This period of time can depend on the degree of difficulty of the visual task, i.e. it can be increased, for example, if the visual task becomes more difficult in the course of the refraction, for example because the visual objects displayed become smaller. If several visual objects are displayed, a further criterion can be used in addition or as an alternative for calculating the period of time after which one of the visual objects is considered to be selected by the subject.
  • the fixation time can be specified as a multiple of the mean fixation time of the remaining symbols, e.g. as 1.5, 2, 3, 5, or 10 times the mean fixation time of the remaining visual objects, or as a relative proportion of the total fixation time, e.g. at least 80%, 60%, 40%, 20%, or 10% of the sum of the fixation times of all symbols.
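  • The selection criterion based on fixation times could, for example, be sketched as follows; the criterion values and the data structure are illustrative assumptions.

```python
def selected_object(fixation_times, factor=2.0, min_share=0.6):
    """Decide which visual object counts as selected by the subject.

    fixation_times: dict mapping object id -> accumulated fixation time in seconds.
    An object is considered selected if its fixation time is at least `factor`
    times the mean fixation time of the remaining objects, or if it accounts for
    at least `min_share` of the total fixation time. Both criteria and their
    values are illustrative assumptions taken from the ranges mentioned above.
    """
    total = sum(fixation_times.values())
    for obj, t in fixation_times.items():
        others = [v for k, v in fixation_times.items() if k != obj]
        mean_others = sum(others) / len(others) if others else 0.0
        if (mean_others and t >= factor * mean_others) or (total and t / total >= min_share):
            return obj
    return None  # no object fulfils the selection criterion yet

print(selected_object({"A": 6.5, "B": 1.0, "C": 0.8, "D": 1.2}))  # -> "A"
```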
  • Blink: The subject first looks at a visual object and/or a position for a specified time, e.g. at least 1, 2, 5 or 10 seconds, and consciously blinks, i.e. he closes one or both eyes for a specified time, e.g. 0.5, 1 or 2 seconds.
  • Confirmation field: The subject first looks at a visual object and/or a position for a specified period of time, e.g. at least 1, 2, 5 or 10 seconds, and then immediately at an action field that is displayed on or next to the display field.
  • Button, knob, pedal and/or the like on the refraction unit or on a separate operating unit that the subject can reach or hold, which the subject triggers with a hand or foot while looking at the selected visual object and/or the selected position.
  • This type of confirmation can equally be done by swiping over a touch-sensitive surface or other tactile input methods attached to the refraction unit or located on a separate control unit held by the subject.
  • gesture recognition using cameras and/or depth cameras, e.g. recognizing the gestures of one of the test subject's hands, or other sensors for determining the position or orientation of hands and/or feet, e.g. using distance sensors, inclination sensors, etc.
  • the subject gives an acoustic signal while looking at the selected visual object and/or the selected position. Sounds that do not require the subject to open their mouth, such as “Mmmm” or something similar, are particularly advantageous as a signal.
  • In the case of at least partially automated sequences of the measurement process, it can be useful for the test person to be able to give the system control messages in addition to the feedback for solving the visual tasks, i.e. so-called test person communication takes place.
  • the course of the measurement process can thus be at least partially controlled or at least influenced by the subject. This can be done with control messages such as "Back", ie repetition of the last visual task, "Pause” or "Cancel".
  • Control fields that correspond to the corresponding control commands are preferably placed on the right or bottom edge of the display field.
  • the subject can trigger this as described above, for example by fixating for a predetermined period of time. Since the subject has not yet been completely corrected during the measurement and/or refraction process, these control fields should be large enough and clearly recognizable. They can differ in color, for example green: “Confirmation”, yellow: “Back”, blue: “Pause”, red: “Cancel”.
  • Button, knob, pedal or gesture: There is a separate button, knob or pedal for each possible control message, e.g. “Confirmation”, “Back”, “Pause” and “Cancel”, which the subject can trigger with a hand or a foot. Alternatively, a separate gesture can be used for each possible control message.
  • Acoustic control message: The subject gives an acoustic signal, which is received and interpreted by the system, e.g. “Mhmm”, “Ok” and/or “Confirmation” for “Confirmation”; “Back” for “Back”; “Pause” for “Pause”; “Cancel” for “Cancel”. If such an acoustic control message is used, the system can receive and interpret it accordingly.
  • the system can communicate with the test subject, i.e. what is known as system communication takes place.
  • the system can at least explain the process and the individual visual tasks to the test person, prompt him to solve the visual tasks, and/or possibly give status feedback on the progress of the measuring process.
  • the system communication can take place acoustically or visually by means of a voice output or the display of corresponding texts.
  • a chatbot and/or agent (as “artificial intelligence”) can be used as a means of system communication, with which the subject can communicate acoustically and/or in text form.
  • a human such as a refractionist can be connected to the system (e.g. via a long-distance communication line) and thus communicate with the subject.
  • displays can be used on the display unit used for the refraction and/or on an additional display unit. It should be noted that the subject is not yet completely corrected.
  • an existing (e.g. previously determined) correction can therefore be used and/or a sufficient correction can be made by the refraction unit.
  • the latter can be recorded directly by the system, for example by an integrated aberrometric measuring unit.
  • the best subjective refraction can be determined as described in the prior art, see, for example, D. Methling: Determination of visual aids, 2nd edition, Gustav Enke Verlag, Stuttgart (1996).
  • determined feedback is used, which includes detecting the line of sight and/or orientation.
  • the subjective refraction can take place on a device that can be remote-controlled or controlled by an algorithm, which can depend on the feedback from the subject described above and/or the subject's communication.
  • initial refraction values may be determined first.
  • an objective refraction can first be determined for a subject, i.e. refraction values are determined on the basis of an objective measurement.
  • the objective parameters can include aberrometric measurement data and/or pupillometric measurement data.
  • the objectively determined measurement data, i.e. the aberrometric measurement data and/or the pupillometric measurement data, can be used to calculate an objectively optimized refraction.
  • the objectively determined refraction values can be used, for example, fogged by a predetermined amount, e.g. by an additional sphere of 0.50 to 1.00 dpt. These fogged refraction values can be used as starting refraction values.
  • the refraction values of an already existing optical correction can be used as starting refraction values, e.g. the refraction values of older glasses.
  • any refraction values and thus any available optical effects can be used as start refraction values.
  • When determining the subject's subjective refraction, the following four main method steps a) to d) are carried out, optionally supplemented by method step e).
  • For a monocular determination of the subjective refraction, alternatively only method step a) or b) can be carried out, optionally supplemented by method step e): a) monocular determination of the most positive spherical-cylindrical refraction which subjectively produces the best visual acuity for a first eye of the subject, for example for the right eye; b) monocular determination of the most positive spherical-cylindrical refraction which subjectively produces the best visual acuity for a second eye of the subject, e.g. for the left eye; c) setting a binocular, accommodative balance; d) binocular determination of the most positive spherical-cylindrical refraction which subjectively produces the best visual acuity for both eyes of the subject.
  • a change in the strength of the spherical correction can preferably take place if the addition of a sphere of 0.25 dpt subjectively improves the visual impression and/or optionally improves the visus, i.e. the visual acuity, by one line, i.e. causes a change of -0.1 logMAR.
  • This condition and/or a subjective improvement in the visual impression can be used throughout as a change criterion in the addition of a negative sphere.
  • a line of visual acuity can be regarded as achieved if the subject can recognize at least 60% of the displayed optotypes of this line.
  • a strength of the required sphere correction of the subjective refraction can first be determined.
  • the second eye can be covered, e.g. the left eye.
  • the measured starting refraction for the right eye is presented to the first eye.
  • a first positive lens can be added to the starting refraction for the first eye.
  • the visual acuity can then be measured again and/or checked subjectively. If the respective change criterion is reached, another positive lens will be added again until the applicable change criterion is no longer reached. If the applicable change criterion is no longer met, the refractionist can switch on a negative lens instead. If the change criterion is reached, another negative lens can be added until the change criterion is no longer reached.
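  • A simplified, non-authoritative sketch of this monocular sphere search could look as follows; the feedback function `improves_vision` is a placeholder for the subject's (e.g. eye-movement based) feedback, and the simulated subject at the end is purely illustrative.

```python
def determine_sphere(start_sphere, improves_vision, step=0.25, max_steps=20):
    """Monocular sphere search: add plus lenses while the change criterion is met,
    then try minus lenses in the same way. `improves_vision(delta, sphere)` stands
    in for the subject's feedback and must return True if changing the current
    sphere by `delta` satisfies the applicable change criterion."""
    sphere = start_sphere
    steps = 0
    while steps < max_steps and improves_vision(+step, sphere):
        sphere += step          # another positive lens is switched in
        steps += 1
    steps = 0
    while steps < max_steps and improves_vision(-step, sphere):
        sphere -= step          # another negative lens is switched in
        steps += 1
    return sphere

# Purely illustrative "subject": reports an improvement whenever the new sphere
# is closer to a (hidden) true refraction of -1.25 dpt.
true_sphere = -1.25
feedback = lambda delta, s: abs(s + delta - true_sphere) < abs(s - true_sphere)
print(determine_sphere(0.0, feedback))  # converges to -1.25
```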
  • a refractionist can use a cross cylinder 1 in order to determine an axis of any astigmatism that may be present in the first eye.
  • Figures 1A and 1B show such a cross cylinder 1 in a schematic representation, which is also known under the name “Jackson cross cylinder”.
  • the cross cylinder 1 has a handle 5 through which a handle axis 2 runs.
  • the cross cylinder 1 is an optical aid and has two cylinders crossed at 90°, namely a plus cylinder and a minus cylinder.
  • the handle axis 2 is arranged at 45° to a cylinder axis 3 of the plus cylinder and at 45° to a cylinder axis 4 of the minus cylinder.
  • the subject can be shown optotypes that indicate a worst visual acuity of at least 0.2 logMAR.
  • the handle axis 2 of the cross cylinder 1 can be arranged on a presumed and/or the objectively determined axis of an astigmatism of the first eye of the subject. Subsequently, the cross cylinder 1 can be flipped between the two positions shown in FIGS. 1A and 1B, with the handle axis 2 remaining in the same position. The subject can be asked which of these two rotational positions of the cross cylinder 1 produces a better visual impression. If the subject does not notice any difference, the axis for the refraction of the first eye has been found and the required cylinder power is determined, see below.
  • the handle axis 2 can be shifted clockwise precisely when the cylinder axis 4 of the minus cylinder is in the preferred rotational position clockwise from the handle axis 2, cf. the situation in Fig. 1B.
  • the handle axis 2 can be shifted counterclockwise precisely when the cylinder axis 4 of the minus cylinder is in the preferred rotational position counterclockwise from the handle axis 2, cf. the situation in Fig. 1B.
  • the axis can be selected from these last used axis positions which most closely coincides with an older axis, i.e. for example with an axis for this first eye that was used in an older pair of glasses of the subject.
  • the axis that is closer to a non-oblique astigmatism can be selected from these last-used axis positions.
  • the cross cylinder 1 can be arranged such that its cylinder axis 3 of the plus cylinder and its cylinder axis 4 of the minus cylinder are arranged exactly on the corresponding cylinder axes of the objectively determined refraction, which is already presented to the first eye of the subject.
  • the cross cylinder 1 can be flipped in the same way as in the determination of the axis position described above, i.e. with the handle axis 2 in the correct position. If the rotational position preferred by the subject is the one in which the minus cylinder axes overlap, a negative cylinder strength can be added, e.g. -0.25 dpt.
  • if the rotational position preferred by the subject is the one in which the cylinder axis 3 of the plus cylinder of the cross cylinder 1 overlaps the cylinder axis of the minus cylinder of the presented refraction, a negative cylinder power can be removed, also e.g. in steps of quarter dioptres.
  • the refractionist can repeat this until the subject no longer prefers any of the rotational positions, or until the strength of the cylinder correction changes back and forth. In the latter case, the lowest amount of cylinder correction used should be selected.
  • When determining the strength of the required cylinder correction, it can be ensured that the previously determined strength of the required sphere correction effectively remains the same. This means that, for example, for every change in the strength of the cylinder correction by 0.50 dpt, the strength of the sphere correction is also changed by 0.25 dpt in the other direction, as illustrated in the sketch below. After determining both the strength and the axis of the required cylinder correction, the sphere correction can be checked again. To do this, the same procedure can be followed as described above in connection with the determination of the strength of the required sphere correction. Optionally, in the event that the strength changes significantly, the axis determination can be repeated in order to achieve a more reliable result.
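  • The compensation rule mentioned above (keeping the spherical equivalent constant) can be sketched as follows; function and variable names are illustrative.

```python
def adjust_cylinder(sphere, cylinder, delta_cylinder):
    """Change the cylinder power while keeping the previously determined sphere
    correction effectively constant: for every 0.50 dpt change of the cylinder,
    the sphere is changed by 0.25 dpt in the opposite direction (i.e. the
    spherical equivalent sphere + cylinder/2 is preserved)."""
    new_cylinder = cylinder + delta_cylinder
    new_sphere = sphere - delta_cylinder / 2.0
    return new_sphere, new_cylinder

# Example: adding -0.50 dpt of cylinder raises the sphere by +0.25 dpt.
print(adjust_cylinder(-1.25, -0.75, -0.50))  # -> (-1.0, -1.25)
```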
  • the refraction for the second eye is then determined, ie method step b). This takes place exactly analogously to method step a), only for the second eye and with the first eye covered.
  • the monocular subjective refraction for the second eye is determined, which is made up of a specific strength of a required sphere correction and a specific strength and axis of a required cylinder correction.
  • the binocular, accommodative equilibrium is adjusted. Both eyes are revealed.
  • a fogging can be added to the two monocular subjective refractions for the first and second eye, e.g. a fogging of +0.50 dpt each.
  • a separator can also be used, for example a polarization filter and/or a red/green filter.
  • One goal can be that both eyes enter the same state of accommodation.
  • the separator can allow the subject to see different display parts of a target display with each of his eyes and to compare their sharpness with one another. For this purpose, in addition to the parts of the display that can only be seen by one eye at a time, a common display part of the target display must be visible to both eyes.
  • In method step d), i.e. for the binocular determination of the most positive spherical-cylindrical refraction in which subjectively the best visual acuity is achieved for both eyes of the subject, the separator is first removed. An optional visual acuity determination can then be carried out.
  • For the binocular determination of the strength of the required sphere correction, the same procedure can be followed as in method steps a) and b), only here for both eyes at the same time.
  • the two already determined monocular cylindrical refractions can remain unchanged.
  • the result can be used as the binocular subjective refraction.
  • the binocular subjective refraction can be determined as the result of the supplementing step e).
  • Method step e) follows, in which a subjective evaluation of the refraction values obtained according to method step d) takes place in a test frame.
  • the refraction values determined in method step d) are placed in a test frame and adjusted to the subject's face.
  • the subject's pupils can be centered in the middle of test lenses with these refraction values.
  • a review of the trial lenses can be done in an open, outdoor environment with the subject fixating on a distant target.
  • the refractionist can add a binocular sphere power of +0.25 dpt and ask the subject whether the visual impression appears better or remains the same with or without this addition.
  • If the visual impression appears better or remains the same as a result of this addition, the refraction values determined in method step d), supplemented by this binocular addition of +0.25 dpt in the sphere, are used as the finally determined subjective refraction values.
  • the sphere strength can be changed binocularly by -0.25 dpt and the subject can be asked whether the visual impression appears better with or without this reduction of a quarter dioptre. If this negative change leads to a better visual impression, the sphere strength can be reduced binocularly by a further -0.25 dpt. The subject can then be asked whether the visual impression appears better with a change of -0.25 dpt or of -0.50 dpt.
  • the refraction values determined in method step d) are used as the finally determined subjective refraction values, which are supplemented by the binocular change by -0.25 dpt in the sphere. If the change of -0.50 dpt leads to a better visual impression, the refraction values determined in method step d) are used as the finally determined subjective refraction values, which are supplemented by the binocular change of -0.50 dpt in the sphere.
  • the visual acuity and/or a visual sensitivity can optionally be measured.
  • Visual acuity can be measured with a cylindrical correction deviating from the determined subjective refraction.
  • Jackson cross cylinders can be used for this, e.g. with plus/minus 0.50 D, or a plus cylinder with e.g. +1.00 D compared to the determined subjective refraction.
  • the extent of the deviation of the correction from the optimal correction can also depend on the level of the addition or be based roughly on the maximum values of the unwanted astigmatism expected with a progressive lens.
  • the visual acuity of a subject with a lower addition would therefore be measured with less spherical or cylindrical fogging than the visual acuity of a subject with a higher addition.
  • Visual acuity can be determined using optotypes, with visual acuity being considered achieved when at least 60% of the optotypes of an associated line have been recognized.
  • the visual acuity can be measured during and/or after the method step(s) a), b), c) and/or d) and stored and/or written down for a subsequent calculation of the sensitivity. For example, at least two binocular vision values can be determined during method step c) and/or d). One or more monocular vision values can be determined during method steps a) and/or b).
  • the vision measurement can be used as a control of the determined subjective refraction. This means that subjects may or may not achieve a given visual acuity.
  • the visual acuity measurement can also be used to obtain information about the behavior of the subject's visual system.
  • the visual acuity can be measured, for example, using optotypes, i.e. using letters, Landolt rings and/or the like. It can be checked whether the subject can fully or partially recognize the optotypes and/or their orientation.
  • a psychophysical assessment of the visual acuity can be carried out.
  • Such a psychophysical evaluation can be based on displaying a sequence of optotypes of different sharpness. This sequence can be changed depending on the subject's responses.
  • the aim of the assessment may be to converge the sequence of optotypes towards the subject's visual acuity, which is used as a threshold for the assessment.
  • the variations of the sequences can be altered depending on the subject's responses and depending on the particular method used.
  • the subject can be asked for the smallest line of optotypes which he can recognize. Depending on the outcome, it can then be checked whether the subject can actually recognize the selected line and/or a smaller one.
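  • A very simplified sketch of such a response-dependent sequence of optotypes (an up-down staircase) is given below; the step rule, the number of presentations and the threshold estimate are illustrative assumptions and not the specific method of the description.

```python
def staircase(sizes, subject_answers, start_index=0):
    """Adapt the presented optotype size to the subject's answers: a correct
    answer makes the next optotype smaller, a wrong answer makes it larger, so
    the sequence converges towards the subject's acuity threshold.
    `subject_answers(size)` stands in for the subject's (eye-movement or
    acoustic) feedback and returns True if the optotype was recognised."""
    index = start_index
    history = []
    for _ in range(15):                     # fixed number of presentations
        size = sizes[index]
        correct = subject_answers(size)
        history.append(size)
        if correct and index < len(sizes) - 1:
            index += 1                      # present a smaller / harder optotype
        elif not correct and index > 0:
            index -= 1                      # present a larger / easier optotype
    # Rough threshold estimate: mean of the last few presented sizes.
    return sum(history[-6:]) / 6.0

sizes = [4.0, 3.2, 2.5, 2.0, 1.6, 1.25, 1.0, 0.8]   # decreasing optotype sizes
print(staircase(sizes, lambda s: s >= 1.6))          # oscillates around 1.25-1.6
```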
  • the visual acuity can generally be determined monocularly and/or binocularly. These are different visual parameters that can all be used to calculate eyeglass lenses.
  • the methods described above for determining optometric parameters can be used to check for contrast sensitivity.
  • the refractionist is able to intervene in the execution of the algorithm at any time or at selected method steps.
  • such an intervention and/or input can take place during binocular adjustment of the addition and/or determination of a cross cylinder.
  • visual acuity values can also be recorded for one or more optical corrections or without optical correction.
  • the feedback described above can be used instead of the usual acoustic feedback from the subject.
  • the visual acuity can be determined with the best correction, but corresponding visual acuity values can also be determined with less than optimal correction. This makes it possible to detect the effect of a varying optical power on visual acuity, such as occurs in progressive lenses (e.g. caused by unwanted astigmatism or an imperfectly matching spherical power), which can then be used in the calculation of ophthalmic lenses.
  • the visual acuity can be inferred directly from the smallest recognized visual object and the distance of the display field.
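  • Assuming the conventional definition of visual acuity as the reciprocal of the minimum angle of resolution in arc minutes, the relationship between the smallest recognized detail, the display distance and the visual acuity could be sketched as follows; the numerical example is illustrative.

```python
import math

def acuity_from_optotype(detail_size_mm, distance_mm):
    """Infer visual acuity from the smallest recognised visual object and the
    distance of the display field. `detail_size_mm` is the size of the critical
    detail (e.g. the gap of a Landolt ring, conventionally 1/5 of the optotype
    height). Returns (decimal acuity, logMAR)."""
    # Angle subtended by the critical detail, in minutes of arc.
    mar_arcmin = math.degrees(math.atan2(detail_size_mm, distance_mm)) * 60.0
    decimal_acuity = 1.0 / mar_arcmin
    logmar = math.log10(mar_arcmin)
    return decimal_acuity, logmar

# Example: a 1.75 mm gap recognised at 6 m corresponds roughly to decimal acuity 1.0.
print(acuity_from_optotype(1.75, 6000.0))
```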
  • the sensitivity of visual acuity can be determined, which is based on this visual acuity determination.
  • the visual acuity can be measured with at least two preceding refractions, e.g. zero refractions.
  • the visual acuity is preferably determined with the best optical correction, i.e. with the best subjective or an optimized refraction.
  • the visual acuity can be determined with a correction that deviates from the best optical correction, preferably with a plus correction, since a minus correction can possibly be compensated for by the subject through accommodation, particularly preferably in the range of 0.50 dpt to 1.25 dpt.
  • the manual and the at least partially automatic subjective refraction can benefit from the knowledge of additional data such as objective refraction values, data from an older prescription, and/or values of a possibly existing correction device (e.g. glasses or contact lenses). Therefore, the system can be coupled to and/or combined with at least one of the following units.
  • Measurement unit for objective data of the eye: In order to carry out an at least partially automated subjective refraction, objective measurement data of the eye, such as objective refraction data or aberrometric data, can be accessed, which can be determined by a measurement unit for objective data of the eye.
  • an external auto refractometer and/or aberrometer can be connected to the system, or an auto refractometer or aberrometry unit can be integrated into the refraction unit.
  • the measurement results of this measurement unit can then be used, for example, as starting values for manual or at least partially automated subjective refraction.
  • further objective measurement data can be recorded, such as aberrometric measurement data for one or two illumination states, pupillometric measurement data for both illumination states, possibly aberrometric and/or pupillometric measurement data for a second distance.
  • Measurement unit for measuring an already existing correction means: Values of a possibly already existing correction means, such as glasses or contact lenses, can be taken into account. Since the subject often does not know the correction values of the existing correction means, a measuring unit can measure the correction means that may already be present and determine its correction values, e.g. a focimeter with a single measuring point or a wavefront measuring device for the full-surface analysis of a lens, possibly with glass type and measuring point recognition.
  • results of the measurement can then be used, for example, as starting values for manual or at least partially automated subjective refraction.
  • the measurement unit can be used to measure a correction device that may already be present, such as a lensmeter, independently of the use of a measurement unit described above for measuring objective data of the eye, or in combination with this.
  • prescription values of a prescription that may already exist can be taken into account.
  • the prescription values can be entered manually or made available digitally by another software system. This means that these prescription values can be imported from industry software.
  • the prescription values can be used, for example, as starting values for the manual or at least partially automated subjective refraction.
  • measurement data determined by the system, such as the determined refraction and/or other visual parameters, e.g. recognition or differentiation thresholds, monocular and/or binocular visual acuity with the best correction, visual acuity with spherical and/or astigmatic fogging, etc., can be transmitted to the industry software by means of a transmission unit.
  • the measurement data determined by the system can be used, for example, for ordering, production and/or for advice on ophthalmic glasses and/or other optical correction devices.
  • Figure 2 shows a schematic representation of a first embodiment of a system for determining optometric parameters of a subject 10 in a first display state.
  • the system has a refraction unit 14 for setting an optical correction for at least one eye 12 of the subject 10 .
  • the refraction unit 14 can, for example, be designed as a phoropter and/or have a phoropter.
  • the system also has an eye tracking unit 16, which can be arranged, for example, so as to detect the viewing direction R of the at least one eye 12 while the subject 10 views a displayed test image.
  • This test image can be displayed on a display unit 24 and can include multiple optotypes 26 and 28 as visual objects.
  • the optotypes 26 and 28 can be displayed in test image areas, e.g. one optotype 26, 28 in each test image area.
  • Each optotype 26, 28 can be displayed with an associated optical correction and/or applied refraction.
  • the test person 10 looks with his eye 12 through the refraction unit 14 along the viewing direction R at the viewed optotype 26, which is shown in FIG. 2 as an “A”.
  • the eye tracking unit 16 can detect the line of sight R.
  • Using the viewing direction R recorded in this way, it can be checked that the subject 10 is looking at the viewed optotype 26 and not at one of the non-viewed optotypes 28, which are shown in FIG. 2 as “B”, “C” and “D”. It can thus be distinguished which of the optotypes 26, 28 the subject is looking at.
  • the system can further include a control unit 18 which can include a controller 20 of the refraction unit 14 and/or the display unit 24 .
  • the control unit 18 can also be designed and/or configured to read out and/or receive the viewing direction R detected by the eye tracking unit 16 .
  • the system can also include a trigger 22, which can be embodied, for example, on the control unit 18, for example as a button.
  • the control unit 18 can be designed and/or configured to read and/or receive signals generated by the trigger 22 .
  • the control unit 18 can be designed and/or configured to evaluate signals generated by the refraction unit 14 and/or the display unit 24 and/or the eye tracking unit 16 and/or the trigger 22 .
  • the control unit 18 can also be embodied as a signal unit that creates an eye signal that contains information about the recorded viewing direction R and/or orientation of the at least one eye 12 of the subject 10 .
  • Control unit 18 can be embodied as an evaluation unit which determines the optometric parameters of subject 10 while evaluating the eye signal as a function of the test image displayed.
  • FIG. 3 shows a schematic representation of a second embodiment of a system for determining optometric parameters of a subject 10 in a second display state.
  • This system is designed similarly or identically to the system shown in FIG. 2, with the same reference symbols identifying the same or similar features.
  • the control unit 18 can be designed and/or configured to additionally evaluate signals generated by a display unit 30.
  • the controller 20 can include a controller for the display unit 30.
  • the display unit 30 of this system can be identical to the display unit 24 of the system shown in FIG. 2.
  • the system has the display unit 30 on which at least one confirmation field 32 and/or at least one cancellation field 34 can be displayed.
  • Such confirmation and/or cancellation fields 32 and 34 can also be additionally displayed and/or provided in addition to the optotypes 26, 28 shown in FIG. 2.
  • the confirmation field 32 and/or the cancellation field 34 can be designed as an operating field by means of which the subject 10 can give feedback to the system.
  • the subject 10 can be asked whether his viewing direction R was recorded correctly. This can be done via an audio signal or, e.g., via a corresponding display on the display unit 30 and/or 24. If the viewing direction R was recorded correctly, e.g. if the subject 10 has indeed just looked at the optotype 26 (e.g. "A"), the subject 10 can fixate the confirmation field 32. If the viewing direction R was not recorded correctly, the subject 10 can fixate one of the cancellation fields 34. Fixation of the confirmation and/or cancellation field 32, 34 can be detected by the eye tracking unit 16 and evaluated by the control unit 18; a minimal dwell-time sketch of such gaze-based feedback is given below.
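  • The following Python sketch shows, as an assumption rather than the disclosed algorithm, one common way such gaze-based feedback could be evaluated: the subject's answer is taken to be the field that has been fixated continuously for a minimum dwell time. The function and parameter names as well as the one-second dwell time are illustrative; returning "confirm" would correspond to the subject fixating the confirmation field 32, and "cancel" to fixating a cancellation field 34:

    def dwell_selection(gaze_samples, fields, dwell_time_s=1.0, sample_rate_hz=60):
        """gaze_samples: iterable of (x, y) display coordinates over time.
        fields: dict mapping a field name (e.g. "confirm", "cancel") to a
        predicate that returns True if a gaze point lies inside that field."""
        needed = int(dwell_time_s * sample_rate_hz)
        counts = {name: 0 for name in fields}
        for x, y in gaze_samples:
            for name, contains in fields.items():
                if contains(x, y):
                    counts[name] += 1
                    if counts[name] >= needed:
                        return name      # field fixated long enough -> answer
                else:
                    counts[name] = 0     # fixation interrupted -> reset dwell
        return None                      # no field selected (yet)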
  • FIG. 4 shows a schematic representation of a light field display 36 of a system for determining optometric parameters of a subject.
  • the light field display 36 can be used as a refraction unit and/or a display unit.
  • the system includes an eye tracking unit 16, which may, for example, be integrated into and/or connected to the light field display 36. Similar to or identical to the embodiments shown in FIGS. 2 and 3, the system has a control unit 18 which can be designed to control the light field display 36 and/or the eye tracking unit 16.
  • the control unit 18 can be designed and/or configured to detect and/or evaluate signals generated by the light field display 36 and/or the eye tracking unit 16 .
  • a test image is displayed on the light field display 36, which is used to determine the subjective refraction of the subject 10 and has a plurality of test image areas.
  • at least one optotype 38, 40 can be displayed in each test image area; the optotypes are projected in rows, with the same optical correction and/or effect within each row.
  • an associated optical correction for the at least one eye 12 of the subject 10 can be simulated in such a way that the impression is created that the at least one eye 12 views the respective optotype 38, 40 through the respectively associated optical correction.
  • at least two of the simulated, assigned optical corrections can differ from one another with regard to their optical effect.
  • the test image areas are displayed simultaneously with the assigned optical corrections.
  • the optotypes 38, 40 of each row can be projected with the same optical correction (within the row).
  • the optical corrections with which the optotypes 38, 40 of the individual rows are projected differ from one another, e.g. in the defocus component used.
  • the mean sphere that is subjectively required can be determined by means of these different defocus components.
  • the optical corrections of the individual rows may differ from one another in terms of sphere power, cylinder power and/or axis.
  • the optical corrections can deviate from one another by a fixed or variable amount in the sphere and/or in the cylinder, e.g. by a fraction of a diopter.
  • the corrections of each row can be projected rotated about a specific cylinder axis, e.g. rotated by 45° from row to row.
  • the subjectively best optical correction can be selected with greater certainty, since several optotypes 38, 40 are available for each optical correction, namely a whole row of optotypes with the same optical correction.
  • the subject 10 can thus select the row that is projected with the optical correction that is subjectively best for the subject 10; a sketch of how such a per-row defocus series could be generated and evaluated is given below. In this way, a more reliable determination of the visual acuity for each correction used is possible, if desired.
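  • The following Python sketch illustrates, under stated assumptions, one possible realisation of the row layout described for FIG. 4: each row is assigned a different defocus component, and the row selected by the subject (e.g. via gaze) yields the subjectively required mean sphere. The function names, the number of rows and the 0.25 dpt step are illustrative assumptions:

    import numpy as np

    def defocus_series(centre_m, step=0.25, n_rows=5):
        """Mean-sphere (defocus) values M assigned to the rows, centred on a
        starting value such as an objective refraction or an imported prescription."""
        offsets = (np.arange(n_rows) - n_rows // 2) * step
        return centre_m + offsets                # one defocus value per row

    def subjectively_required_sphere(selected_row, row_defocus):
        """The row fixated/selected by the subject determines the mean sphere."""
        return row_defocus[selected_row]

    rows = defocus_series(centre_m=-1.5)         # [-2.0, -1.75, -1.5, -1.25, -1.0]
    best_m = subjectively_required_sphere(selected_row=3, row_defocus=rows)  # -1.25 dpt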
  • FIG. 5 shows a schematic representation of a light field display 36 of a system for determining optometric parameters of a subject.
  • the same reference symbols identify the same or similar features.
  • optotypes 38, 40 are displayed in rows and columns, which can be projected with different optical corrections.
  • the optotypes 38, 40 can be projected with the same astigmatism component J0 of the optical correction within each row (or alternatively column).
  • the astigmatism components J0 of the optical correction of the individual rows (or alternatively columns) differ from one another.
  • within each column (or alternatively row), the optotypes 38, 40 are projected with the same astigmatism component J45 of the optical correction.
  • the astigmatism components J45 of the optical correction of the individual columns (or alternatively rows) differ from one another.
  • the defocus component and thus the mean sphere can first be determined.
  • the astigmatism components J0 and J45 can then be determined, e.g. with the approach shown schematically in FIG. 5. Together, this results in the sphere, cylinder and axis of the subjectively required optical correction; a sketch of this conversion from the power-vector components to sphere, cylinder and axis is given below.
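  • The following Python sketch shows this final combination step as a worked example using the standard power-vector relations (M = S + C/2, J0 = -(C/2)·cos 2α, J45 = -(C/2)·sin 2α); it illustrates the well-known conversion and is not a quotation of the patented method:

    import math

    def power_vector_to_sca(m, j0, j45):
        """Return (sphere, cylinder, axis_deg) in minus-cylinder notation from
        the power-vector components (M, J0, J45), all powers in dioptres."""
        cylinder = -2.0 * math.hypot(j0, j45)    # C <= 0 in minus-cylinder form
        sphere = m - cylinder / 2.0              # since M = S + C/2
        if cylinder == 0.0:
            axis = 0.0                           # axis undefined without a cylinder
        else:
            axis = 0.5 * math.degrees(math.atan2(j45, j0))
            if axis < 0.0:
                axis += 180.0                    # map the axis into [0°, 180°)
        return sphere, cylinder, axis

    # Example: M = -1.5 dpt, J0 = +0.25 dpt, J45 = 0 dpt
    # -> sphere = -1.25 dpt, cylinder = -0.5 dpt, axis = 0°
    print(power_vector_to_sca(-1.5, 0.25, 0.0))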

Abstract

The invention relates to a method for determining optometric parameters of a subject (10), in which an optical correction for at least one eye (12) of the subject (10) is set by means of a refraction unit (14; 36). A test image for determining the subjective refraction of the subject (10) is displayed. A viewing direction (R) and/or an orientation of the at least one eye (12) of the subject (10) is detected by means of an eye tracking unit (16) while the subject (10) views the displayed test image. An eye signal is generated which contains information about the detected viewing direction (R) and/or orientation of the at least one eye (12) of the subject (10). The optometric parameters of the subject (10) are determined by evaluating the eye signal as a function of the displayed test image.
PCT/EP2022/056354 2021-03-12 2022-03-11 Procédé, système et produit-programme informatique destinés à déterminer des paramètres optométriques WO2022189640A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22714166.0A EP4304448A1 (fr) 2021-03-12 2022-03-11 Procédé, système et produit-programme informatique destinés à déterminer des paramètres optométriques
IL305329A IL305329A (en) 2021-03-12 2022-03-11 Method, system and computer program product for determining optometric parameters

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021202451.3 2021-03-12
DE102021202451.3A DE102021202451A1 (de) 2021-03-12 2021-03-12 Verfahren, System und Computerprogrammprodukt zur Bestimmung optometrischer Parameter

Publications (1)

Publication Number Publication Date
WO2022189640A1 true WO2022189640A1 (fr) 2022-09-15

Family

ID=81074291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/056354 WO2022189640A1 (fr) 2021-03-12 2022-03-11 Procédé, système et produit-programme informatique destinés à déterminer des paramètres optométriques

Country Status (4)

Country Link
EP (1) EP4304448A1 (fr)
DE (1) DE102021202451A1 (fr)
IL (1) IL305329A (fr)
WO (1) WO2022189640A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011120973A1 (de) 2011-12-13 2013-06-13 Rodenstock Gmbh Universelle objektive Refraktion
US9517008B1 (en) * 2014-11-06 2016-12-13 Bertec Corporation System and method for testing the vision of a subject
JP2020528304A (ja) * 2017-07-13 2020-09-24 クァンウン ユニバーシティー インダストリーアカデミック コラボレーション ファウンデーションKwangwoon University Industry−Academic Collaboration Foundation 動体視力検査方法およびシステム
US20200383568A1 (en) * 2018-03-01 2020-12-10 Jvckenwood Corporation Visual function detection apparatus, method of detecting visual function, and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9706910B1 (en) 2014-05-29 2017-07-18 Vivid Vision, Inc. Interactive system for vision assessment and correction
EP3592204A4 (fr) 2017-03-05 2020-12-23 Virtuoptica Ltd. Procédé d'examen de l'oeil et appareil associé
US11253149B2 (en) 2018-02-26 2022-02-22 Veyezer, Llc Holographic real space refractive sequence

Also Published As

Publication number Publication date
IL305329A (en) 2023-10-01
EP4304448A1 (fr) 2024-01-17
DE102021202451A1 (de) 2022-09-15

Similar Documents

Publication Publication Date Title
DE102015100147A1 (de) Verfahren und Vorrichtungen zur Augenanalyse
EP3958726B1 (fr) Détermination d'une erreur de réfraction d'un œil
EP3218762B1 (fr) Dispositif de correction optique pourvu d'une correction supplémentaire pour l'astigmatisme
EP2922460B1 (fr) Dispositif ainsi que procédé de contrôle de l'acuité visuelle humaine
EP3955800B1 (fr) Détermination d'une erreur de réfraction d'un œil
DE102013000295B4 (de) Vorrichtung und Verfahren zur Bestimmung eines Satzes ophthalmologischer Daten
EP3192431A2 (fr) Système d'examen optométrique et procédé d'examen des yeux
EP3542703B1 (fr) Dispositif et procédé de détection d'un champ visuel d'une personne comportant un scotome
EP3972478B1 (fr) Détermination commune du pouvoir d'accommodation et de la vergence
EP0598738B1 (fr) Installation de refraction automatique commandee sur la base de la methode subjective
EP4304447A1 (fr) Procédé, dispositif et produit programme d'ordinateur de détermination d'une sensibilité d'au moins un oeil d'un sujet de test
EP4304448A1 (fr) Procédé, système et produit-programme informatique destinés à déterminer des paramètres optométriques
WO2024056632A1 (fr) Procédé, utilisation d'optotypes adaptés et dispositif pour déterminer des caractéristiques d'acuité visuelle d'un sujet
DE102017115958A1 (de) System und Verfahren für das Erfassen von Kenngrößen eines Auges
EP3691515B1 (fr) Système et procédé de détermination de valeurs caractéristiques d'une amétropie d'un sujet
DE102014113679B3 (de) Vorrichtung zur Verbesserung des Sehvermögens
Predebon Convergence responses to monocularly viewed objects: implications for distance perception
WO2023111026A1 (fr) Procédé, dispositif et produit programme d'ordinateur de détermination d'une sensibilité d'au moins un œil d'un sujet de test
WO2021121944A1 (fr) Dispositif d'examen et procédé d'examen de l'oeil
Frisch Determination of stereoacuity thresholds and their inherent test retest reliabilities at various eccentricities with a monitor-based two-rod-test
DE4228663A1 (de) Automatische, subjektiv gesteuerte Refraktionseinrichtung

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22714166

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 305329

Country of ref document: IL

WWE Wipo information: entry into national phase

Ref document number: 2022714166

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022714166

Country of ref document: EP

Effective date: 20231012