CN114255204A - Amblyopia training method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114255204A
CN114255204A
Authority
CN
China
Prior art keywords
amblyopia
image
eye
training
determining
Prior art date
Legal status
Pending
Application number
CN202011019488.4A
Other languages
Chinese (zh)
Inventor
何庭波
李江
张朋
李瑞华
刘迎春
臧磊
郭帮辉
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011019488.4A priority Critical patent/CN114255204A/en
Priority to PCT/CN2021/095234 priority patent/WO2022062436A1/en
Publication of CN114255204A publication Critical patent/CN114255204A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F9/00Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00Exercisers for the eyes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2205/00Devices for specific parts of the body
    • A61H2205/02Head
    • A61H2205/022Face
    • A61H2205/024Eyes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Abstract

Embodiments of the application disclose an amblyopia training method, device, equipment, and storage medium, belonging to the technical field of image processing. The method comprises the following steps: determining the dominant eye and the amblyopic eye among a user's two eyes; determining the amblyopia type of the amblyopic eye, the amblyopia type comprising strabismus amblyopia or anisometropic amblyopia; when the amblyopia type of the amblyopic eye is determined to be strabismus amblyopia, performing homography transformation on the amblyopia training image before amblyopia training; and when the amblyopia type is determined to be anisometropic amblyopia, resizing the amblyopia training image before amblyopia training. In the embodiments of the application, the amblyopia type of the amblyopic eye is determined first, and the amblyopia training image is processed according to that type before training. That is, by distinguishing between amblyopia types, targeted training can be performed for different symptoms, improving the amblyopia training effect.

Description

Amblyopia training method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method, a device, equipment and a storage medium for amblyopia training.
Background
Amblyopia is an eye condition in which the eye has no organic lesion but the best corrected visual acuity is lower than normal. It is usually caused by monocular strabismus, anisometropia, high ametropia, visual deprivation, and other factors; if not treated in time, amblyopia may worsen and can even lead to blindness. Therefore, an amblyopia training method is needed to correct the vision of amblyopic eyes.
The related art provides an amblyopia training instrument that displays binocular visual-field content containing a moving target. The instrument tracks the gaze trajectories of the user's left and right eyes on the moving target, and determines the dominant eye and the amblyopic eye according to how closely each eye's gaze trajectory matches the target's actual motion trajectory. Then, for the amblyopic eye, the visual-field content containing the moving target is displayed repeatedly, and the closeness between the amblyopic eye's gaze trajectory and the target's actual motion trajectory is recorded.
However, for amblyopia patients with different causes or different severities, the training effect of this instrument is not ideal.
Disclosure of Invention
Embodiments of the application provide an amblyopia training method, device, equipment, and storage medium, which can solve the problem of unsatisfactory training effect in the related art. The technical solution is as follows:
In a first aspect, an amblyopia training method is provided, applied to an amblyopia training device. The method includes: determining the dominant eye and the amblyopic eye of a user; determining the amblyopia type of the amblyopic eye, the amblyopia type comprising strabismus amblyopia and anisometropic amblyopia; when the amblyopia type is determined to be strabismus amblyopia, performing homography transformation on the amblyopia training image for amblyopia training; and when the amblyopia type is determined to be anisometropic amblyopia, resizing the amblyopia training image for amblyopia training.
In the embodiments of the application, the amblyopia type of the amblyopic eye is determined, and the amblyopia training image is processed according to that type before training. That is, by distinguishing between amblyopia types, targeted training can be performed for different symptoms, improving the amblyopia training effect.
The amblyopia type of the amblyopic eye is determined as follows: a first test image is displayed in the display area corresponding to the dominant eye, and a second test image is displayed in the display area corresponding to the amblyopic eye. The coordinates of the dominant eye's gaze position in the displayed first test image and of the amblyopic eye's gaze position in the displayed second test image are then determined by eye tracking, yielding a first gaze position coordinate and a second gaze position coordinate. The amblyopia type of the amblyopic eye is determined from the first and second gaze position coordinates.
As an example, the coordinates of the dominant eye's gaze position in the displayed first test image are determined by eye tracking as follows: a position transformation matrix is determined, the eyeball position coordinates of the dominant eye are obtained by eye tracking, and these coordinates are multiplied by the position transformation matrix to obtain the dominant eye's gaze position coordinates in the first test image, i.e., the first gaze position coordinate. Similarly, for the amblyopic eye: the position transformation matrix is determined, the eyeball position coordinates of the amblyopic eye are obtained by eye tracking, and these coordinates are multiplied by the position transformation matrix to obtain the amblyopic eye's gaze position coordinates in the second test image, i.e., the second gaze position coordinate.
The position transformation matrix refers to a transformation matrix between the eyeball position and the position of the target point in the image.
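As a minimal sketch of the multiplication described above, assuming homogeneous 2D coordinates and a 3x3 position transformation matrix (the matrix form and the example calibration values are assumptions; the patent does not specify them):

```python
def gaze_position(eye_xy, T):
    """Map eyeball position coordinates to gaze position coordinates in
    the test image by multiplying with the 3x3 position transformation
    matrix T (homogeneous 2D coordinates are assumed)."""
    x, y = eye_xy
    v = (x, y, 1.0)
    gx, gy, w = (sum(T[r][c] * v[c] for c in range(3)) for r in range(3))
    return (gx / w, gy / w)

# Purely illustrative calibration: shift eyeball coordinates by (5, -2).
T = [[1.0, 0.0, 5.0],
     [0.0, 1.0, -2.0],
     [0.0, 0.0, 1.0]]
```

With this illustrative matrix, an eyeball position of (2, 3) maps to a gaze position of (7, 1).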
As an example, the amblyopia type of the amblyopic eye is determined from the first and second gaze position coordinates as follows: binocular deviation information is computed from the two coordinates, the binocular deviation information being the deviation between the line-of-sight direction of the amblyopic eye and that of the dominant eye. If the binocular deviation information is greater than or equal to a first threshold, the amblyopia type of the amblyopic eye is determined to be strabismus amblyopia; if it is less than the first threshold, the amblyopia type is determined to be anisometropic amblyopia.
Note that, normally, the binocular deviation information should be as close to 0 as possible: the larger it is, the more severe the strabismus, and the smaller it is, the milder the strabismus. Therefore, once it is determined that one of the user's eyes is amblyopic, it can be checked whether the binocular deviation information is greater than or equal to the first threshold. If so, the amblyopia type of the user's amblyopic eye is strabismus amblyopia; if not, it is anisometropic amblyopia.
That is, once an amblyopic eye is identified among the user's two eyes, the first threshold distinguishes whether its amblyopia type is strabismus amblyopia or anisometropic amblyopia.
The first threshold is any value within a reference distance range, which is the distance range used to distinguish strabismus amblyopia from anisometropic amblyopia. The first threshold is also related to the angular resolution of the amblyopia training device. For example, if the display screen of the amblyopia training device has an average angular resolution of 20, the first threshold is 100 pixels; if the average angular resolution is 30, the first threshold is 150 pixels.
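The two worked examples above (angular resolution 20 gives a threshold of 100 pixels, 30 gives 150 pixels) are consistent with a linear factor of 5 pixels per unit of angular resolution; the sketch below hardcodes that factor as an assumption inferred from those two data points:

```python
def first_threshold(angular_resolution, factor=5.0):
    """First threshold in pixels, scaled by the display's average
    angular resolution. The linear factor of 5 is an assumption
    inferred from the two examples above."""
    return angular_resolution * factor

def classify_amblyopia(deviation_px, angular_resolution):
    """Strabismus amblyopia if the binocular deviation reaches the
    first threshold, anisometropic amblyopia otherwise."""
    if deviation_px >= first_threshold(angular_resolution):
        return "strabismus amblyopia"
    return "anisometropic amblyopia"
```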
Note that, as described above, amblyopia is generally caused by monocular strabismus, anisometropia, high ametropia, visual deprivation, and the like. However, for high ametropia and visual deprivation, physiological damage to the eye has already occurred, so the vision of the amblyopic eye cannot be corrected by amblyopia training. The embodiments of the application therefore perform amblyopia training by distinguishing only strabismus amblyopia from anisometropic amblyopia, and do not address high-ametropia amblyopia or visual-deprivation amblyopia.
The dominant eye and the amblyopic eye are determined as follows: the diopters of the user's two eyes are measured; the eye with the lower diopter is determined to be the dominant eye, and the eye with the higher diopter the amblyopic eye.
Diopter generally refers to nearsightedness, farsightedness, or astigmatism; colloquially, diopter is also referred to as vision. Vision mainly refers to the eye's ability to form images on the retina of the fundus. As an example, an eye chart is displayed in each of the two display areas of the amblyopia training device, and while the user's eyes gaze at the respective charts, the diopters of both eyes are measured by adjusting the virtual image distance. The eye chart may be projected to the amblyopia training device by a terminal device, or stored on the amblyopia training device itself; the embodiments of the application do not limit this.
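The selection rule above can be sketched directly (the eye labels and the tie-handling toward the left eye are illustrative assumptions):

```python
def determine_eyes(left_diopter, right_diopter):
    """Return (dominant_eye, amblyopic_eye): the eye with the lower
    diopter is the dominant eye, and the eye with the higher diopter
    the amblyopic eye. Breaking ties toward the left eye is an
    assumption; the patent does not cover equal diopters."""
    if left_diopter <= right_diopter:
        return ("left", "right")
    return ("right", "left")
```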
In some embodiments, homography transformation of the amblyopia training image is performed as follows: the amblyopia training image is displayed in the display area corresponding to the dominant eye, and, after homography transformation, in the display area corresponding to the amblyopic eye, so that the images seen by the user's two eyes can fuse during amblyopia training. Similarly, for image resizing: the amblyopia training image is displayed in the display area corresponding to the dominant eye, and, after resizing, in the display area corresponding to the amblyopic eye, so that the images seen by the two eyes can fuse during amblyopia training.
In other embodiments, to improve the imaging ability of the amblyopic eye, an image contrast for the dominant eye and an image contrast for the amblyopic eye may also be determined so that the user's two eyes perceive the same contrast. The amblyopia training image is then displayed in the dominant eye's display area at the dominant eye's image contrast, while the image, after homography transformation according to the amblyopia type, is displayed in the amblyopic eye's display area at the amblyopic eye's image contrast, so that the images seen by the two eyes can fuse during training. Alternatively, the image is displayed in the dominant eye's display area at the dominant eye's contrast and, after resizing, in the amblyopic eye's display area at the amblyopic eye's contrast, again so that the images seen by the two eyes can fuse during training.
In both embodiments, the operations of processing the amblyopia training image according to the amblyopia type and performing amblyopia training are the same; the only difference is whether image contrasts for the two eyes must first be determined so that the training images are displayed at those contrasts during training. The amblyopia training process provided by the embodiments of the application is described below, taking the second embodiment as an example.
In some embodiments, determining the image contrast for the dominant eye and for the amblyopic eye comprises: displaying a third test image in the dominant eye's display area and a fourth test image in the amblyopic eye's display area, then decreasing the contrast of the displayed third test image while increasing the contrast of the displayed fourth test image. When a contrast determination instruction is detected, the decreased contrast of the third test image is taken as the dominant eye's image contrast and the increased contrast of the fourth test image as the amblyopic eye's image contrast. The contrast determination instruction is triggered when the user feeds back that the contrasts perceived by the two eyes from the displayed third and fourth test images are the same.
That is, one test image is displayed in each of the two display areas of the amblyopia training device, and the contrasts of the two test images are adjusted based on the user's subjective feedback until the contrast perceived by the two eyes is the same. By decreasing the contrast for the dominant eye and increasing it for the amblyopic eye, pictures with different contrasts are later displayed to the two eyes during training, strengthening the amblyopic eye while suppressing the imaging of the dominant eye, which is the purpose of amblyopia training.
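One adjustment step of the opposing contrast changes described above might look as follows (the step size and the [0, 1] contrast scale are illustrative assumptions; stepping repeats until the user's contrast determination instruction):

```python
def step_contrasts(dominant_contrast, amblyopic_contrast, step=0.05):
    """One adjustment step: decrease the third test image's contrast
    (dominant eye) and increase the fourth test image's contrast
    (amblyopic eye), clamping both to [0, 1]."""
    return (max(0.0, dominant_contrast - step),
            min(1.0, amblyopic_contrast + step))
```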
The above implementation determines the two image contrasts from the user's subjective feedback, which may deviate from the true values. Therefore, in other embodiments, a plurality of third test images containing a first moving target is displayed in the dominant eye's display area, and a plurality of fourth test images containing a second moving target in the amblyopic eye's display area. Before the decreased contrast of the third test image is taken as the dominant eye's image contrast and the increased contrast of the fourth test image as the amblyopic eye's, the method further includes: determining, by eye tracking, the dominant eye's gaze trajectory on the first moving target in the third test images and the amblyopic eye's gaze trajectory on the second moving target in the fourth test images, yielding a first and a second gaze trajectory; acquiring the actual motion trajectories of the first and second moving targets, yielding a first and a second actual motion trajectory; and, if the first gaze trajectory matches the first actual motion trajectory and the second gaze trajectory matches the second actual motion trajectory, taking the decreased contrast of the third test image as the dominant eye's image contrast and the increased contrast of the fourth test image as the amblyopic eye's image contrast.
That is, eye tracking can further verify whether the image contrasts determined from the user's subjective feedback are accurate, avoiding errors caused by subjective feedback.
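The patent does not specify how a gaze trajectory is judged to "match" the target's actual motion trajectory; a plausible sketch uses mean Euclidean distance against a pixel tolerance (both the metric and the tolerance are assumptions):

```python
def tracks_match(gaze_track, actual_track, tol=15.0):
    """Judge whether a recorded gaze trajectory matches the moving
    target's actual trajectory: mean Euclidean distance over paired
    samples must fall below a pixel tolerance."""
    assert len(gaze_track) == len(actual_track)
    total = 0.0
    for (gx, gy), (ax, ay) in zip(gaze_track, actual_track):
        total += ((gx - ax) ** 2 + (gy - ay) ** 2) ** 0.5
    return total / len(gaze_track) < tol
```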
As described above, the amblyopia type includes strabismus amblyopia and anisometropic amblyopia, and the amblyopia training image is processed differently for each type; the two cases are described separately below.
The amblyopia type of the amblyopic eye is strabismus amblyopia
In this case, the amblyopia training image contains a target object used for amblyopia training. The amblyopia training image is homography-transformed according to the binocular deviation information. The first training image (the untransformed image) is displayed in the dominant eye's display area at the dominant eye's image contrast, and the second training image (the transformed image) in the amblyopic eye's display area at the amblyopic eye's image contrast. The gaze positions of the dominant eye in the first training image and of the amblyopic eye in the second training image are determined by eye tracking, yielding a first and a second gaze position. If the first gaze position coincides with the actual position of the target object in the first training image and the second gaze position coincides with its actual position in the second training image, the homography transformation amount is reduced, and the process returns to displaying the untransformed image as the first training image and the transformed image as the second training image, until the amblyopia training ends or the second gaze position coincides with the target object's actual position with no homography transformation applied.
A homography transformation maps one image to another and typically involves translation and/or rotation. Since the eye is usually not sensitive to rotation, homography transformation in the embodiments of the application mainly refers to translation. In addition, based on the binocular deviation calculation above, the binocular deviation information can be split into a horizontal component and a vertical component, so the homography transformation of the amblyopia training image is performed as follows: the horizontal component of the binocular deviation information is taken as the horizontal translation amount and the vertical component as the vertical translation amount, and the amblyopia training image is then translated by these amounts, realizing the homography transformation.
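A translation-only homography of this kind can be written as a 3x3 matrix whose last column carries the two deviation components (a minimal sketch following the standard homogeneous-coordinates convention; the per-pixel mapping is shown for a single point):

```python
def translation_homography(dx, dy):
    """3x3 homography that translates an image by the horizontal (dx)
    and vertical (dy) components of the binocular deviation."""
    return [[1.0, 0.0, dx],
            [0.0, 1.0, dy],
            [0.0, 0.0, 1.0]]

def apply_homography(H, x, y):
    """Map a pixel (x, y) through H using homogeneous coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```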
For strabismus amblyopia, because the line-of-sight directions of the amblyopic eye and the dominant eye deviate from each other, the amblyopia training image must be homography-transformed to adjust the position of the target object in it. The unadjusted image is then displayed in the dominant eye's display area and the adjusted image in the amblyopic eye's display area, so that during training the images seen by the two eyes can fuse, i.e., both eyes see the same picture. Moreover, because both eyes gaze at the training images in their respective display areas simultaneously, binocular stereopsis is preserved.
In addition, as described above, the image contrasts for the dominant and amblyopic eyes may differ, so the first training image is displayed at the dominant eye's contrast and the second training image at the amblyopic eye's contrast. This improves the imaging ability of the amblyopic eye while maintaining user comfort, making the training method and process easier to accept, improving the training effect, and increasing user retention.
The amblyopia type of the amblyopic eye is anisometropic amblyopia
In this case, the amblyopia training image contains a target object used for amblyopia training. The position and/or size of the target object in the amblyopia training image is adjusted multiple times to obtain a plurality of training images, the image scaling ratio of the amblyopic eye relative to the dominant eye is determined, and the plurality of training images is scaled by that ratio. The unscaled training images serve as a plurality of third training images and the scaled ones as a plurality of fourth training images. The third training images are displayed in sequence in the dominant eye's display area at the dominant eye's image contrast, and the fourth training images in the amblyopic eye's display area at the amblyopic eye's image contrast, with the same display order and switching frequency for both sets.
For anisometropic amblyopia, because the diopter of the amblyopic eye is higher than that of the dominant eye, the image seen by the amblyopic eye may differ in size from that seen by the dominant eye. Therefore, after the plurality of training images is obtained by adjusting the position and/or size of the target object, the image scaling ratio of the amblyopic eye relative to the dominant eye must be determined and the training images scaled accordingly, so that the images seen by the dominant and amblyopic eyes can fuse, i.e., both eyes see the same picture.
The image scaling ratio of the amblyopic eye relative to the dominant eye is determined as follows: a fifth test image containing a first test target is displayed in the dominant eye's display area, and a sixth test image containing a second test target, initially at the same scale as the first, in the amblyopic eye's display area. The sixth test image is then scaled. When a ratio determination instruction is detected, the ratio between the first test target and the scaled second test target is determined as the image scaling ratio. The ratio determination instruction is triggered when the user feeds back that the two eyes see the first and second test targets at the same scale.
That is, the first test target in the fifth test image and the second test target in the sixth test image initially have the same scale. While the dominant eye gazes at the fifth test image and the amblyopic eye at the sixth test image, the image scaling ratio is determined from the size of the first test target and the size of the scaled second test target at the moment the user feeds back that both eyes see the two targets at the same scale.
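The ratio computation and its use to scale training-image dimensions might be sketched as follows (interpreting the ratio as the scaled second target's size over the first target's size is an assumption, as is applying it as a direct multiplier on image dimensions):

```python
def image_scaling_ratio(first_target_size, scaled_second_target_size):
    """Ratio between the scaled second test target and the first test
    target at the moment the user reports both look the same size."""
    return scaled_second_target_size / first_target_size

def scale_image_size(width, height, ratio):
    """Scale a training image's dimensions by the image scaling ratio,
    rounding to whole pixels."""
    return (round(width * ratio), round(height * ratio))
```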
After amblyopia training is performed for the different amblyopia types, the training effect can be fed back so that the user can adjust the training plan accordingly. Two ways of feeding back the amblyopia training effect are described next.
In the first way, the gaze position of the amblyopic eye in the image displayed in its display area is determined by eye tracking. If this gaze position coincides with the actual position of the target object in that image, the display mode of the target object is changed to a reference display mode to indicate the amblyopia training effect.
The reference display mode may be a highlight mode, a color-change mode, or the like; the embodiments of the application do not limit this.
In the second mode, the gaze position of the amblyopia eye in the image displayed in the display area corresponding to the amblyopia eye is determined in an eye movement tracking mode, so that the gaze position of the amblyopia eye is obtained. And drawing an amblyopia training curve according to the amblyopia eye gaze position and the actual position of the target object so as to indicate the amblyopia training effect.
As an example, the total number of training runs in the current amblyopia training period is counted, together with the percentage of those runs in which the gaze position of the amblyopic eye coincided with the actual position of the target object. An amblyopia training curve is then drawn by combining this data with that of a plurality of historical amblyopia training periods: the training periods form the horizontal axis and the percentage determined in each period forms the vertical axis.
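The per-period percentage described above can be sketched as follows (the record format is an assumption; plotting the curve itself is omitted):

```python
def coincidence_percentages(period_records):
    """One y-value per amblyopia training period: the percentage of training
    runs whose amblyopic-eye gaze position coincided with the target's actual
    position. `period_records` is a hypothetical list of
    (coincidence_count, total_run_count) pairs, one per period."""
    return [100.0 * hits / total for hits, total in period_records]

# The training curve then plots the period index (x) against the percentage (y).
history = [(12, 40), (18, 40), (26, 40), (33, 40)]
curve = coincidence_percentages(history)
```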
The two feedback modes may be used alone or in combination, which is not limited in the embodiments of the present application. Of course, in practical applications the amblyopia training effect can also be fed back in other ways. For example, after the amblyopia training in each training period, the diopter of the amblyopic eye may be detected, and the amblyopia training curve drawn with the training periods as the horizontal axis and the diopter of the amblyopic eye as the vertical axis.
In a second aspect, an amblyopia training device is provided, which has the function of implementing the behavior of the amblyopia training method in the first aspect. The amblyopia training device comprises at least one module for implementing the amblyopia training method provided by the first aspect.
In a third aspect, an amblyopia training device is provided, comprising a processor and a memory, wherein the memory is used for storing a program for executing the amblyopia training method provided by the first aspect, as well as data used for implementing that method. The processor is configured to execute the program stored in the memory. The amblyopia training device may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to perform the amblyopia training method of the first aspect described above.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the amblyopia training method of the first aspect described above.
The technical effects obtained by the above second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
The technical scheme provided by the embodiment of the application can at least bring the following beneficial effects:
in the embodiment of the application, the amblyopia training image is processed in different modes according to the amblyopia type of the amblyopia eye by determining the amblyopia type of the amblyopia eye, and then the amblyopia training is carried out. That is, by differentiating the type of amblyopia, it is possible to perform targeted training according to different symptoms, and improve the amblyopia training effect.
Drawings
FIG. 1 is a block diagram of an amblyopia training system according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for amblyopia training provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a plurality of determined binocular disparity information according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of training squint amblyopia according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an embodiment of the present application for restoring strabismus amblyopia to emmetropia;
FIG. 6 is a diagram illustrating an adjustment of an amblyopia training image according to a first method according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a variation of an amblyopia training image displayed in a first manner according to an embodiment of the present application;
FIG. 8 is a diagram illustrating an adjustment of an amblyopia training image according to a second method provided by an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a variation of an amblyopia training image displayed in a second manner according to an embodiment of the present application;
FIG. 10 is a diagram illustrating an adjustment of an amblyopia training image according to a third method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a variation of an amblyopia training image displayed in a third manner according to an embodiment of the present application;
FIG. 12 is a diagram illustrating an adjustment of a amblyopia training image according to a fourth method according to an embodiment of the present application;
FIG. 13 is a schematic diagram illustrating a variation of an amblyopia training image displayed in a fourth manner according to an embodiment of the present application;
FIG. 14 is a schematic diagram of determining image scaling for displaying a test image in both eyes according to an embodiment of the present disclosure;
fig. 15 is a schematic diagram of binocular imaging provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of a first example of an amblyopia training effect provided by an embodiment of the present application;
FIG. 17 is a diagram illustrating a second example of an amblyopia training effect provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of a third example of an amblyopia training effect provided by the present application;
FIG. 19 is a schematic structural diagram of an amblyopia training device provided by an embodiment of the present application;
FIG. 20 is a schematic structural diagram of an amblyopia training device provided in an embodiment of the present application;
fig. 21 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of another terminal provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Before explaining the amblyopia training method provided by the embodiments of the present application in detail, terms and implementation environments related to the embodiments of the present application will be described.
First, terms related to embodiments of the present application will be described.
Amblyopia: an eye condition in which the best corrected vision is below normal despite the absence of organic lesions. It is usually caused by monocular strabismus, anisometropia, high refractive error, visual deprivation, and the like.
Monocular strabismus: it is a phenomenon that central control is disordered, the strength of extraocular muscles is unbalanced, two eyes cannot watch a target simultaneously, visual axes are in a separated state, one eye watches the target, and the other eye deviates from the target. It is also understood that, due to the abnormal binocular interaction caused by the deviation of the eye position, the different objects received by the fovea of the macula of the oblique eye (confusing vision) are suppressed, resulting in a phenomenon that the best corrected vision of the oblique eye is lower than the normal vision.
Anisometropia: a difference in diopter between the two eyes, where the eye with the higher diopter develops amblyopia. Typically, anisometropic amblyopia is monocular amblyopia.
High refractive error: when the eye exercises no accommodation, parallel rays cannot form a clear image on the retina after being refracted by the eye; the image forms in front of or behind the retina instead. Mainly high hyperopia or high astigmatism, which blurs the image in both eyes and causes form deprivation. Typically, high refractive error causes binocular amblyopia.
Visual deprivation: a phenomenon in which the transmission of visual information is blocked without damage to the tissue structure of the eye. It can also be understood as reduced vision caused, during the critical period of visual development, by opacity of the refractive media (corneal opacity, cataract, vitreous opacity, or hyphema) or by complete ptosis.
Dominant eye: the eye with the lower diopter; a relative concept with respect to the amblyopic eye. That is, of the user's two eyes, the one with the relatively lower diopter is the dominant eye.
Amblyopic eye: the eye with the higher diopter; a relative concept with respect to the dominant eye. That is, of the user's two eyes, the one with the relatively higher diopter is the amblyopic eye.
Eye movement tracking: the method is to track the movement of the eyeball by measuring the gaze position of the eye or the movement of the eyeball relative to the head, and the most common means is to acquire the position of the eye through a video shooting device.
Next, an implementation environment related to the embodiments of the present application will be described.
Referring to fig. 1, fig. 1 is a schematic diagram of an amblyopia training system according to an embodiment of the present application. The system comprises a terminal device 101 and an amblyopia training device 102, wherein the terminal device 101 and the amblyopia training device 102 are communicated in a wireless or wired mode.
The terminal device 101 is configured to display the amblyopia training image and project it onto the amblyopia training device 102. The amblyopia training device 102 includes two display areas corresponding to the user's two eyes: one corresponding to the dominant eye and one corresponding to the amblyopic eye. The two display areas display the amblyopia training images projected by the terminal device 101. The amblyopia training device 102 is also used to determine the amblyopia type of the user's amblyopic eye and to process the amblyopia training image according to that type, so that amblyopia training is performed through the images displayed in the two display areas to correct the vision of the amblyopic eye.
In some embodiments, amblyopia training device 102 includes a display module, an eye tracking module, and an image processing module. The display module is used for displaying the amblyopia training image projected by the terminal equipment 101. The eye tracking module is to determine a type of amblyopia of a user's amblyopic eye. The image processing module is used for processing the amblyopia training image according to the amblyopia type.
Optionally, the eye tracking module is further configured to determine a gaze location of the user's eyes, thereby determining the amblyopia training effect.
It should be noted that the terminal device 101 is any electronic product that can perform human-computer interaction with a user through one or more modes such as a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, for example a personal computer (PC), a mobile phone, a smartphone, a personal digital assistant (PDA), a wearable device, a pocket PC, a tablet computer, a smart car, a smart TV, or a smart speaker. The amblyopia training device 102 is any head-mounted device having two display areas, for example Virtual Reality (VR) glasses, a VR helmet, viewing glasses, Augmented Reality (AR) glasses, an AR helmet, Mixed Reality (MR) glasses, or an MR helmet. The two display areas of the amblyopia training device 102 are either two display screens of the device or different display areas within the same display screen.
It should be noted that, in the above system architecture, the terminal device 101 is configured to project the amblyopia training image into the amblyopia training device 102, so that the amblyopia training device 102 performs amblyopia training on the amblyopia of the user. In other embodiments, the system architecture does not include the terminal device 101, that is, the amblyopia training device 102 can store the amblyopia training images and directly display the stored amblyopia training images during the amblyopia training process.
Next, the amblyopia training method provided in the embodiments of the present application will be explained in detail.
Referring to fig. 2, fig. 2 is a flowchart of a amblyopia training method provided in an embodiment of the present application, and the method is applied to an amblyopia training apparatus, where the amblyopia training apparatus includes two display areas, and the two display areas correspond to two eyes of a user. The method comprises the following steps.
Step 201: dominant and amblyopic eyes of the user are determined.
In some embodiments, the dominant eye and the amblyopic eye of the user are determined as follows: the diopters of the user's two eyes are detected respectively; the eye with the lower diopter is determined to be the dominant eye, and the eye with the higher diopter is determined to be the amblyopic eye.
Diopter generally covers near vision, distance vision, or astigmatism; colloquially, it is also referred to as vision. Vision mainly refers to the imaging ability of the retina at the fundus of the eye. As an example, an eye chart is displayed in each of the two display areas of the amblyopia training device, and while the user's eyes each gaze at the eye chart, the diopters of the two eyes are detected by adjusting the virtual image distance. The eye chart may be projected onto the amblyopia training device by the terminal device, or stored by the amblyopia training device itself, which is not limited in the embodiments of the present application.
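The comparison in step 201 can be sketched as follows (the labels and the signature are illustrative, not from the patent):

```python
def determine_eyes(left_diopter, right_diopter):
    """Sketch of step 201: the eye with the lower diopter is taken as the
    dominant eye and the eye with the higher diopter as the amblyopic eye.
    Returns (dominant, amblyopic) labels, or None when the diopters are
    equal and this test alone cannot distinguish the eyes."""
    if left_diopter == right_diopter:
        return None
    if left_diopter < right_diopter:
        return ("left", "right")   # left eye dominant, right eye amblyopic
    return ("right", "left")       # right eye dominant, left eye amblyopic
```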
Step 202: and determining the amblyopia type of the amblyopia eye, wherein the amblyopia type comprises strabismus amblyopia or anisometropic amblyopia.
In some embodiments, the amblyopia type of the amblyopic eye is determined as follows: a first test image is displayed in the display area corresponding to the dominant eye, and a second test image is displayed in the display area corresponding to the amblyopic eye. The coordinates of the gaze position of the dominant eye in the first test image and of the gaze position of the amblyopic eye in the second test image are then determined by eye movement tracking, yielding a first gaze position coordinate and a second gaze position coordinate. The amblyopia type of the amblyopic eye is determined from these two coordinates.
The first test image and the second test image may be the same or different. The first test image and the second test image may be projected onto the amblyopia training device for the terminal device, and of course, the first test image and the second test image may also be stored by the amblyopia training device itself, which is not limited in this application.
As an example, the coordinates of the gaze position of the dominant eye in the displayed first test image are determined by eye movement tracking as follows: a position transformation matrix is determined, the eyeball position coordinates of the dominant eye are determined by eye movement tracking, and those coordinates are multiplied by the position transformation matrix to obtain the coordinates of the gaze position of the dominant eye in the first test image, i.e., the first gaze position coordinate. Similarly, for the amblyopic eye, the eyeball position coordinates of the amblyopic eye are determined by eye movement tracking and multiplied by the position transformation matrix to obtain the coordinates of the gaze position of the amblyopic eye in the second test image, i.e., the second gaze position coordinate.
The position transformation matrix refers to a transformation matrix between the eyeball position and the position of the target point in the image. The determination method of the position transformation matrix may be: the amblyopia training equipment sequentially displays a plurality of target points, and then determines eyeball position coordinates of a user in an eye movement tracking mode when the eyes of the user watch the target points to obtain a plurality of corresponding eyeball position coordinates. Next, a position conversion matrix is determined by the following formula.
P1=T*P2
In the above formula, P1 is a matrix formed by the coordinates of the plurality of target points, T is the position transformation matrix, and P2 is a matrix formed by the eyeball position coordinates determined by eye movement tracking while the user's eyes gaze at the target points.
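Given several calibration pairs, T can be estimated by least squares. The sketch below assumes an affine 2D model (each eyeball coordinate is augmented with a constant 1, so T is 2×3) and uses NumPy; the function names are illustrative:

```python
import numpy as np

def fit_position_transform(targets, eye_positions):
    """Least-squares estimate of T in P1 = T * P2 (a sketch of the
    calibration step described above, under an affine-model assumption).

    targets: (N, 2) on-screen calibration point coordinates (P1).
    eye_positions: (N, 2) tracked eyeball coordinates (P2)."""
    P1 = np.asarray(targets, dtype=float).T                      # 2 x N
    P2 = np.vstack([np.asarray(eye_positions, dtype=float).T,
                    np.ones(len(eye_positions))])                # 3 x N
    # Minimise ||P1 - T @ P2|| over T (2 x 3): solve P2.T @ T.T = P1.T
    T_t, *_ = np.linalg.lstsq(P2.T, P1.T, rcond=None)
    return T_t.T

def gaze_coordinate(T, eye_xy):
    """Map one tracked eyeball coordinate to a gaze coordinate in the image."""
    return T @ np.array([eye_xy[0], eye_xy[1], 1.0])
```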
As an example, the amblyopia type of the amblyopic eye is determined from the first and second gaze position coordinates as follows: binocular deviation information, i.e., the deviation between the line-of-sight direction of the amblyopic eye and that of the dominant eye, is determined from the two coordinates. If the binocular deviation information is greater than or equal to a first threshold, the amblyopia type of the amblyopic eye is determined to be strabismus amblyopia; if it is less than the first threshold, the amblyopia type is determined to be anisometropic amblyopia.
The binocular disparity information is also referred to as a distance between binocular fixation positions, so, in some embodiments, the amblyopia training device determines binocular disparity information according to the first fixation position coordinate and the second fixation position coordinate by the following formula.
ED = √((Lx − Rx)² + (Ly − Ry)²)

where ED is the binocular deviation information, (Lx, Ly) is the first gaze position coordinate, and (Rx, Ry) is the second gaze position coordinate.
Note that, normally, the binocular deviation information should be as close to 0 as possible: the larger it is, the more severe the strabismus, and the smaller it is, the milder the strabismus. Therefore, when it has been determined that one of the user's eyes is amblyopic, it can be checked whether the binocular deviation information is greater than or equal to a first threshold. If so, the amblyopia type of the user's amblyopic eye is strabismus amblyopia; if it is less than the first threshold, the amblyopia type is anisometropic amblyopia.
That is, in the case where it is determined that an amblyopic eye exists in both eyes of the user, it is possible to determine whether the type of amblyopia of the amblyopic eye is strabismus amblyopia or anisometropic amblyopia by the first threshold.
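The distance computation and the first-threshold comparison can be sketched as follows (the 100-pixel default mirrors the example threshold the text gives for a display with an average angular resolution of 20; treat it as illustrative):

```python
import math

def binocular_deviation(first_gaze, second_gaze):
    """ED = sqrt((Lx - Rx)^2 + (Ly - Ry)^2): the Euclidean distance between
    the dominant-eye and amblyopic-eye gaze position coordinates."""
    (lx, ly), (rx, ry) = first_gaze, second_gaze
    return math.hypot(lx - rx, ly - ry)

def classify_amblyopia(ed, first_threshold=100.0):
    """ED >= threshold indicates strabismus amblyopia; otherwise the type
    is taken to be anisometropic amblyopia."""
    if ed >= first_threshold:
        return "strabismus amblyopia"
    return "anisometropic amblyopia"
```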
The first threshold is any value within a reference distance range, the reference distance range being the distance range used to distinguish strabismus amblyopia from anisometropic amblyopia. Furthermore, the first threshold is related to the angular resolution of the amblyopia training device. For example, when the display screen of the amblyopia training device has an average angular resolution of 20, the first threshold is 100 pixels; when the average angular resolution is 30, the first threshold is 150 pixels.
It should be noted that, as described above, amblyopia is generally caused by monocular strabismus, anisometropia, high refractive error, visual deprivation, and the like. For high refractive error and visual deprivation, however, the eye has already suffered physiological damage, so the vision of the amblyopic eye cannot be corrected by amblyopia training. Therefore, the embodiments of the present application perform amblyopia training only for the two types of strabismus amblyopia and anisometropic amblyopia, and do not involve high-refractive-error amblyopia or deprivation amblyopia.
The above implementation determines the binocular deviation information with a single eye-tracking test and thereby determines the amblyopia type of the amblyopic eye. In other embodiments, the binocular deviation information can be determined through multiple eye-tracking tests. That is: a first test image is displayed in the display area corresponding to the dominant eye, and a second test image in the display area corresponding to the amblyopic eye; the coordinates of a plurality of gaze positions of the dominant eye in the first test image and of the amblyopic eye in the second test image are determined by eye movement tracking, yielding a plurality of first gaze position coordinates and a plurality of second gaze position coordinates; and the amblyopia type of the amblyopic eye is determined from these coordinates.
Determining the amblyopia type from the plurality of first and second gaze position coordinates is implemented as follows: for each pair of first and second gaze position coordinates determined at the same moment, binocular deviation information (the deviation between the line-of-sight direction of the amblyopic eye and that of the dominant eye) is computed, yielding a plurality of binocular deviation values. A statistical value of these deviations is then determined: if it is greater than or equal to the first threshold, the amblyopia type of the amblyopic eye is strabismus amblyopia; if it is less than the first threshold, the amblyopia type is anisometropic amblyopia.
For example, suppose that the amblyopia training device displays a first test image in a display area corresponding to the dominant eye, and after a second test image is displayed in a display area corresponding to the amblyopic eye, coordinates of 7 gaze positions of the dominant eye in the first test image and coordinates of 7 gaze positions of the amblyopic eye in the second test image are determined in an eye movement tracking manner, so as to obtain 7 first gaze position coordinates and 7 second gaze position coordinates. Fig. 3 shows 7 pieces of binocular disparity information determined based on the 7 first gaze location coordinates and the 7 second gaze location coordinates. By determining the statistical value of the 7 binocular disparity information, it is possible to determine the amblyopia type of the amblyopic eye.
It should be noted that the statistical value of the binocular deviation information refers to an average value, a median value, and the like of the binocular deviation information, which is not limited in the embodiment of the present application.
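The multi-test variant can be sketched as follows (median by default; as noted above, the mean is equally permissible):

```python
import statistics

def classify_amblyopia_multi(deviations, first_threshold=100.0,
                             stat=statistics.median):
    """Aggregate several binocular deviation measurements with a statistic
    and compare the result against the first threshold. Function and
    parameter names are illustrative."""
    value = stat(deviations)
    if value >= first_threshold:
        return "strabismus amblyopia"
    return "anisometropic amblyopia"
```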
Step 203: and when the amblyopia type of the amblyopia eye is determined to be anisometropic amblyopia, performing image size adjustment processing on the amblyopia training image to perform amblyopia training.
In some embodiments, the homography transformation processing of the amblyopia training image is implemented as follows: the amblyopia training image is displayed in the display area corresponding to the dominant eye, while a homography-transformed copy of the amblyopia training image is displayed in the display area corresponding to the amblyopic eye, so that the images seen by the user's two eyes can be combined for amblyopia training. Similarly, amblyopia training through image size adjustment is implemented as follows: the amblyopia training image is displayed in the display area corresponding to the dominant eye, while a size-adjusted copy is displayed in the display area corresponding to the amblyopic eye, so that the images seen by the two eyes can be combined for amblyopia training.
In other embodiments, in order to improve the imaging ability of the amblyopic eye, the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopic eye may also be determined so that the perception ability of the user's two eyes is the same. In this case, the amblyopia training image is displayed in the display area corresponding to the dominant eye at the contrast determined for the dominant eye, while the homography-transformed (or, alternatively, size-adjusted) amblyopia training image is displayed in the display area corresponding to the amblyopic eye at the contrast determined for the amblyopic eye, so that the images seen by the user's two eyes can be combined for amblyopia training.
For the two embodiments, the operation of processing the amblyopia training image according to the amblyopia type of the amblyopia eye and the operation of performing the amblyopia training are the same, and the difference is only whether the image contrast corresponding to the two eyes needs to be determined, so that the amblyopia training image is displayed according to the image contrast corresponding to the two eyes in the process of the amblyopia training. Next, taking the second embodiment as an example, the amblyopia training process provided by the embodiments of the present application is described by the following steps (1) - (2).
(1) And determining the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopia eye so as to enable the two eyes of the user to have the same perception capability.
In some embodiments, the amblyopia training device displays a third test image in the display area corresponding to the dominant eye and a fourth test image in the display area corresponding to the amblyopic eye. Then, the contrast of the displayed third test image is decreased and the contrast of the displayed fourth test image is increased. When a contrast determination instruction is detected, the decreased contrast of the third test image is taken as the image contrast corresponding to the dominant eye, and the increased contrast of the fourth test image as the image contrast corresponding to the amblyopic eye. The contrast determination instruction is triggered when the user feeds back, based on the displayed third and fourth test images, that the contrast perceived by the two eyes is the same.
That is, one test image is displayed in each of the two display areas of the amblyopia training device, and the contrast of the two test images is adjusted according to the user's subjective feedback until the contrast perceived by the two eyes is the same. In the embodiments of the present application, this is achieved by decreasing the contrast of the third test image while increasing the contrast of the fourth test image, i.e., the contrast for the dominant eye is reduced and the contrast for the amblyopic eye is increased. In the subsequent amblyopia training, pictures with these different contrasts are displayed to the dominant eye and the amblyopic eye respectively, so that the amblyopic eye is strengthened and the imaging ability of the dominant eye is suppressed, achieving the purpose of amblyopia training.
Decreasing the contrast of the displayed third test image and increasing the contrast of the displayed fourth test image is implemented as follows: decrease the contrast of the third test image by the adjustment step, increase the contrast of the fourth test image by the adjustment step, and display the adjusted third and fourth test images. When a contrast adjustment instruction is detected, continue decreasing the contrast of the third test image and increasing the contrast of the fourth test image by the adjustment step, displaying the adjusted images each time, until a contrast determination instruction is detected. The contrast adjustment instruction is triggered when the user feeds back, based on the displayed third and fourth test images, that the contrast perceived by the two eyes is still different.
It should be noted that, while the contrast of the third test image is being adjusted by the adjustment step, the display area corresponding to the dominant eye may be occluded and the image shown again only after adjustment, at which point the user judges whether the contrast perceived by the two eyes is the same; alternatively, that display area may remain visible during adjustment. The same applies to the display area corresponding to the amblyopic eye while the contrast of the fourth test image is being adjusted. This is not limited in the embodiments of the present application.
In addition, the contrasts of the third and fourth test images may be adjusted alternately. For example, the contrast of the third test image is first decreased by the adjustment step size and the adjusted images are displayed; if the user feeds back that the contrasts perceived by the two eyes are still different, the contrast of the fourth test image is then increased by the adjustment step size and the adjusted images are displayed; if the contrasts are still reported as different, the contrast of the third test image is decreased again, and so on, until the user feeds back that the contrasts perceived by the two eyes are the same.
Of course, decreasing the contrast of the third test image first and then increasing the contrast of the fourth test image is only an example; the contrast of the fourth test image may equally be increased first and the contrast of the third test image decreased afterwards. The order in which the third and fourth test images are adjusted is not limited in the embodiments of the present application.
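The alternating adjustment described above can be sketched as a simple loop. This is an illustrative sketch rather than the patent's implementation: `eyes_match` stands in for the user's subjective feedback (the contrast determination instruction), and contrast values are assumed to be normalized to [0, 1].

```python
def equalize_contrast(c3, c4, step, eyes_match):
    """Alternately lower the dominant-eye contrast (c3) and raise the
    amblyopic-eye contrast (c4) by the adjustment step size until the
    user reports that both eyes perceive the same contrast."""
    while not eyes_match(c3, c4):
        c3 = max(0.0, c3 - step)   # decrease third test image contrast
        if eyes_match(c3, c4):
            break
        c4 = min(1.0, c4 + step)   # increase fourth test image contrast
    return c3, c4
```

In a real device, `eyes_match` would block on user input rather than compare numbers.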
The third test image and the fourth test image may be the same or different. They may be projected by the terminal device onto the amblyopia training device, or they may be stored on the amblyopia training device itself, which is not limited in this application.
When the third and fourth test images are stored on the amblyopia training device, the contrast of the test images can be adjusted manually on the amblyopia training device according to the adjustment step size. When the third and fourth test images are projected by the terminal device onto the amblyopia training device, the contrast can be adjusted manually either on the amblyopia training device or on the terminal device, in both cases according to the adjustment step size.
The adjustment step size may be the same or different for each contrast adjustment. It may be preset, or it may be calculated from the contrast difference between the third test image and the fourth test image.
As an example, the contrast difference between the third test image and the fourth test image is determined, and this difference is divided by a preset number of adjustments to obtain the adjustment step size. In this manner, after the preset number of adjustments to the third and fourth test images, the contrasts seen by the user's two eyes are the same.
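This division can be sketched in one line; the function name and the normalized contrast values are illustrative assumptions:

```python
def adjustment_step(contrast_third, contrast_fourth, preset_adjustments):
    """Adjustment step size = contrast difference / preset number of
    adjustments, so equality is reached in exactly that many steps."""
    return abs(contrast_third - contrast_fourth) / preset_adjustments
```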
As another example, the contrast difference between the third test image and the fourth test image is determined, and the corresponding adjustment step size is looked up in a stored correspondence between contrast differences and adjustment step sizes. This correspondence may be determined empirically in advance.
The above implementation determines the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopic eye from the user's subjective feedback, which may be inaccurate. Therefore, in other embodiments, a plurality of third test images containing a first moving target are displayed in the display area corresponding to the dominant eye, and a plurality of fourth test images containing a second moving target are displayed in the display area corresponding to the amblyopic eye. In this case, before the decreased contrast of the third test image is taken as the image contrast corresponding to the dominant eye and the increased contrast of the fourth test image is taken as the image contrast corresponding to the amblyopic eye, the method further includes: determining, by eye tracking, the dominant eye's gaze track on the first moving target in the third test images and the amblyopic eye's gaze track on the second moving target in the fourth test images, to obtain a first gaze track and a second gaze track; obtaining the actual motion tracks of the first and second moving targets, to obtain a first actual motion track and a second actual motion track; and, if the first gaze track matches the first actual motion track and the second gaze track matches the second actual motion track, taking the decreased contrast of the third test image as the image contrast corresponding to the dominant eye and the increased contrast of the fourth test image as the image contrast corresponding to the amblyopic eye.
That is, eye tracking can be used to verify whether the image contrasts determined from the user's subjective feedback are accurate, thereby avoiding errors caused by subjective feedback.
As an example, the first gaze track matches the first actual motion track when their degree of coincidence is greater than a certain threshold; similarly, the second gaze track matches the second actual motion track when their degree of coincidence is greater than a certain threshold.
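One way to operationalize "degree of coincidence" is the fraction of gaze samples that land close to the corresponding actual target positions. The distance and ratio thresholds below are illustrative assumptions, not values from the patent:

```python
import math

def track_match(gaze_track, actual_track, dist_thresh=0.05, ratio_thresh=0.8):
    """A gaze track 'matches' the actual motion track when at least
    ratio_thresh of gaze samples fall within dist_thresh (in normalized
    image coordinates) of the time-aligned actual positions."""
    hits = sum(
        1 for (gx, gy), (ax, ay) in zip(gaze_track, actual_track)
        if math.hypot(gx - ax, gy - ay) <= dist_thresh
    )
    return hits / len(gaze_track) >= ratio_thresh
```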
The first moving target and the second moving target may be the same or different. In addition, the first and second gaze tracks are determined in a manner similar to the determination of the two gaze position coordinates when measuring the binocular deviation described above, which is not repeated in the embodiments of the present application.
Further, if the first gaze track does not match the first actual motion track, the contrast of the third test image needs to be readjusted; during the adjustment, the first gaze track is redetermined by eye tracking, and the adjustment continues until the redetermined first gaze track matches the first actual motion track, yielding the image contrast corresponding to the dominant eye. Similarly, if the second gaze track does not match the second actual motion track, the contrast of the fourth test image needs to be readjusted; during the adjustment, the second gaze track is redetermined by eye tracking, and the adjustment continues until the redetermined second gaze track matches the second actual motion track, yielding the image contrast corresponding to the amblyopic eye.
(2) The amblyopia training image is displayed in the display area corresponding to the dominant eye at the image contrast corresponding to the dominant eye; and, after the amblyopia training image has been subjected to homography transformation or image size adjustment according to the amblyopia type of the amblyopic eye, it is displayed in the display area corresponding to the amblyopic eye at the image contrast corresponding to the amblyopic eye, so that the images seen by the user's two eyes can fuse for amblyopia training.
Based on the above description, the amblyopia types include strabismus amblyopia and anisometropic amblyopia, and the processing of the amblyopia training image differs for different amblyopia types; these are described separately below.
The amblyopia type of the amblyopic eye is strabismus amblyopia
In this case, the amblyopia training image contains a target object used for amblyopia training. The amblyopia training image is subjected to homography transformation according to the binocular deviation information, the image before transformation being taken as a first training image and the image after transformation as a second training image. The first training image is displayed in the display area corresponding to the dominant eye at the image contrast corresponding to the dominant eye, and the second training image is displayed in the display area corresponding to the amblyopic eye at the image contrast corresponding to the amblyopic eye. The dominant eye's gaze position in the first training image and the amblyopic eye's gaze position in the second training image are determined by eye tracking, yielding a first gaze position and a second gaze position. If the first gaze position coincides with the actual position of the target object in the first training image and the second gaze position coincides with the actual position of the target object in the second training image, the homography transformation amount applied to the amblyopia training image is reduced and the process is repeated, until the amblyopia training ends or the second gaze position coincides with the actual position of the target object in the second training image with no homography transformation applied.
A homography transformation maps one image to another and typically involves translation and/or rotation. Since the eyes are usually not very sensitive to rotation, the homography transformation in the embodiments of the present application mainly refers to translation. In addition, based on the binocular deviation calculation described above, the binocular deviation information can be split into deviation information in the horizontal direction and deviation information in the vertical direction, so the homography transformation of the amblyopia training image according to the binocular deviation information is implemented as follows: the horizontal deviation is taken as the horizontal translation amount and the vertical deviation as the vertical translation amount, and the amblyopia training image is then translated by these amounts, which realizes the homography transformation.
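A pure-translation homography is the identity matrix with the translation amounts in the last column. The sketch below is illustrative (function names and the sign convention for the deviations are assumptions):

```python
def translation_homography(dx, dy):
    """3x3 homography that is a pure translation, the main case here
    since the eyes are not very sensitive to rotation; dx and dy are
    the horizontal and vertical binocular deviation amounts."""
    return [[1.0, 0.0, dx],
            [0.0, 1.0, dy],
            [0.0, 0.0, 1.0]]

def apply_homography(H, x, y):
    """Map an image point through H using homogeneous coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```

For a pure translation `w` is always 1, but the division keeps `apply_homography` correct for general homographies as well.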
The subsequent reduction of the homography transformation amount may refer to reducing the horizontal translation amount and/or the vertical translation amount of the amblyopia training image; the reductions in the two directions may be the same or different and can be set as required.
For example, as shown in fig. 4, assume the user's left eye has strabismus amblyopia, i.e., the left eye squints while the right eye looks straight ahead, so there is a deviation between the line-of-sight directions of the two eyes. To train the user's left eye, the position of the target object in the amblyopia training image needs to be translated to the left. The image before translation is then taken as the first training image and the image after translation as the second training image; the first training image is displayed in the display area corresponding to the right eye at the image contrast corresponding to the right eye, and the second training image is displayed in the display area corresponding to the left eye at the image contrast corresponding to the left eye.
The right eye's gaze position in the first training image and the left eye's gaze position in the second training image are determined by eye tracking, yielding a first gaze position and a second gaze position. If the first gaze position coincides with the actual position of the target object in the first training image and the second gaze position coincides with the actual position of the target object in the second training image, the leftward translation of the target object in the amblyopia training image is reduced and the corresponding training images continue to be displayed in the two display areas. If the second gaze position coincides with the actual position of the target object in the second training image with no translation applied, it can be determined that the user's left eye has returned to emmetropia, as shown in fig. 5.
The determination method of the first gaze location and the second gaze location is similar to the above-mentioned determination method of the coordinates of the two gaze locations when determining the binocular deviation, and details thereof are omitted in the embodiments of the present application.
For strabismus amblyopia, because the line-of-sight direction of the amblyopic eye deviates from that of the dominant eye, the amblyopia training image needs homography transformation to adjust the position of the target object. The unadjusted image is then displayed in the display area corresponding to the dominant eye and the adjusted image in the display area corresponding to the amblyopic eye, so that the images seen by the user's two eyes can fuse during training, i.e., both eyes see the same picture. Moreover, because the two eyes gaze at their respective training images simultaneously, binocular stereoscopic vision is preserved.
In addition, as described above, the image contrast corresponding to the dominant eye and that corresponding to the amblyopic eye may differ. Displaying the first training image at the dominant eye's contrast and the second training image at the amblyopic eye's contrast improves the imaging capability of the amblyopic eye while maintaining the user's comfort, making the training method and process easier to accept, improving the training effect, and increasing user stickiness.
It should be noted that the whole amblyopia training process may run over multiple periods, and the image contrasts corresponding to the two eyes may change after each period. Therefore, in general, the image contrasts corresponding to the user's two eyes need to be redetermined before each training period; the specific implementation is as described above and is not repeated here.
The amblyopia type of the amblyopic eye is anisometropic amblyopia
In this case, the amblyopia training image contains a target object used for amblyopia training. The position and/or size of the target object in the amblyopia training image is adjusted multiple times to obtain a plurality of training images; the image scaling ratio of the amblyopic eye relative to the dominant eye is determined; and the sizes of the training images are scaled according to this ratio. The unscaled training images are taken as a plurality of third training images and the scaled ones as a plurality of fourth training images. The third training images are displayed in sequence in the display area corresponding to the dominant eye at the image contrast corresponding to the dominant eye, and the fourth training images are displayed in sequence in the display area corresponding to the amblyopic eye at the image contrast corresponding to the amblyopic eye, with the same display order and switching frequency for both sequences.
For anisometropic amblyopia, because the diopter of the amblyopic eye is higher than that of the dominant eye, the size of the image seen by the amblyopic eye may differ from that seen by the dominant eye. Therefore, after the position and/or size of the target object has been adjusted multiple times to obtain the training images, the image scaling ratio of the amblyopic eye relative to the dominant eye must be determined and the training images scaled accordingly, so that the images seen by the dominant eye and the amblyopic eye can fuse, i.e., both eyes see the same picture.
There are multiple ways to adjust the position and/or size of the target object in the amblyopia training image; four of them are described next.
In the first mode, the position of the target object in the amblyopia training image is adjusted sequentially in the order up, down, left, right, to obtain four training images. That is, across the four training images the target object is located, in turn, at the upper, lower, left, and right positions in the image.
For example, as shown in fig. 6, the position of the target object in the amblyopic training image is adjusted in the order of top, bottom, left, and right, and the obtained four training images are respectively training image 1 to training image 4. Thus, after the four training images are displayed in the display area in sequence, the user can see the images shown in fig. 7, that is, the target object is transformed according to the positions 1, 2, 3 and 4 in fig. 7.
In the second mode, the position of the target object in the amblyopia training image is adjusted in a rotating mode to obtain a plurality of training images.
For example, as shown in fig. 8, the position of the target object in the amblyopic training image is adjusted in a rotating manner, and eight training images are obtained, i.e., training image 1 to training image 8. Thus, after the eight training images are sequentially displayed in the display area, the user sees the images as shown in fig. 9, that is, the target object is transformed according to the positions 1, 2, 3, 4, 5, 6, 7, and 8 in fig. 9.
In the third mode, the position of the target object in the amblyopia training image is randomly adjusted to obtain a plurality of training images.
For example, as shown in fig. 10, nine training images, which are training image 1 to training image 9, are obtained after the position of the target object in the amblyopic training image is randomly adjusted. Thus, after the nine training images are sequentially displayed in the display area, the user sees the images as shown in fig. 11, that is, the target object is transformed according to the positions 1, 2, 3, 4, 5, 6, 7, 8, and 9 in fig. 11.
In the fourth mode, the size of the target object in the amblyopia training image is adjusted for multiple times to obtain multiple training images.
For example, as shown in fig. 12, the size of the target object in the amblyopic training image is adjusted four times to obtain four training images, which are training image 1 to training image 4. Thus, after the four training images are sequentially displayed in the display area, the user sees the images as shown in fig. 13, that is, the target object takes on a form from far to near.
The first three modes adjust the position of the target object, while the fourth adjusts its size. In practical applications, the first three modes may also adjust the size of the target object on top of the position adjustment, i.e., position and size adjustments may be superimposed; similarly, the fourth mode may also adjust the position on top of the size adjustment, which is not described in detail in the embodiments of the present application.
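The position-based modes can be sketched as generators of target positions; the function name, normalized coordinates, and default radius below are illustrative assumptions (the fourth, size-based mode would vary a scale factor instead of a position):

```python
import math
import random

def target_positions(mode, center=(0.5, 0.5), radius=0.3, n=8, seed=0):
    """Return (x, y) target positions in normalized image coordinates
    for the three position-based adjustment modes."""
    cx, cy = center
    if mode == "four_directions":   # mode 1: up, down, left, right
        return [(cx, cy - radius), (cx, cy + radius),
                (cx - radius, cy), (cx + radius, cy)]
    if mode == "rotation":          # mode 2: n points around a circle
        return [(cx + radius * math.cos(2 * math.pi * k / n),
                 cy + radius * math.sin(2 * math.pi * k / n))
                for k in range(n)]
    if mode == "random":            # mode 3: n random positions
        rng = random.Random(seed)
        return [(rng.uniform(cx - radius, cx + radius),
                 rng.uniform(cy - radius, cy + radius))
                for _ in range(n)]
    raise ValueError(f"unknown mode: {mode}")
```

Each returned position corresponds to one training image, displayed in sequence as in figs. 6 to 11.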
In the embodiments of the present application, moving the target up and down, left and right, rotationally, randomly, and from far to near both trains the extraocular muscles and improves the imaging capability of the amblyopic eye, thereby achieving the goal of training the amblyopic eye.
The image scaling ratio of the amblyopic eye relative to the dominant eye is determined as follows: a fifth test image is displayed in the display area corresponding to the dominant eye and a sixth test image in the display area corresponding to the amblyopic eye, the fifth test image containing a first test target and the sixth test image containing a second test target, the two targets initially having the same proportions. The displayed sixth test image is then scaled. When a proportion determination instruction is detected, the ratio between the first test target and the scaled second test target is determined to obtain the image scaling ratio. The proportion determination instruction is triggered when the user feeds back, based on the displayed fifth and sixth test images, that both eyes see the first and second test targets at the same proportions.
That is, the first test target in the fifth test image and the second test target in the sixth test image initially have the same proportions for the same eye. Then, while the dominant eye gazes at the fifth test image and the amblyopic eye at the sixth test image, the image scaling ratio is determined from the size of the first test target and the scaled size of the second test target at the moment the user feeds back that both eyes see the two targets at the same proportions.
As an example, the ratio between the height of the scaled second test target and the height of the unscaled first test target is taken as the vertical scaling ratio, and the ratio between their widths as the horizontal scaling ratio. The vertical and horizontal scaling ratios together constitute the image scaling ratio of the amblyopic eye relative to the dominant eye.
In this example, the sizes of the training images are scaled as follows: the height of each training image is multiplied by the vertical scaling ratio and its width by the horizontal scaling ratio, yielding the scaled training images.
As another example, the ratio between the height of the unscaled first test target and the height of the scaled second test target is taken as the vertical scaling ratio, and the ratio between their widths as the horizontal scaling ratio. These again constitute the image scaling ratio of the amblyopic eye relative to the dominant eye.
In this example, the sizes of the training images are scaled as follows: the height of each training image is divided by the vertical scaling ratio and its width by the horizontal scaling ratio, yielding the scaled training images.
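The two conventions above are reciprocal and produce the same scaled image. A sketch with illustrative names, using the fig. 14/15 measurements (targets of 2 cm, scaled second target of 3 cm):

```python
def scale_ratio_multiply(first_w, first_h, second_w, second_h):
    """First convention: scaled second target over unscaled first
    target; training images are then multiplied by these ratios."""
    return second_w / first_w, second_h / first_h

def scale_ratio_divide(first_w, first_h, second_w, second_h):
    """Second convention: unscaled first target over scaled second
    target; training images are then divided by these ratios."""
    return first_w / second_w, first_h / second_h

def scale_training_image(width, height, ratio, divide=False):
    """Apply the (horizontal, vertical) ratio to one training image."""
    sx, sy = ratio
    return (width / sx, height / sy) if divide else (width * sx, height * sy)
```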
It should be noted that although the image scaling ratio is determined here by scaling the sixth test image, it may also be determined by scaling the fifth test image. In addition, to facilitate binocular fusion, the backgrounds of the fifth and sixth test images may be the same.
For example, as shown in fig. 14, assume the left eye is the dominant eye and the right eye the amblyopic eye. A fifth test image containing a first test target 2 centimeters high and 2 centimeters wide is displayed in the left eye's display area, and a sixth test image containing a second test target 2 centimeters high and 2 centimeters wide is displayed in the right eye's display area. The sixth test image is then scaled. When the user sees the image shown in fig. 15, i.e., the outer contour of the second test target is inscribed in that of the first test target, it is determined that both eyes see the two targets at the same proportions. At this point the height and width of the scaled second test target are both 3 centimeters, so the image scaling ratio is determined to be 3:2. For the amblyopic eye, the sizes of the training images therefore need to be multiplied by this ratio for amblyopia training.
It should be noted that the embodiments of the present application can determine not only the image scaling ratio of the amblyopic eye relative to the dominant eye but also its rotation angle. However, since the eyes are usually not very sensitive to rotation, the embodiments mainly determine the image scaling ratio and then scale the images used for training the amblyopic eye.
In addition, when performing amblyopia training for the different amblyopia types, the diopter of the amblyopic eye may first be corrected, after which the training proceeds as described above.
After amblyopia training has been performed for the different amblyopia types, the training effect can be fed back so that the user can adjust the training plan accordingly. Two ways of feeding back the amblyopia training effect are described next.
In the first mode, the amblyopic eye's gaze position in the image displayed in its display area is determined by eye tracking. If this gaze position coincides with the actual position of the target object in that image, the display mode of the target object is changed to a reference display mode to indicate the training effect.
The reference display mode may be highlighting, a color change, and the like, which is not limited in the embodiments of the present application.
For example, as shown in fig. 16, if the gaze position of the amblyopic eye coincides with the actual position of the target object, the color of the target object is changed. After several training sessions, the user thus knows how many times the gaze position coincided with the actual position, can assess the training effect, and can conveniently adjust the training plan at any time.
In the second mode, the amblyopic eye's gaze position in the image displayed in its display area is determined by eye tracking, and an amblyopia training curve is drawn from this gaze position and the actual position of the target object to indicate the training effect.
As an example, the total number of training attempts in the current amblyopia training period is counted, along with the percentage of attempts in which the amblyopic eye's gaze position coincided with the actual position of the target object. The amblyopia training curve is then drawn by combining this with data from several historical training periods: the training periods form the horizontal axis and the percentage determined in each period forms the vertical axis.
For example, as shown in fig. 17, the amblyopia training apparatus performs amblyopia training in 7 amblyopia training periods, and the percentage of the number of times that the amblyopic gaze position coincides with the actual position of the target object to the total number of times of training in the 7 amblyopia training periods is 50%, 60%, 70%, 80%, 90%, 95%, 100%, respectively. At this time, the amblyopia training curve is plotted as shown in fig. 17.
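The vertical-axis values of such a curve reduce to a per-period percentage; a minimal sketch (the data layout is an assumption):

```python
def coincidence_percentages(period_stats):
    """period_stats: (coincidence count, total attempts) per training
    period; returns the vertical-axis values of the training curve."""
    return [100.0 * hits / total for hits, total in period_stats]
```

With per-period counts such as 5/10, 6/10, 7/10, this yields the 50%, 60%, 70% points of a curve like fig. 17.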
The two feedback modes may be used alone or in combination, which is not limited in the embodiments of the present application. In practical applications, the training effect can also be fed back in other ways. For example, the diopter of the amblyopic eye may be measured after each training period, and the training curve drawn with the training periods as the horizontal axis and the diopter as the vertical axis. Referring to fig. 18, the amblyopia training device performs amblyopia training over 11 training periods, in which the diopters of the amblyopic eye are 700, 650, 600, 500, 450, 400, 350, 300, and 275, respectively; the resulting amblyopia training curve is shown in fig. 18.
The amblyopia training effect may be fed back either through the amblyopia training device or through the terminal device. That is, after the amblyopia training device determines the training effect, it can feed it back to the user directly, or it can send the effect to the terminal device, which then feeds it back to the user.
The above implementation feeds back the training effect by determining the gaze position of the amblyopic eye. In other embodiments, the gaze position of the dominant eye may be determined instead for the same purpose; the method is the same as for the amblyopic eye and is not repeated in the embodiments of the present application.
In the embodiments of the present application, amblyopia training is performed by determining the amblyopia type of the amblyopic eye and displaying the amblyopia training image after processing it according to that amblyopia type. That is, by differentiating the type of amblyopia, targeted training can be performed for different symptoms, improving the amblyopia training effect. In addition, during amblyopia training, the amblyopia training images are displayed in the display areas corresponding to the two eyes, so that the images seen by the two eyes can be fused, improving binocular stereoscopic vision. Moreover, after amblyopia training is performed, the amblyopia training effect can be fed back, so that the user can adjust the amblyopia training plan in a targeted manner, which improves user engagement. Finally, amblyopia detection, training, and effect feedback are integrated into the amblyopia training device provided in the embodiments of the present application, so that the user can perform targeted training more flexibly, improving both training efficiency and the training effect.
Fig. 19 is a schematic structural diagram of an amblyopia training device provided in an embodiment of the present application, which may be implemented by software, hardware, or a combination of the two to be part or all of an amblyopia training apparatus, which may be the amblyopia training apparatus shown in fig. 1. Referring to fig. 19, the apparatus includes: a first determination module 1901, a second determination module 1902, a first amblyopia training module 1903, and a second amblyopia training module 1904.
A first determining module 1901 for determining a dominant eye and an amblyopic eye of a user;
a second determining module 1902 for determining a type of amblyopia of the amblyopic eye, the type of amblyopia comprising strabismus amblyopia or anisometropic amblyopia;
a first amblyopia training module 1903, configured to perform amblyopia training by performing homography transformation processing on the amblyopia training image when the amblyopia type of the amblyopic eye is determined as strabismus amblyopia;
a second amblyopia training module 1904, configured to perform amblyopia training by performing image resizing processing on the amblyopia training image when the amblyopia type of the amblyopic eye is determined as anisometropic amblyopia.
Optionally, the second determining module 1902 includes:
the display submodule is used for displaying a first test image in a display area corresponding to the dominant eye and displaying a second test image in a display area corresponding to the amblyopia eye;
the first determining submodule is used for determining the coordinates of the gaze position of the dominant eye in the displayed first test image and the coordinates of the gaze position of the amblyopia eye in the displayed second test image in an eye movement tracking mode to obtain a first gaze position coordinate and a second gaze position coordinate;
and the second determining submodule is used for determining the amblyopia type of the amblyopia eye according to the first gaze position coordinate and the second gaze position coordinate.
Optionally, the second determining submodule is specifically configured to:
determining binocular deviation information according to the first gaze position coordinate and the second gaze position coordinate, wherein the binocular deviation information is deviation information between the sight line direction of the amblyopic eye and the sight line direction of the dominant eye;
if the binocular deviation information is greater than or equal to a first threshold value, determining that the amblyopia type of the amblyopia eye is strabismus amblyopia;
and if the binocular deviation information is less than the first threshold, determining that the amblyopia type of the amblyopic eye is anisometropic amblyopia.
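The threshold test above can be illustrated with a short sketch. The Euclidean distance used here as the binocular deviation measure is an assumption for illustration; the patent only states that a deviation value is compared against a first threshold.

```python
import math

def classify_amblyopia(first_gaze, second_gaze, first_threshold):
    """Classify the amblyopia type from the two gaze position coordinates.
    first_gaze: (x, y) of the dominant eye on the first test image;
    second_gaze: (x, y) of the amblyopic eye on the second test image.
    The scalar deviation measure (Euclidean distance) is an assumption."""
    deviation = math.hypot(second_gaze[0] - first_gaze[0],
                           second_gaze[1] - first_gaze[1])
    if deviation >= first_threshold:
        return "strabismus amblyopia"
    return "anisometropic amblyopia"

print(classify_amblyopia((100, 100), (103, 104), 5))  # deviation 5.0 -> strabismus amblyopia
```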
Optionally, the first determining module is specifically configured to:
detecting diopters of both eyes of the user;
the eye with the lower diopter of the user's two eyes is determined as the dominant eye, and the eye with the higher diopter is determined as the amblyopic eye.
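The diopter comparison can be sketched as follows; a minimal illustration in which the side labels and the tie-breaking choice are assumptions.

```python
def determine_eyes(diopter_left, diopter_right):
    """Lower diopter -> dominant eye; higher diopter -> amblyopic eye.
    Returns (dominant_eye, amblyopic_eye) as side labels. When the two
    diopters are equal, the left eye is arbitrarily treated as dominant."""
    if diopter_left <= diopter_right:
        return "left", "right"
    return "right", "left"

print(determine_eyes(200, 500))  # ('left', 'right')
```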
Optionally, the first amblyopia training module 1903 comprises:
and the first amblyopia training submodule, configured to display an amblyopia training image in the display area corresponding to the dominant eye, perform homography transformation processing on the amblyopia training image, and display the processed image in the display area corresponding to the amblyopic eye, so that the images seen by the user's two eyes can be fused for amblyopia training.
Optionally, the second amblyopia training module 1904 comprises:
and the second amblyopia training submodule, configured to display the amblyopia training image in the display area corresponding to the dominant eye, perform image resizing processing on the amblyopia training image, and display the processed image in the display area corresponding to the amblyopic eye, so that the images seen by the user's two eyes can be fused for amblyopia training.
Optionally, the apparatus further comprises:
and the third determining module is used for determining the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopia eye so as to enable the two eyes of the user to have the same perception capability.
Optionally, the third determining module is specifically configured to:
displaying a third test image in a display area corresponding to the dominant eye and displaying a fourth test image in a display area corresponding to the amblyopia eye;
reducing the contrast of the displayed third test image and increasing the contrast of the displayed fourth test image;
and the third determining submodule, configured to, when a contrast determining instruction is detected, determine the contrast of the reduced third test image as the image contrast corresponding to the dominant eye and determine the contrast of the increased fourth test image as the image contrast corresponding to the amblyopic eye, wherein the contrast determining instruction is triggered when the user feeds back, according to the displayed third test image and fourth test image, that the contrasts perceived by the user's two eyes are the same.
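The opposite-direction contrast adjustment can be sketched as an interactive loop. The step size, the normalized contrast scale, and the `user_confirms` callback (standing in for the contrast determining instruction) are all assumptions for illustration.

```python
def adjust_contrasts(user_confirms, step=0.05, max_steps=20):
    """Lower the dominant eye's image contrast and raise the amblyopic
    eye's until the user confirms both eyes perceive the same contrast."""
    c_dominant, c_amblyopic = 1.0, 1.0
    for _ in range(max_steps):
        if user_confirms(c_dominant, c_amblyopic):
            break
        c_dominant = max(0.0, c_dominant - step)
        c_amblyopic += step
    return c_dominant, c_amblyopic

# Simulated user who perceives equal contrast once the gap reaches 0.2.
cd, ca = adjust_contrasts(lambda cd, ca: ca - cd >= 0.2)
print(round(cd, 2), round(ca, 2))  # 0.9 1.1
```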
Optionally, the amblyopia training image includes a target object, and the target object is used for performing amblyopia training;
the first amblyopia training submodule is specifically configured to:
performing homography transformation on the amblyopia training image according to binocular deviation information, wherein the binocular deviation information is deviation information between the sight line direction of the amblyopic eye and the sight line direction of the dominant eye;
taking the amblyopia training image before homography transformation as a first training image, taking the amblyopia training image after homography transformation as a second training image, displaying the first training image in the display area corresponding to the dominant eye according to the image contrast corresponding to the dominant eye, and displaying the second training image in the display area corresponding to the amblyopia eye according to the image contrast corresponding to the amblyopia eye;
determining the gaze position of the dominant eye in the first training image and the gaze position of the amblyopic eye in the second training image in an eye movement tracking mode to obtain a first gaze position and a second gaze position;
and if the first gaze position coincides with the actual position of the target object in the first training image and the second gaze position coincides with the actual position of the target object in the second training image, reducing the homography transformation amount applied to the amblyopia training image, and returning to the step of taking the amblyopia training image before homography transformation as the first training image and the amblyopia training image after homography transformation as the second training image, until the amblyopia training ends or the second gaze position coincides with the actual position of the target object in the second training image with no homography transformation applied to the amblyopia training image.
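The homography step and the shrinking schedule above can be sketched as follows. The pure-translation homography and the halving shrink factor are assumptions for illustration; the patent only requires that the transformation amount be reduced once both gaze positions coincide with the target.

```python
def apply_homography(H, point):
    """Map an image point (x, y) through a 3x3 homography matrix H."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

def translation_homography(dx, dy):
    """Simplest homography compensating a gaze-direction offset: a shift."""
    return [[1.0, 0.0, dx],
            [0.0, 1.0, dy],
            [0.0, 0.0, 1.0]]

def transform_schedule(dx, dy, shrink=0.5, eps=1.0):
    """Successively reduced transformation amounts, ending once the
    remaining offset is negligible (no homography applied)."""
    amounts = []
    while abs(dx) > eps or abs(dy) > eps:
        amounts.append((dx, dy))
        dx *= shrink
        dy *= shrink
    return amounts

print(apply_homography(translation_homography(3.0, 4.0), (1.0, 2.0)))  # (4.0, 6.0)
print(transform_schedule(8.0, 0.0))  # [(8.0, 0.0), (4.0, 0.0), (2.0, 0.0)]
```

A real implementation would warp the whole image rather than single points, for example with an image-processing library.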
Optionally, the amblyopia training image includes a target object, and the target object is used for performing amblyopia training;
the second amblyopia training submodule is specifically configured to:
if the amblyopia type of the amblyopia eye is anisometropic amblyopia, adjusting the position and/or the size of a target object in the amblyopia training image for multiple times to obtain multiple training images;
determining an image scaling of the amblyopic eye relative to the dominant eye;
scaling the sizes of the plurality of training images according to the image scaling;
taking the plurality of training images before scaling as a plurality of third training images and the plurality of training images after scaling as a plurality of fourth training images; sequentially displaying the plurality of third training images in the display area corresponding to the dominant eye according to the image contrast corresponding to the dominant eye, and sequentially displaying the plurality of fourth training images in the display area corresponding to the amblyopic eye according to the image contrast corresponding to the amblyopic eye, with the display order and switching frequency of the third training images and the fourth training images kept the same.
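The image-size adjustment that produces the fourth training images can be illustrated with a minimal nearest-neighbour resize. The 2-D-list image representation is an assumption; a real implementation would use a proper image library.

```python
def resize_nearest(img, scale):
    """Nearest-neighbour resize of a grayscale image stored as a 2-D list,
    standing in for the scaling applied to the amblyopic eye's images."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    return [[img[min(h - 1, int(r / scale))][min(w - 1, int(c / scale))]
             for c in range(nw)]
            for r in range(nh)]

third = [[1, 2],
         [3, 4]]
fourth = resize_nearest(third, 2)  # scaled image shown to the amblyopic eye
print(fourth)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```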
Optionally, the second amblyopia training submodule is further configured to:
displaying a fifth test image in a display area corresponding to the dominant eye, and displaying a sixth test image in a display area corresponding to the amblyopia eye, wherein the fifth test image comprises a first test target, the sixth test image comprises a second test target, and the first test target and the second test target have the same proportion;
scaling the displayed sixth test image;
and when a proportion determining instruction is detected, determining the proportion between the first test target and the scaled second test target to obtain the image scaling ratio, wherein the proportion determining instruction is triggered when the user feeds back, according to the displayed fifth test image and sixth test image, that the first test target and the second test target seen by the user's two eyes have the same proportion.
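The proportion-determining step can be sketched as a search over candidate scales. The binary search and the three-way `looks_equal` feedback callback (standing in for the user's proportion determining instruction) are assumptions for illustration.

```python
def find_image_scale(looks_equal, lo=0.5, hi=2.0, tol=0.01):
    """Binary-search for the image scaling ratio at which the user reports
    the two test targets look the same size. looks_equal(s) returns
    -1 (second target still looks smaller), 0 (equal), or +1 (larger)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        feedback = looks_equal(mid)
        if feedback == 0:
            return mid
        if feedback < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Simulated user whose eyes perceive equality at a ratio of 1.25.
scale = find_image_scale(lambda s: (s > 1.25) - (s < 1.25))
print(scale)  # 1.25
```

A simple stepwise scan, as in the sixth-test-image description above, would work equally well; binary search just converges in fewer user interactions.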
Optionally, the apparatus further comprises:
the fourth determining module, configured to determine, by means of eye movement tracking, the gaze position of the amblyopic eye in the image displayed in the display area corresponding to the amblyopic eye, to obtain the amblyopic eye gaze position;
and the training effect indicating module, configured to: if the amblyopic eye gaze position coincides with the actual position of the target object in the image displayed in the display area corresponding to the amblyopic eye, modify the display mode of the target object in that image to a reference display mode to indicate the amblyopia training effect, and/or draw an amblyopia training curve according to the amblyopic eye gaze position and the actual position of the target object to indicate the amblyopia training effect.
In the embodiments of the present application, amblyopia training is performed by determining the amblyopia type of the amblyopic eye and displaying the amblyopia training image after processing it according to that amblyopia type. That is, by differentiating the type of amblyopia, targeted training can be performed for different symptoms, improving the amblyopia training effect. In addition, during amblyopia training, the amblyopia training images are displayed in the display areas corresponding to the two eyes, so that the images seen by the two eyes can be fused, improving binocular stereoscopic vision.
It should be noted that: in the amblyopia training device provided in the above embodiment, only the division of the above functional modules is exemplified when performing amblyopia training, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the amblyopia training device provided by the above embodiment and the amblyopia training method embodiment belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment and will not be described herein again.
Referring to fig. 20, fig. 20 is a schematic structural diagram of an amblyopia training device according to an embodiment of the present application. The amblyopia training device comprises at least one processor 2001, a communication bus 2002, a memory 2003 and at least one communication interface 2004.
The processor 2001 may be a general-purpose Central Processing Unit (CPU), a Network Processor (NP), a microprocessor, or one or more integrated circuits such as an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof, for implementing the disclosed aspects. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
A communication bus 2002 is used to transfer information between the above components. The communication bus 2002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The memory 2003 may be, but is not limited to, a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), an optical disc storage (including a compact disc read-only memory (CD-ROM), a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 2003 may be separate and coupled to the processor 2001 via the communication bus 2002. The memory 2003 may also be integrated with the processor 2001.
The communication interface 2004 uses any transceiver or the like for communicating with other devices or a communication network. The communication interface 2004 includes a wired communication interface, and may also include a wireless communication interface. The wired communication interface may be an ethernet interface, for example. The ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communication interface may be a Wireless Local Area Network (WLAN) interface, a cellular network communication interface, a combination thereof, or the like.
In a specific implementation, as an embodiment, the processor 2001 may include one or more CPUs, such as CPU0 and CPU1 shown in fig. 20.
In a particular implementation, as an embodiment, the amblyopia training device may include a plurality of processors, such as processor 2001 and processor 2005 shown in fig. 20. Each of these processors may be a single core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In particular implementations, as an embodiment, the amblyopia training device may further include an output device 2006 and an input device 2007. The output device 2006 is in communication with the processor 2001 and may display information in a variety of ways. For example, the output device 2006 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 2007 communicates with the processor 2001 and may receive user input in a variety of ways. For example, the input device 2007 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
In some embodiments, the memory 2003 is used to store program code 2010 for performing aspects of the present application, and the processor 2001 may execute the program code 2010 stored in the memory 2003. The program code 2010 may include one or more software modules, and the amblyopia training device may implement the method provided by the above embodiments by the processor 2001 and the program code 2010 in the memory 2003.
Referring to fig. 21, fig. 21 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device includes a sensor unit 1110, a calculation unit 1120, a storage unit 1140 and an interaction unit 1130.
A sensor unit 1110, typically including a vision sensor (e.g., camera), a depth sensor, an IMU, a laser sensor, etc.;
a computing unit 1120, which generally includes a CPU, a GPU, a cache, a register, and the like, and is mainly used for running an operating system;
a storage unit 1140, which mainly includes a memory and an external storage, and is mainly used for reading and writing local and temporary data of a user, and the like;
the interaction unit 1130 mainly includes a display screen, a touch panel, a speaker, a microphone, and the like, and is mainly used for interacting with a user, acquiring input information, and implementing a presentation algorithm effect. For example, the amblyopia training image may be displayed and projected onto the amblyopia training device.
For ease of understanding, the structure of a terminal device 100 provided in the embodiments of the present application will be described below by way of example. Referring to fig. 22, fig. 22 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
As shown in fig. 22, the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the terminal device 100. In other embodiments of the present application, terminal device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The processor 110 may execute a computer program to implement any of the amblyopia training methods in the embodiments of the present application.
The controller may be a neural center and a command center of the terminal device 100, among others. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory, avoiding repeated accesses, reducing the latency of the processor 110, and thus increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the terminal device 100. In other embodiments of the present application, the terminal device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
In some possible embodiments, the terminal device 100 may communicate with other devices using wireless communication capabilities. For example, the terminal device 100 may communicate with a second electronic device, the terminal device 100 establishes a screen-casting connection with the second electronic device, the terminal device 100 outputs screen-casting data to the second electronic device, and so on. The screen projection data output by the terminal device 100 may be audio and video data.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G and the like applied to the terminal device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert it into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the terminal device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the terminal device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the terminal device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The terminal device 100 implements a display function by the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In some possible implementations, the display screen 194 may be used to display various interfaces of the system output of the terminal device 100.
The terminal device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera's photosensitive element through the lens, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals.
Video codecs are used to compress or decompress digital video. The terminal device 100 may support one or more video codecs, so that the terminal device 100 can play or record video in a plurality of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor that rapidly processes input information by drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, and can also continuously learn by itself. Applications such as intelligent cognition of the terminal device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements various functional applications and data processing of the terminal device 100 by executing the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as the amblyopia training method in the embodiments of the present application), and the like. The data storage area may store data (such as audio data and a phonebook) created during use of the terminal device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The terminal device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. In some possible implementations, the audio module 170 may be used to play the sound corresponding to a video. For example, when the display screen 194 displays a video playing screen, the audio module 170 outputs the sound of the video.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal.
The microphone 170C, also referred to as a "mic," is used to convert sound signals into electrical signals.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The gyro sensor 180B may be used to determine the motion attitude of the terminal device 100. The air pressure sensor 180C is used to measure air pressure.
The acceleration sensor 180E can detect the magnitude of acceleration of the terminal device 100 in various directions (along three or six axes), and can detect the magnitude and direction of gravity when the terminal device 100 is stationary. It can also be used to recognize the posture of the terminal device, in applications such as landscape/portrait switching and pedometers.
A distance sensor 180F for measuring a distance.
The ambient light sensor 180L is used to sense the ambient light level.
The fingerprint sensor 180H is used to collect a fingerprint.
The temperature sensor 180J is used to detect temperature.
The touch sensor 180K is also referred to as a "touch panel." The touch sensor 180K may be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation performed on or near it, and may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on the surface of the terminal device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The terminal device 100 may receive key input and generate key signal input related to user settings and function control of the terminal device 100.
The motor 191 may generate a vibration cue.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, the embodiments may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid-state drive (SSD)). It is noted that the computer-readable storage medium referred to in the embodiments of the present application may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that reference herein to "a plurality" means two or more. In the description of the embodiments of the present application, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that only A exists, both A and B exist, or only B exists. In addition, to clearly describe the technical solutions of the embodiments of the present application, terms such as "first" and "second" are used to distinguish between identical or similar items whose functions and effects are substantially the same. Those skilled in the art will appreciate that such terms do not limit quantity or execution order, and do not indicate a difference in importance.
The above-mentioned embodiments are provided not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (26)

1. An amblyopia training method, which is applied to an amblyopia training device, the method comprising:
determining dominant and amblyopic eyes of a user;
determining a type of amblyopia of the amblyopic eye, the type of amblyopia comprising strabismus amblyopia or anisometropic amblyopia;
performing homography transformation processing on the amblyopia training image to perform amblyopia training when the amblyopia type of the amblyopia eye is determined as strabismus amblyopia;
performing an image resizing process on the amblyopia training image to perform amblyopia training when the amblyopia type of the amblyopic eye is determined as anisometropic amblyopia.
2. The method of claim 1, wherein said determining the type of amblyopia of said amblyopic eye comprises:
displaying a first test image in a display area corresponding to the dominant eye and displaying a second test image in a display area corresponding to the amblyopia eye;
determining the coordinates of the gaze position of the dominant eye in the displayed first test image and the coordinates of the gaze position of the amblyopic eye in the displayed second test image in an eye movement tracking manner to obtain a first gaze position coordinate and a second gaze position coordinate;
and determining the amblyopia type of the amblyopia eye according to the first gaze position coordinate and the second gaze position coordinate.
3. The method of claim 2, wherein said determining a type of amblyopia for the amblyopic eye from the first gaze location coordinate and the second gaze location coordinate comprises:
determining binocular deviation information according to the first gaze position coordinate and the second gaze position coordinate, wherein the binocular deviation information is deviation information between the line-of-sight direction of the amblyopia eye and the line-of-sight direction of the dominant eye;
if the binocular deviation information is greater than or equal to a first threshold value, determining that the amblyopia type of the amblyopia eye is strabismus amblyopia;
and if the binocular deviation information is smaller than the first threshold value, determining that the amblyopia type of the amblyopia eye is anisometropic amblyopia.
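The threshold test of claim 3 can be sketched as follows. Using the Euclidean distance between the two gaze-position coordinates as the binocular deviation information is an assumption; the claim leaves the exact deviation metric open.

```python
import math

def classify_amblyopia(first_gaze, second_gaze, first_threshold):
    """Classify the amblyopia type from the dominant eye's and the
    amblyopic eye's gaze-position coordinates (x, y). The Euclidean
    distance standing in for the binocular deviation information is
    an illustrative assumption."""
    deviation = math.hypot(second_gaze[0] - first_gaze[0],
                           second_gaze[1] - first_gaze[1])
    if deviation >= first_threshold:
        return "strabismus amblyopia"
    return "anisometropic amblyopia"
```

A deviation at or above the first threshold indicates strabismus amblyopia; anything below it indicates anisometropic amblyopia.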
4. The method of any of claims 1-3, wherein the determining the dominant and amblyopic eyes of the user comprises:
detecting diopters of both eyes of the user;
and determining the eye with the lower diopter of the user's two eyes as the dominant eye, and the eye with the higher diopter as the amblyopia eye.
5. The method of any one of claims 1-4, wherein performing the amblyopia training by performing the homography transformation process on the amblyopia training images comprises:
and displaying the amblyopia training image in the display area corresponding to the dominant eye, performing homography transformation processing on the amblyopia training image, and displaying the processed amblyopia training image in the display area corresponding to the amblyopia eye, so that the images seen by the user's two eyes can be combined to perform amblyopia training.
6. The method of any of claims 1-4, wherein the performing image resizing processing on the amblyopia training image to perform amblyopia training comprises:
and displaying the amblyopia training image in a display area corresponding to the dominant eye, performing image size adjustment processing on the amblyopia training image, and displaying the amblyopia training image in the display area corresponding to the amblyopia eye, so that images seen by the two eyes of the user can be combined to perform amblyopia training.
7. The method of claim 5 or 6, wherein prior to displaying the amblyopic training image in the display region corresponding to the dominant eye, the method further comprises:
and determining the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopia eye so as to enable the two eyes of the user to have the same perception capability.
8. The method of claim 7, wherein said determining the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopic eye comprises:
displaying a third test image in a display area corresponding to the dominant eye and displaying a fourth test image in a display area corresponding to the amblyopia eye;
decreasing the contrast of the displayed third test image and increasing the contrast of the displayed fourth test image;
when a contrast determination instruction is detected, determining the contrast of the reduced third test image as the image contrast corresponding to the dominant eye, and determining the contrast of the increased fourth test image as the image contrast corresponding to the amblyopia eye, wherein the contrast determination instruction is triggered by feedback from the user indicating that the user's two eyes perceive the same contrast in the displayed third test image and fourth test image.
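The adjustment loop of claim 8 can be sketched as a simple staircase procedure. `perceives_equal` is a hypothetical callback standing in for the user's feedback, and the starting contrasts, step size, and limits are illustrative assumptions not taken from the claim.

```python
def balance_contrast(perceives_equal, c_dominant=1.0, c_amblyopic=0.5,
                     step=0.05, floor=0.1, ceiling=1.0):
    """Decrease the dominant eye's image contrast and increase the
    amblyopic eye's until the user reports equal perception."""
    while not perceives_equal(c_dominant, c_amblyopic):
        c_dominant = max(floor, c_dominant - step)
        c_amblyopic = min(ceiling, c_amblyopic + step)
        if c_dominant == floor and c_amblyopic == ceiling:
            break  # adjustment range exhausted without a match
    return c_dominant, c_amblyopic
```

With symmetric steps the two contrasts meet in the middle; the loop also terminates if the full adjustment range is exhausted without the user ever reporting equal perception.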
9. The method of claim 7, wherein the amblyopia training image includes a target object, the target object being used for amblyopia training;
the displaying the amblyopia training image in the display area corresponding to the dominant eye, performing homography conversion processing on the amblyopia training image, and displaying the amblyopia training image in the display area corresponding to the amblyopia eye includes:
performing homography transformation on the amblyopia training image according to binocular deviation information, wherein the binocular deviation information is deviation information between the sight direction of the amblyopia eye and the sight direction of the dominant eye;
taking the amblyopia training image before homography transformation as a first training image, taking the amblyopia training image after homography transformation as a second training image, displaying the first training image in the display area corresponding to the dominant eye according to the image contrast corresponding to the dominant eye, and displaying the second training image in the display area corresponding to the amblyopia eye according to the image contrast corresponding to the amblyopia eye;
determining the gaze position of the dominant eye in the first training image and the gaze position of the amblyopic eye in the second training image in an eye movement tracking mode to obtain a first gaze position and a second gaze position;
and if the first gaze position coincides with the actual position of the target object in the first training image and the second gaze position coincides with the actual position of the target object in the second training image, reducing the homography transformation amount of the amblyopia training image and returning to the step of taking the amblyopia training image before homography transformation as the first training image and the amblyopia training image after homography transformation as the second training image, until the amblyopia training ends or until the second gaze position coincides with the actual position of the target object in the second training image when no homography transformation is applied to the amblyopia training image.
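The homography transformation and its gradual reduction in claim 9 can be sketched with a 3x3 matrix acting on pixel coordinates. Blending the matrix toward the identity as a way of "reducing the homography transformation amount" is an illustrative assumption; the claim does not fix a reduction rule.

```python
import numpy as np

def apply_homography(points, h):
    """Map an Nx2 array of pixel coordinates through a 3x3 homography h."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ h.T
    # Divide by the homogeneous coordinate to return to pixel space.
    return mapped[:, :2] / mapped[:, 2:3]

def reduce_homography(h, factor=0.9):
    """Shrink the transformation toward the identity matrix; at factor 0
    no transformation remains, matching the claim's end condition of an
    untransformed training image."""
    return factor * h + (1.0 - factor) * np.eye(3)
```

For a pure translation homography, halving the blend factor halves the displacement of every mapped point, so repeated reduction converges to the untransformed image.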
10. The method of claim 7, wherein the amblyopia training image includes a target object, the target object being used for amblyopia training;
the displaying the amblyopia training image in the display area corresponding to the dominant eye, performing image size adjustment processing on the amblyopia training image, and displaying the amblyopia training image in the display area corresponding to the amblyopia eye includes:
adjusting the position and/or size of the target object in the amblyopia training image for multiple times to obtain multiple training images;
determining an image scale of the amblyopic eye relative to the dominant eye;
scaling the sizes of the plurality of training images according to the image scaling;
the plurality of training images before scaling are used as a plurality of third training images, and the plurality of training images after scaling are used as a plurality of fourth training images; the plurality of third training images are sequentially displayed in the display area corresponding to the dominant eye according to the image contrast corresponding to the dominant eye, and the plurality of fourth training images are sequentially displayed in the display area corresponding to the amblyopia eye according to the image contrast corresponding to the amblyopia eye, wherein the display sequence and the switching frequency of the plurality of third training images and the plurality of fourth training images are the same.
11. The method of claim 10, wherein said determining an image scale of said amblyopic eye relative to said dominant eye comprises:
displaying a fifth test image in a display area corresponding to the dominant eye, and displaying a sixth test image in a display area corresponding to the amblyopia eye, wherein the fifth test image comprises a first test target, the sixth test image comprises a second test target, and the first test target and the second test target have the same proportion;
scaling the displayed sixth test image;
when a scale determination instruction is detected, determining the ratio between the first test target and the scaled second test target to obtain the image scaling ratio, wherein the scale determination instruction is triggered by feedback from the user indicating that the user's two eyes see the first test target and the second test target at the same scale in the displayed fifth test image and sixth test image.
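The scaling step of claim 11 can be sketched as another feedback loop. `sees_equal_size` is a hypothetical callback standing in for the user's feedback, and the step size and search direction (enlarging the amblyopic eye's image) are illustrative assumptions.

```python
def find_image_scale(sees_equal_size, start=1.0, step=0.05, max_scale=2.0):
    """Scale the test image shown to the amblyopic eye until the user
    reports that both test targets look the same size, and return the
    resulting image scaling ratio."""
    scale = start
    while not sees_equal_size(scale) and scale < max_scale:
        scale += step
    return scale
```

The returned ratio is then applied to every training image shown to the amblyopic eye, as described in claim 10.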
12. The method of any of claims 9-11, wherein the method further comprises:
determining the gaze position of the amblyopia eye in the image displayed in the display area corresponding to the amblyopia eye in an eye movement tracking mode to obtain the gaze position of the amblyopia eye;
and if the amblyopia eye gaze position coincides with the actual position of the target object in the image displayed in the display area corresponding to the amblyopia eye, modifying the display mode of the target object in the image displayed in the display area corresponding to the amblyopia eye into a reference display mode to indicate an amblyopia training effect, and/or drawing an amblyopia training curve according to the amblyopia eye gaze position and the actual position of the target object to indicate the amblyopia training effect.
13. An amblyopia training device, which is applied to amblyopia training equipment, the device comprises:
the first determining module is used for determining the dominant eye and the amblyopia eye of the user;
a second determination module for determining a type of amblyopia of the amblyopic eye, the type of amblyopia comprising strabismus amblyopia or anisometropic amblyopia;
the first amblyopia training module is used for performing homography transformation processing on an amblyopia training image to perform amblyopia training when the amblyopia type of the amblyopia eye is determined as strabismus amblyopia;
and the second amblyopia training module is used for performing amblyopia training by performing image size adjustment processing on the amblyopia training image when the amblyopia type of the amblyopia eye is determined as the anisometropic amblyopia.
14. The apparatus of claim 13, wherein the second determining module comprises:
the display submodule is used for displaying a first test image in a display area corresponding to the dominant eye and displaying a second test image in a display area corresponding to the amblyopia eye;
the first determining submodule is used for determining the coordinates of the gaze position of the dominant eye in the displayed first test image and the coordinates of the gaze position of the amblyopic eye in the displayed second test image in an eye movement tracking mode to obtain a first gaze position coordinate and a second gaze position coordinate;
and the second determining submodule is used for determining the amblyopia type of the amblyopia eye according to the first gaze position coordinate and the second gaze position coordinate.
15. The apparatus of claim 14, wherein the second determination submodule is specifically configured to:
determining binocular deviation information according to the first gaze position coordinate and the second gaze position coordinate, wherein the binocular deviation information is deviation information between the line-of-sight direction of the amblyopia eye and the line-of-sight direction of the dominant eye;
if the binocular deviation information is greater than or equal to a first threshold value, determining that the amblyopia type of the amblyopia eye is strabismus amblyopia;
and if the binocular deviation information is smaller than the first threshold value, determining that the amblyopia type of the amblyopia eye is anisometropic amblyopia.
16. The apparatus of any one of claims 13-15, wherein the first determining module is specifically configured to:
detecting diopters of both eyes of the user;
and determining the eye with the lower diopter of the user's two eyes as the dominant eye, and the eye with the higher diopter as the amblyopia eye.
17. The apparatus of any of claims 13-16, wherein the first amblyopia training module comprises:
and the first amblyopia training submodule is used for displaying the amblyopia training image in the display area corresponding to the dominant eye, performing homography transformation processing on the amblyopia training image, and displaying the processed amblyopia training image in the display area corresponding to the amblyopia eye, so that the images seen by the user's two eyes can be combined to perform amblyopia training.
18. The apparatus of any of claims 13-16, wherein the second amblyopia training module comprises:
and the second amblyopia training submodule is used for displaying the amblyopia training image in the display area corresponding to the dominant eye, performing image size adjustment processing on the amblyopia training image, and displaying the processed amblyopia training image in the display area corresponding to the amblyopia eye, so that the images seen by the user's two eyes can be combined to perform amblyopia training.
19. The apparatus of claim 17 or 18, wherein the apparatus further comprises:
and the third determining module is used for determining the image contrast corresponding to the dominant eye and the image contrast corresponding to the amblyopia eye so as to enable the two eyes of the user to have the same perception capability.
20. The apparatus of claim 19, wherein the third determination module is specifically configured to:
displaying a third test image in a display area corresponding to the dominant eye and displaying a fourth test image in a display area corresponding to the amblyopia eye;
decreasing the contrast of the displayed third test image and increasing the contrast of the displayed fourth test image;
when a contrast determination instruction is detected, determining the contrast of the reduced third test image as the image contrast corresponding to the dominant eye, and determining the contrast of the increased fourth test image as the image contrast corresponding to the amblyopia eye, wherein the contrast determination instruction is triggered by feedback from the user indicating that the user's two eyes perceive the same contrast in the displayed third test image and fourth test image.
21. The apparatus of claim 19, wherein the amblyopia training image includes a target object, the target object being used for amblyopia training;
the first amblyopia training submodule is specifically configured to:
performing homography transformation on the amblyopia training image according to binocular deviation information, wherein the binocular deviation information is deviation information between the sight direction of the amblyopia eye and the sight direction of the dominant eye;
taking the amblyopia training image before homography transformation as a first training image, taking the amblyopia training image after homography transformation as a second training image, displaying the first training image in the display area corresponding to the dominant eye according to the image contrast corresponding to the dominant eye, and displaying the second training image in the display area corresponding to the amblyopia eye according to the image contrast corresponding to the amblyopia eye;
determining the gaze position of the dominant eye in the first training image and the gaze position of the amblyopic eye in the second training image in an eye movement tracking mode to obtain a first gaze position and a second gaze position;
and if the first gaze position coincides with the actual position of the target object in the first training image and the second gaze position coincides with the actual position of the target object in the second training image, reducing the homography transformation amount of the amblyopia training image and returning to the step of taking the amblyopia training image before homography transformation as the first training image and the amblyopia training image after homography transformation as the second training image, until the amblyopia training ends or until the second gaze position coincides with the actual position of the target object in the second training image when no homography transformation is applied to the amblyopia training image.
22. The apparatus of claim 19, wherein the amblyopia training image includes a target object, the target object being used for amblyopia training;
the second amblyopia training submodule is specifically configured to:
adjusting the position and/or size of the target object in the amblyopia training image for multiple times to obtain multiple training images;
determining an image scale of the amblyopic eye relative to the dominant eye;
scaling the sizes of the plurality of training images according to the image scaling;
the plurality of training images before scaling are used as a plurality of third training images, and the plurality of training images after scaling are used as a plurality of fourth training images; the plurality of third training images are sequentially displayed in the display area corresponding to the dominant eye according to the image contrast corresponding to the dominant eye, and the plurality of fourth training images are sequentially displayed in the display area corresponding to the amblyopia eye according to the image contrast corresponding to the amblyopia eye, wherein the display sequence and the switching frequency of the plurality of third training images and the plurality of fourth training images are the same.
23. The apparatus of claim 22, wherein the second amblyopia training submodule is further for:
displaying a fifth test image in a display area corresponding to the dominant eye, and displaying a sixth test image in a display area corresponding to the amblyopia eye, wherein the fifth test image comprises a first test target, the sixth test image comprises a second test target, and the first test target and the second test target have the same proportion;
scaling the displayed sixth test image;
when a scale determination instruction is detected, determining the ratio between the first test target and the scaled second test target to obtain the image scaling ratio, wherein the scale determination instruction is triggered by feedback from the user indicating that the user's two eyes see the first test target and the second test target at the same scale in the displayed fifth test image and sixth test image.
24. The apparatus of any of claims 21-23, wherein the apparatus further comprises:
the fourth determining module is used for determining the gaze position of the amblyopia eye in the image displayed in the display area corresponding to the amblyopia eye in an eye movement tracking mode to obtain the gaze position of the amblyopia eye;
and the training effect indicating module is used for modifying the display mode of the target object in the image displayed in the display area corresponding to the amblyopia eye into a reference display mode to indicate the amblyopia training effect if the amblyopia eye gaze position coincides with the actual position of the target object in the image displayed in the display area corresponding to the amblyopia eye, and/or drawing an amblyopia training curve according to the amblyopia eye gaze position and the actual position of the target object to indicate the amblyopia training effect.
25. An amblyopia training device, characterized in that the amblyopia training device comprises a memory and a processor, wherein the memory is used to store a computer program and the processor is used to execute the computer program to implement the steps of the method according to any one of claims 1 to 12.
26. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202011019488.4A 2020-09-24 2020-09-24 Amblyopia training method, device, equipment and storage medium Pending CN114255204A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011019488.4A CN114255204A (en) 2020-09-24 2020-09-24 Amblyopia training method, device, equipment and storage medium
PCT/CN2021/095234 WO2022062436A1 (en) 2020-09-24 2021-05-21 Amblyopia training method, apparatus and device, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011019488.4A CN114255204A (en) 2020-09-24 2020-09-24 Amblyopia training method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114255204A true CN114255204A (en) 2022-03-29

Family

ID=80790144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011019488.4A Pending CN114255204A (en) 2020-09-24 2020-09-24 Amblyopia training method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114255204A (en)
WO (1) WO2022062436A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116098794A (en) * 2022-12-30 2023-05-12 广州视景医疗软件有限公司 De-inhibition visual training method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101224151A (en) * 2007-01-19 2008-07-23 北京同仁验光配镜中心 Amblyopia vision increasing treatment method and system thereof
CN103876886A (en) * 2014-04-09 2014-06-25 合肥科飞视觉科技有限公司 Amblyopia treatment system
JP2020509790A (en) * 2016-09-23 2020-04-02 ノバサイト リミテッド Screening device and method
CN110856686A (en) * 2018-08-25 2020-03-03 广州联海信息科技有限公司 VR amblyopia patient training system
CN110123594A (en) * 2019-05-04 2019-08-16 吴登智 A kind of VR for amblyopia training and intelligent terminal synchronous display system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116458835A (en) * 2023-04-27 2023-07-21 上海中医药大学 Detection and prevention system for myopia and amblyopia of infants
CN116458835B (en) * 2023-04-27 2024-02-13 上海中医药大学 Detection and prevention system for myopia and amblyopia of infants
CN116807849A (en) * 2023-06-20 2023-09-29 广州视景医疗软件有限公司 Visual training method and device based on eye movement tracking

Also Published As

Publication number Publication date
WO2022062436A1 (en) 2022-03-31

Similar Documents

Publication Publication Date Title
US11797084B2 (en) Method and apparatus for training gaze tracking model, and method and apparatus for gaze tracking
US11782554B2 (en) Anti-mistouch method of curved screen and electronic device
US10365882B2 (en) Data processing method and electronic device thereof
WO2020192458A1 (en) Image processing method and head-mounted display device
WO2022062436A1 (en) Amblyopia training method, apparatus and device, storage medium, and program product
KR102558473B1 (en) Method for displaying an image and an electronic device thereof
KR20180074369A (en) Method and Apparatus for Managing Thumbnail of 3-dimensional Contents
CN110708533B (en) Visual assistance method based on augmented reality and intelligent wearable device
KR20160024168A (en) Method for controlling display in electronic device and the electronic device
US11838494B2 (en) Image processing method, VR device, terminal, display system, and non-transitory computer-readable storage medium
US11244496B2 (en) Information processing device and information processing method
KR20180071012A (en) Electronic apparatus and controlling method thereof
US11335090B2 (en) Electronic device and method for providing function by using corneal image in electronic device
EP4044000A1 (en) Display method, electronic device, and system
CN113741681A (en) Image correction method and electronic equipment
US11941804B2 (en) Wrinkle detection method and electronic device
KR20180046543A (en) Electronic device and method for acquiring omnidirectional image
WO2020044916A1 (en) Information processing device, information processing method, and program
CN110728744B (en) Volume rendering method and device and intelligent equipment
CN114895790A (en) Man-machine interaction method and device, electronic equipment and storage medium
US10969865B2 (en) Method for transmission of eye tracking information, head mounted display and computer device
CN115335754A (en) Geospatial image surface processing and selection
CN107872619B (en) Photographing processing method, device and equipment
KR20160008357A (en) Video call method and apparatus
CN112558847B (en) Method for controlling interface display and head-mounted display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination