CN112263450A - Vision training method and device based on near-to-eye information - Google Patents

Vision training method and device based on near-to-eye information

Info

Publication number
CN112263450A
CN112263450A (application CN202011097696.6A)
Authority
CN
China
Prior art keywords
training
visual
image
wearer
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011097696.6A
Other languages
Chinese (zh)
Inventor
梁文隆
李传勍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yishi Haotong Information Technology Co ltd
Original Assignee
Shanghai Yishi Haotong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yishi Haotong Information Technology Co ltd filed Critical Shanghai Yishi Haotong Information Technology Co ltd
Priority to CN202011097696.6A priority Critical patent/CN112263450A/en
Publication of CN112263450A publication Critical patent/CN112263450A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 5/00 Exercisers for the eyes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 2201/00 Characteristics of apparatus not provided for in the preceding codes
    • A61H 2201/50 Control means thereof
    • A61H 2201/5007 Control means thereof computer controlled
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 2230/00 Measuring physical parameters of the user

Landscapes

  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The application relates to the field of vision training, in particular to a vision training method and device based on near-to-eye information. The method comprises: obtaining a plurality of visual identification images and a plurality of training images, wherein the visual identification images are directional patterns pointing in different directions, the training images are dynamic images in which a visual identification image is the moving body, and each training image corresponds to a visual identification pattern; randomly calling a visual identification image and projecting it to an initial viewing distance; acquiring the wearer's feedback on the visual identification image and selecting a training viewing distance adapted to the wearer's vision condition; matching a training image based on the training viewing distance and projecting it onto a training plane whose distance from the wearer's eyes equals the training viewing distance; acquiring the wearer's feedback on the visual identification image in the training image and judging whether the training is successful; and, when the training is judged successful, replacing the training image and the training viewing distance. The application has the effect of reducing the wearer's degree of myopia.

Description

Vision training method and device based on near-to-eye information
Technical Field
The application relates to the field of vision training, in particular to a vision training method and device based on near-to-eye information.
Background
In recent years, the incidence of myopia in China has risen markedly, and myopia has become a major public health problem affecting eye health nationwide, particularly among teenagers. It is regarded as one of the three most common ophthalmic diseases in the world; the incidence of myopia among the Chinese population is about 33 percent, roughly 1.5 times the world average (about 22 percent of the total population). In theory, vision can be recovered by viewing distant objects for long periods so that the ciliary muscles stretch and flatten the crystalline lens.
Two methods are mainly used to correct vision: wearing glasses and laser surgery. Laser surgery can radically cure myopia, but it carries risk and the myopia may relapse; ordinary myopia glasses do not address the underlying problem, and with prolonged wear the degree of myopia may deepen. In addition, various auxiliary myopia treatment devices exist on the market, but most are inconvenient to use or have no obvious effect.
At present, head-mounted devices based on near-eye information are popular worldwide. Virtual Reality (VR) technology is developing rapidly and permeating many industries; it is already widely applied in fields such as city planning, interior design, industrial simulation, historic-site restoration, bridge and road design, real estate sales, tourism and teaching, water conservancy and power, and geological disaster response. How to use virtual-reality technology in a head-mounted device to correct vision and reduce the wearer's degree of myopia has become a problem that those skilled in the art urgently need to solve.
Disclosure of Invention
In order to reduce the myopia degree of a wearer, the application provides a vision training method based on near-eye information and a device thereof.
In a first aspect, the present application provides a vision training method based on near-eye information, which adopts the following technical scheme:
a near-eye information based vision training method, comprising:
acquiring a plurality of visual identification images and a plurality of training images, wherein the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as moving bodies, and the training images correspond to the visual identification patterns;
randomly calling a visual identification image to project to an initial sight distance;
acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adapted to the visual condition of the wearer;
matching a training image based on the training visual range and projecting the training image to a training plane which is away from the eyes of the wearer by the training visual range;
acquiring feedback information of a wearer on the visual identification image in the training image and judging whether the training is successful or not;
when the training is judged to be successful, the training image and the training visual range are replaced.
By adopting the above technical scheme, a plurality of visual identification images pointing in different directions are projected to different viewing distances, and a training viewing distance adapted to the wearer's vision condition is obtained based on the wearer's feedback on the visual identification images. The training image is then projected onto a training plane whose distance from the wearer's eyes equals the training viewing distance, and the wearer receives vision training through the training image. Whether the training is successful is judged by obtaining the wearer's feedback on the visual identification image within the training image; when the training is judged successful, the training image and the training viewing distance are replaced and new vision training is carried out, thereby gradually reducing the wearer's degree of myopia.
Further, the acquiring feedback information of the wearer on the visual identification image and selecting the training visual range adapted to the visual condition of the wearer comprises:
acquiring feedback information of the wearer on the visual identification image and judging whether to adjust the sight distance;
when the visual distance is judged to be adjusted, replacing another visual identification image and projecting the image to the adjusted visual distance, and re-acquiring feedback information of the wearer to the other visual identification image and judging whether to adjust the visual distance;
and when the sight distance is judged not to be adjusted, setting the current sight distance as the training sight distance.
By adopting this technical scheme, the wearer's feedback on the visual identification image is received and judged; when it is judged that the viewing distance need not be adjusted, the current viewing distance is set as the training viewing distance, thereby realizing selection of the training viewing distance.
Further, the initial viewing distance is the farthest viewing distance, the initial viewing distance corresponds to the best vision, and the adjusted viewing distance is smaller than the viewing distance before adjustment.
By adopting this technical scheme, the viewing distance starts far and is gradually brought closer until the wearer can clearly see the visual identification image; once the wearer's feedback on the visual identification image is acquired, the current viewing distance is set as the training viewing distance, thereby realizing selection of the training viewing distance.
Further, the obtaining feedback information of the wearer on the visual identification image and determining whether to adjust the viewing distance includes:
acquiring pupil position information of a wearer in real time;
when the deviation direction of the pupil position of the wearer is consistent with the direction of the current visual identification image and the deviation duration time of the pupil position is not less than the first time, judging that the sight distance is not adjusted;
and when the deviation direction of the pupil position of the wearer is inconsistent with the direction of the visual identification image and is not less than the second time, determining to adjust the visual range, wherein the second time is greater than the first time.
By adopting the technical scheme, whether the sight distance needs to be adjusted is judged based on the offset direction and the offset time of the pupil position of the wearer.
Further, the matching the training image based on the training visual distance and projecting the training image to the training plane at a distance equal to the training visual distance from the human eye of the wearer comprises:
and projecting the training image corresponding to the visual identification image projected to the training visual distance to a training plane which is away from the eyes of the wearer by the distance equal to the training visual distance.
By adopting the technical scheme, after the training visual range is determined, the training image corresponding to the visual identification image projected to the training visual range is projected to the training plane with the distance from the eyes of the wearer equal to the training visual range, and the visual training of the wearer can be carried out.
Further, the obtaining feedback information of the wearer on the visual identification image in the training image and determining whether the training is successful includes:
acquiring pupil position information of a wearer in real time;
when the deviation direction of the pupil position of the wearer is consistent with the direction of the visual identification image in the training image and the deviation duration time of the pupil position is not less than the first time, judging that the training is successful;
and when the deviation direction of the pupil position of the wearer is inconsistent with the direction of the visual identification image and is not less than a second time, judging that the training is not successful, wherein the second time is greater than the first time.
By adopting the technical scheme, when the wearer is trained through the training image, the pupil position information of the wearer is acquired in real time, and whether the training of the wearer is successful is judged based on the deviation direction and the deviation time of the pupil position of the wearer.
Further, the replacing the training image and the training visual range includes:
setting a next sight distance adjacent to the current training sight distance as a new training sight distance, the new training sight distance being greater than the current training sight distance;
and projecting the training image corresponding to the visual identification image projected to the new training visual range to a training plane which is away from the eyes of the wearer by the distance equal to the new training visual range.
By adopting this technical scheme, once the wearer's training is judged successful, it indicates that the wearer can see the visual identification image at the current training viewing distance; a new training image is then projected at a new training viewing distance larger than the current one, so that vision training can be carried further.
Furthermore, the method also comprises the steps of acquiring the iris information of the wearer and storing training result information corresponding to the iris information for subsequent reading.
By adopting the technical scheme, the training result information can be directly read when the wearer wears the wearable device again after the training is stopped.
In a second aspect, the present application provides a vision training device based on near-eye information, which adopts the following technical scheme:
a near-eye information based vision training device comprising:
the device comprises a preprocessing module, a directional module and a training module, wherein the preprocessing module is used for acquiring a plurality of visual identification images and a plurality of training images, the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as a moving body, and the training images correspond to the visual identification patterns;
the image projection module is used for randomly calling a visual identification image to project to the initial sight distance;
the visual range selection module is used for acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adaptive to the vision condition of the wearer;
the image projection module is used for matching a training image based on the training visual distance and projecting the training image to a training plane with the distance from the human eyes of the wearer equal to the training visual distance;
the judging module is used for acquiring feedback information of the wearer on the visual identification image in the training image and judging whether the training is successful or not;
and the image replacing module is used for replacing the training image and the training sight distance when the training is judged to be successful.
By adopting the above technical scheme, a plurality of visual identification images pointing in different directions are projected to different viewing distances, and a training viewing distance adapted to the wearer's vision condition is obtained based on the wearer's feedback on the visual identification images. The training image is then projected onto a training plane whose distance from the wearer's eyes equals the training viewing distance, and the wearer receives vision training through the training image. Whether the training is successful is judged by obtaining the wearer's feedback on the visual identification image within the training image; when the training is judged successful, the training image and the training viewing distance are replaced and new vision training is carried out, thereby gradually reducing the wearer's degree of myopia.
In a third aspect, the present application provides an electronic device, which adopts the following technical solutions:
an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the near-eye information based vision training method of any one of claims 1-8 when executing the computer program.
In summary, the present application includes at least one of the following beneficial technical effects:
the visual identification images with different directions are projected to different visual distances, training visual distances suitable for the visual condition of a wearer are obtained based on feedback information of the wearer to the visual identification images, then the training images are projected to a training plane with the distance equal to the training visual distances from the eyes of the wearer, the wearer is subjected to visual training through the training images, whether the training is successful or not is judged by obtaining the feedback information of the wearer to the visual identification images in the training images, and when the training is judged to be successful, the training images and the training visual distances are replaced, new visual training is carried out, and therefore the myopia degree of the wearer is gradually reduced;
after the training is stopped, the training result information can be read directly when the wearer puts the wearable device on again.
Drawings
Fig. 1 is a flowchart of a near-eye information-based vision training method in an embodiment of the present application.
Fig. 2 is a motion trace diagram of a visual identification image in a training image in an embodiment of the present application.
Fig. 3 is a flowchart of a vision training method based on near-eye information according to another embodiment of the present application.
Fig. 4 is a flowchart of a vision training method based on near-eye information according to another embodiment of the present application.
Fig. 5 is a flowchart of a vision training method based on near-eye information according to another embodiment of the present application.
Fig. 6 is a schematic block diagram of a vision training apparatus based on near-eye information in an embodiment of the present application.
Fig. 7 is a schematic block diagram of an electronic device in an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to figures 1-7.
The embodiment of the application discloses an eyesight training method based on near-to-eye information, which can reduce the myopia degree of a wearer and is suitable for head-mounted equipment.
Specifically, referring to fig. 1, the method includes the following steps:
s1, obtaining a plurality of visual identification images and a plurality of training images, wherein the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as moving bodies, and the training images correspond to the visual identification patterns.
In this embodiment, the visual identification image may adopt a commonly used directional vision-test pattern such as the "E" shape, or another directional pattern such as an arrow ("→") may be used as the vision-test image. Specifically, referring to fig. 2, the eight visual identification images point in different directions, respectively "up", "down", "left", "right", "upper left", "upper right", "lower left" and "lower right", adjacent directions differing by 45 degrees. The training images are dynamic images in which the visual identification image is the moving body; in this embodiment, the visual identification image of a training image traverses the 3×3 ("nine-square") grid in the numerical order 1 to 9 shown in the figure and then returns to position 1, so that vision training covers the visual field comprehensively. The visual identification images and training images may be imported into the system in advance for subsequent recall.
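By way of non-limiting illustration, the following Python sketch shows how such a 3×3-grid trajectory could be generated; the grid indexing 1 to 9 follows fig. 2, while the function names, frame counts and plane dimensions are assumptions made only for this sketch.

```python
# Hypothetical sketch: generate the 3x3-grid ("nine-square") trajectory that a
# directional visual identification image traverses inside a training image.
# Cell numbering 1..9 and the return to cell 1 follow fig. 2; the cell size and
# the number of interpolation steps are illustrative assumptions.

def nine_grid_centers(width, height):
    """Return the pixel centers of a 3x3 grid over a width x height plane,
    ordered 1..9 row by row."""
    cell_w, cell_h = width / 3, height / 3
    centers = []
    for row in range(3):
        for col in range(3):
            centers.append((col * cell_w + cell_w / 2, row * cell_h + cell_h / 2))
    return centers

def training_trajectory(width, height, steps_per_cell=30):
    """Yield interpolated (x, y) positions that visit cells 1..9 in order and
    then return to cell 1, so every region of the visual field is exercised."""
    centers = nine_grid_centers(width, height)
    path = centers + [centers[0]]          # traverse 1..9, then back to 1
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        for t in range(steps_per_cell):
            a = t / steps_per_cell
            yield (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    yield path[-1]                         # finish exactly on cell 1
```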
And S2, randomly calling a visual identification image to project to the initial sight distance.
The initial viewing distance may be a farthest viewing distance, the initial viewing distance corresponding to best vision.
For example, the farthest viewing distance is 3 m; when the wearer can distinguish the direction of a visual identification image projected at the 3 m viewing distance, this indicates that the wearer's vision is the best vision, corresponding to a visual acuity value of 5.3. The interval between adjacent viewing distances is 0.1 m, and the specific interval value can be set freely.
And S3, acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adapted to the vision condition of the wearer. The method specifically comprises the following steps:
acquiring feedback information of the wearer on the visual identification image and judging whether to adjust the sight distance;
acquiring pupil position information of a wearer in real time;
specifically, the method can be completed based on opencv, firstly, a gray integration algorithm is used for roughly finding out the approximate position of a human eye part on a human face, then, a Harris corner detection algorithm is used for positioning the corner of the eye, the middle position of the eye socket of the human eye is further obtained, then, Hough transformation is used for positioning the accurate position of the center of the pupil of the human eye, the positioning of the pupil can also be realized by using a self-contained classifier (namely, haarcascade _ eye _ tree _ eye glasses, xml) in opencv, and finally, the offset direction of the pupil position of a wearer is obtained based on the middle position of the eye socket of the human eye and the accurate position of the center of a through hole of the human eye; here, only the method of always acquiring the offset direction of the pupil position is shown, and those skilled in the art can freely design an algorithm to realize the acquisition of the offset direction of the pupil position.
Judging that the sight distance is not adjusted when the deviation direction of the pupil position of the wearer is consistent with the direction of the current visual identification image and the deviation duration time of the pupil position is not less than the first time;
and thirdly, when the deviation direction of the pupil position of the wearer is inconsistent with the direction of the visual identification image and is not less than the second time, the visual distance is judged to be adjusted, and the second time is greater than the first time.
In this embodiment, the first time may be set to 2 s. That is, when the training viewing distance is being determined through the visual identification image, the wearer gives feedback by shifting the pupil along the pointing direction of the visual identification image for 2 s; this shows that the wearer can recognize the pointing direction of the current visual identification image, and it is then determined that the viewing distance need not be adjusted. The permitted offset direction of the pupil may deviate from the actual pointing direction of the identification image, by about ±20° in this embodiment. The second time may be set to 3 s: when the offset direction of the wearer's pupil does not coincide with the direction of the visual identification image for 3 s, indicating that the wearer cannot recognize the direction of the current visual identification image, it is determined that the viewing distance should be adjusted. A sketch of this direction-and-dwell judgment is given after the sub-steps of step S5 below, where the same criterion is reused to judge training success.
When the visual distance is judged to be adjusted, replacing another visual identification image and projecting the image to the adjusted visual distance, and re-acquiring feedback information of the wearer to the other visual identification image and judging whether to adjust the visual distance;
and when the sight distance is judged not to be adjusted, setting the current sight distance as the training sight distance.
The adjusted sight distance is smaller than the sight distance before adjustment.
Specifically, when it is judged that the viewing distance should be adjusted, another visual identification image is called and projected to the adjusted viewing distance. For example, at a viewing distance of 2.5 m, if the wearer's feedback on the visual identification image is recognized, namely the pupil is continuously shifted for 2 s along the pointing direction of the visual identification image, the current viewing distance of 2.5 m is set as the training viewing distance; if it is recognized that the offset direction of the wearer's pupil is inconsistent with the direction of the visual identification image for 3 s, the current viewing distance is changed to 2.4 m and another visual identification image is called and projected at the 2.4 m viewing distance.
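As referenced in the OpenCV discussion above, the following Python sketch illustrates one possible pupil-offset pipeline using OpenCV's bundled haarcascade_eye_tree_eyeglasses.xml classifier and a Hough circle fit; the thresholds and Hough parameters are illustrative assumptions rather than values taken from this application.

```python
# Hedged sketch of the pupil-localization pipeline described in step S3:
# a coarse eye region from the Haar cascade, a Hough circle for the pupil
# center, and the offset of that center relative to the eye-socket center.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")

def pupil_offset(frame_bgr):
    """Return the pupil-center offset (dx, dy) relative to the eye-socket
    center for the first detected eye, or None if nothing is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return None
    x, y, w, h = eyes[0]                       # coarse eye region
    socket_center = (x + w / 2, y + h / 2)     # approximate orbit center
    roi = cv2.medianBlur(gray[y:y + h, x:x + w], 5)
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=w,
                               param1=100, param2=15,
                               minRadius=w // 10, maxRadius=w // 3)
    if circles is None:
        return None
    cx, cy, _ = circles[0][0]                  # pupil center inside the ROI
    return (x + cx - socket_center[0], y + cy - socket_center[1])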
And S4, matching the training image based on the training visual distance and projecting the training image to a training plane which is away from the human eyes of the wearer by the training visual distance.
Specifically, the training image corresponding to the visual identification image that was projected to the training viewing distance is projected onto a training plane whose distance from the wearer's eyes equals the training viewing distance. In this embodiment, the training plane may be a virtual plane: an air imaging technology can be used to project the training image into the air, or a micro-mirror array with a complex structure or a scattering medium (a water curtain, smoke) can be used to achieve an air-suspended display. Since a visual identification image was already projected at the training viewing distance in step S3, the training image corresponding to that visual identification image is projected to the training viewing distance.
And S5, acquiring feedback information of the wearer to the visual identification image in the training image and judging whether the training is successful.
Specifically, the method comprises the following steps:
(1) acquiring pupil position information of a wearer in real time;
(2) when the deviation direction of the pupil position of the wearer is consistent with the direction of the visual identification image in the training image and the deviation duration time of the pupil position is not less than the first time, judging that the training is successful;
(3) and when the deviation direction of the pupil position of the wearer is inconsistent with the direction of the visual identification image and is not less than a second time, judging that the training is not successful, wherein the second time is greater than the first time.
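The direction-and-dwell criterion used both for selecting the training viewing distance (step S3) and for judging training success here (step S5) can be sketched as follows; the ±20° tolerance and the 2 s / 3 s dwell times follow the example above, while the sampling interface and function names are assumptions made for this sketch.

```python
# Illustrative sketch: the pupil offset must track the pointing direction of
# the current identification image (within ~±20°) for at least first_time
# seconds to count as recognized / trained successfully; a mismatch lasting
# second_time seconds means the image was not recognized.
import math

DIRECTIONS = {"right": 0, "up_right": 45, "up": 90, "up_left": 135,
              "left": 180, "down_left": 225, "down": 270, "down_right": 315}

def pupil_matches_pointing(dx, dy, pointing, tolerance_deg=20):
    """True if the pupil offset (dx, dy) points roughly in the same direction
    as the identification image (image y-axis grows downward)."""
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    diff = abs((angle - DIRECTIONS[pointing] + 180) % 360 - 180)
    return diff <= tolerance_deg

def judge_feedback(samples, pointing, first_time=2.0, second_time=3.0):
    """samples is an iterable of (timestamp_s, dx, dy) pupil offsets.
    Returns 'recognized' once the offset has tracked the pointing direction
    for first_time seconds, 'not_recognized' after second_time seconds of
    mismatch, or None if neither dwell threshold was reached."""
    match_start = None
    mismatch_start = None
    for ts, dx, dy in samples:
        if pupil_matches_pointing(dx, dy, pointing):
            if match_start is None:
                match_start = ts
            mismatch_start = None
            if ts - match_start >= first_time:
                return "recognized"       # keep viewing distance / training succeeds
        else:
            if mismatch_start is None:
                mismatch_start = ts
            match_start = None
            if ts - mismatch_start >= second_time:
                return "not_recognized"   # reduce viewing distance / repeat round
    return None
```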
And S6, when the training is judged to be successful, replacing the training image and the training visual range.
Specifically, the method comprises the following steps:
(1) setting a next sight distance adjacent to the current training sight distance as a new training sight distance, the new training sight distance being greater than the current training sight distance;
(2) and projecting the training image corresponding to the visual identification image projected to the new training visual range to a training plane which is away from the eyes of the wearer by the distance equal to the new training visual range.
For example, at a training viewing distance of 2.5 m, if the wearer's feedback on the visual identification image in the training image is recognized, namely the pupil is continuously shifted for 2 s along the pointing direction of the visual identification image in the training image, the training is judged successful and the training viewing distance is adjusted to 2.6 m; since a visual identification image was already projected at 2.6 m in step S3, the training image corresponding to that visual identification image is projected to the new training viewing distance. If the offset direction of the wearer's pupil is recognized to be inconsistent with the direction of the visual identification image in the training image for 3 s, the training is judged unsuccessful, and vision training continues with the training image at the 2.5 m training viewing distance.
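A minimal sketch of how the viewing distance moves in the two phases of this example follows: downward in 0.1 m steps while the training viewing distance is being selected (step S3), and upward to the adjacent larger distance after a successful training round (step S6). The nearest and farthest bounds are assumptions added only to keep the sketch self-contained.

```python
def adjust_viewing_distance(current_m, recognized, phase, step_m=0.1,
                            farthest_m=3.0, nearest_m=0.5):
    """Sketch of the viewing-distance update in this example.
    phase='selection' (step S3): an unrecognized image lowers the distance,
    e.g. 2.5 m -> 2.4 m, while a recognized image fixes it as the training
    viewing distance. phase='training' (step S6): a successful round raises
    the training viewing distance to the adjacent larger value,
    e.g. 2.5 m -> 2.6 m, and an unsuccessful round repeats it."""
    if phase == "selection":
        if recognized:
            return current_m                           # becomes the training distance
        return max(nearest_m, round(current_m - step_m, 1))
    if recognized:                                     # training succeeded
        return min(farthest_m, round(current_m + step_m, 1))
    return current_m                                   # repeat the same round
```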
And S7, acquiring the iris information of the wearer, and storing the training result information corresponding to the iris information for subsequent reading.
Specifically, when the wearer puts on the training device, the wearer's iris information is recorded together with the wearer's training result information; the training result information comprises the current training viewing distance, the current training image, the viewing distances at which visual identification images have been projected, and the training images matched with those viewing distances.
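The following sketch illustrates one way the training result record could be persisted keyed by an identifier derived from the wearer's iris information, so that it can be reloaded when the wearer puts the device on again; the hashing scheme and JSON layout are assumptions made for illustration and are not specified by this application.

```python
# Hedged sketch of step S7: persist each wearer's training-result record under
# a key derived from the iris image so it can be read back on the next wearing.
import hashlib
import json
from pathlib import Path

RESULTS_FILE = Path("training_results.json")   # illustrative storage location

def iris_key(iris_image_bytes: bytes) -> str:
    """Derive a stable lookup key from raw iris-image bytes (assumption)."""
    return hashlib.sha256(iris_image_bytes).hexdigest()

def save_result(iris_image_bytes, training_distance_m, training_image_id):
    records = json.loads(RESULTS_FILE.read_text()) if RESULTS_FILE.exists() else {}
    records[iris_key(iris_image_bytes)] = {
        "training_distance_m": training_distance_m,
        "training_image": training_image_id,
    }
    RESULTS_FILE.write_text(json.dumps(records, indent=2))

def load_result(iris_image_bytes):
    """Return the stored record for this wearer, or None if none exists."""
    if not RESULTS_FILE.exists():
        return None
    return json.loads(RESULTS_FILE.read_text()).get(iris_key(iris_image_bytes))
```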
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
This embodiment also provides a vision training device based on near-eye information. Referring to fig. 6, the vision training device based on near-eye information comprises a preprocessing module, an image projection module, a visual range selection module, an image projection module, a judging module and an image replacing module. The functional modules are explained in detail as follows:
the device comprises a preprocessing module, a directional module and a training module, wherein the preprocessing module is used for acquiring a plurality of visual identification images and a plurality of training images, the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as a moving body, and the training images correspond to the visual identification patterns;
the image projection module is used for randomly calling a visual identification image to project to the initial sight distance;
the visual range selection module is used for acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adaptive to the vision condition of the wearer;
the image projection module is used for matching a training image based on the training visual distance and projecting the training image to a training plane with the distance from the human eyes of the wearer equal to the training visual distance;
the judging module is used for acquiring feedback information of the wearer on the visual identification image in the training image and judging whether the training is successful or not;
and the image replacing module is used for replacing the training image and the training sight distance when the training is judged to be successful.
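For orientation only, the module decomposition above can be pictured as the following Python sketch, in which each module is passed in as a callable; the wiring, names and signatures are illustrative assumptions, not the literal implementation of the device.

```python
class VisionTrainingDevice:
    """Illustrative wiring of the modules listed above. Each attribute stands
    in for the corresponding module; this is a sketch, not the device itself."""

    def __init__(self, preprocess, project, select_distance, judge, replace):
        self.preprocess = preprocess            # preprocessing module
        self.project = project                  # image projection module(s)
        self.select_distance = select_distance  # visual range selection module
        self.judge = judge                      # judging module
        self.replace = replace                  # image replacing module

    def prepare(self):
        """Steps S1-S3: load images and select the training viewing distance."""
        identification_images, training_images = self.preprocess()
        distance = self.select_distance(identification_images, self.project)
        return training_images, distance

    def training_round(self, training_image, distance):
        """Steps S4-S6: project, judge the wearer's feedback, advance on success."""
        self.project(training_image, distance)
        if self.judge(training_image):                      # training succeeded
            return self.replace(training_image, distance)   # new image, new distance
        return training_image, distance                     # repeat this round
```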
For specific limitations of the near-eye information based vision training device, reference may be made to the above limitations of the near-eye information based vision training method, which are not repeated here. The modules in the near-eye information based vision training device can be implemented wholly or partially in software, in hardware, or in a combination of the two. The modules can be embedded in hardware within, or independent of, the processor of the electronic device, or stored in software form in the memory of the electronic device, so that the processor can call them and execute the operations corresponding to the modules.
This embodiment also provides an electronic device, whose internal structure may be as shown in fig. 7. The electronic device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the electronic device is used to connect and communicate with an external terminal through a network. When executed by the processor, the computer program implements the following near-eye information based vision training method:
acquiring a plurality of visual identification images and a plurality of training images, wherein the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as moving bodies, and the training images correspond to the visual identification patterns;
randomly calling a visual identification image to project to an initial sight distance;
acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adapted to the visual condition of the wearer;
matching a training image based on the training visual range and projecting the training image to a training plane which is away from the eyes of the wearer by the training visual range;
acquiring feedback information of a wearer on the visual identification image in the training image and judging whether the training is successful or not;
when the training is judged to be successful, the training image and the training visual range are replaced.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for vision training based on near-to-eye information, comprising:
acquiring a plurality of visual identification images and a plurality of training images, wherein the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as moving bodies, and the training images correspond to the visual identification patterns;
randomly calling a visual identification image to project to an initial sight distance;
acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adapted to the visual condition of the wearer;
matching a training image based on the training visual range and projecting the training image to a training plane which is away from the eyes of the wearer by the training visual range;
acquiring feedback information of a wearer on the visual identification image in the training image and judging whether the training is successful or not;
when the training is judged to be successful, the training image and the training visual range are replaced.
2. The near-eye information-based vision training method of claim 1, wherein the obtaining of feedback information of the wearer on the visual identification image and the selection of the training visual distance adapted to the vision condition of the wearer comprises:
acquiring feedback information of the wearer on the visual identification image and judging whether to adjust the sight distance;
when the visual distance is judged to be adjusted, replacing another visual identification image and projecting the image to the adjusted visual distance, and re-acquiring feedback information of the wearer to the other visual identification image and judging whether to adjust the visual distance;
and when the sight distance is judged not to be adjusted, setting the current sight distance as the training sight distance.
3. The method of claim 2, wherein the initial viewing distance is a farthest viewing distance, the initial viewing distance corresponds to a best vision, and the adjusted viewing distance is smaller than the viewing distance before the adjustment.
4. The near-eye information-based vision training method of claim 2, wherein the obtaining feedback information of the wearer on the visual identification image and the judging whether to adjust the visual distance comprises:
acquiring pupil position information of a wearer in real time;
when the deviation direction of the pupil position of the wearer is consistent with the direction of the current visual identification image and the deviation duration time of the pupil position is not less than the first time, judging that the sight distance is not adjusted;
and when the deviation direction of the pupil position of the wearer is inconsistent with the direction of the visual identification image and is not less than the second time, determining to adjust the visual range, wherein the second time is greater than the first time.
5. The method of claim 2, wherein the step of matching the training image based on the training visual distance and projecting the training image to the training plane at a distance equal to the training visual distance from the eye of the wearer comprises:
and projecting the training image corresponding to the visual identification image projected to the training visual distance to a training plane which is away from the eyes of the wearer by the distance equal to the training visual distance.
6. The near-eye information-based vision training method of claim 1, wherein the obtaining feedback information of the wearer on the visual identification image in the training image and determining whether the training is successful comprises:
acquiring pupil position information of a wearer in real time;
when the deviation direction of the pupil position of the wearer is consistent with the direction of the visual identification image in the training image and the deviation duration time of the pupil position is not less than the first time, judging that the training is successful;
and when the deviation direction of the pupil position of the wearer is inconsistent with the direction of the visual identification image and is not less than a second time, judging that the training is not successful, wherein the second time is greater than the first time.
7. The method of claim 1, wherein the replacing the training image and the training visual distance comprises:
setting a next sight distance adjacent to the current training sight distance as a new training sight distance, the new training sight distance being greater than the current training sight distance;
and projecting the training image corresponding to the visual identification image projected to the new training visual range to a training plane which is away from the eyes of the wearer by the distance equal to the new training visual range.
8. The near-eye information-based vision training method of claim 1, further comprising obtaining iris information of the wearer and storing training result information corresponding to the iris information for subsequent reading.
9. A vision training device based on near-to-eye information, comprising:
the device comprises a preprocessing module, a directional module and a training module, wherein the preprocessing module is used for acquiring a plurality of visual identification images and a plurality of training images, the visual identification images are directional patterns, the directions of the visual identification images are inconsistent, the training images are dynamic images taking the visual identification images as a moving body, and the training images correspond to the visual identification patterns;
the image projection module is used for randomly calling a visual identification image to project to the initial sight distance;
the visual range selection module is used for acquiring feedback information of the wearer on the visual identification image and selecting a training visual range adaptive to the vision condition of the wearer;
the image projection module is used for matching a training image based on the training visual distance and projecting the training image to a training plane with the distance from the human eyes of the wearer equal to the training visual distance;
the judging module is used for acquiring feedback information of the wearer on the visual identification image in the training image and judging whether the training is successful or not;
and the image replacing module is used for replacing the training image and the training sight distance when the training is judged to be successful.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the near-eye information based vision training method of any one of claims 1-8 when executing the computer program.
CN202011097696.6A 2020-10-14 2020-10-14 Vision training method and device based on near-to-eye information Pending CN112263450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011097696.6A CN112263450A (en) 2020-10-14 2020-10-14 Vision training method and device based on near-to-eye information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011097696.6A CN112263450A (en) 2020-10-14 2020-10-14 Vision training method and device based on near-to-eye information

Publications (1)

Publication Number Publication Date
CN112263450A true CN112263450A (en) 2021-01-26

Family

ID=74338010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011097696.6A Pending CN112263450A (en) 2020-10-14 2020-10-14 Vision training method and device based on near-to-eye information

Country Status (1)

Country Link
CN (1) CN112263450A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115969677A (en) * 2022-12-26 2023-04-18 广州视景医疗软件有限公司 Eyeball movement training method and device
CN116898704A (en) * 2023-07-13 2023-10-20 广州视景医疗软件有限公司 VR-based visual target adjusting method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08257077A (en) * 1995-03-24 1996-10-08 Minolta Co Ltd Sight recovery training device
US20090040459A1 (en) * 2003-12-25 2009-02-12 Minghua Dai Device for Preventing and Treating Myopia
JP2011212430A (en) * 2010-03-16 2011-10-27 Univ Of Tokyo System for developemental test of visual perception, training system, and support system
WO2014205515A1 (en) * 2013-06-25 2014-12-31 Amblyoptica (Holding) Pty Ltd Method and apparatus for visual training
CN107307981A (en) * 2017-06-21 2017-11-03 常州快来信息科技有限公司 Visual training method based on eye movement
CN107913162A (en) * 2016-10-10 2018-04-17 简顺源 Vision training apparatus and visual training method
CN108143596A (en) * 2016-12-05 2018-06-12 遵义市劲林视力保健咨询有限公司 A kind of wear-type vision training instrument, system and training method
CN109106567A (en) * 2018-08-29 2019-01-01 中国科学院长春光学精密机械与物理研究所 A kind of more sighting distance display systems for myopia
CN109803623A (en) * 2016-09-30 2019-05-24 埃登卢克斯公司 The vision training apparatus of training eyeball muscle based on the optical properties of user

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08257077A (en) * 1995-03-24 1996-10-08 Minolta Co Ltd Sight recovery training device
US20090040459A1 (en) * 2003-12-25 2009-02-12 Minghua Dai Device for Preventing and Treating Myopia
JP2011212430A (en) * 2010-03-16 2011-10-27 Univ Of Tokyo System for developemental test of visual perception, training system, and support system
WO2014205515A1 (en) * 2013-06-25 2014-12-31 Amblyoptica (Holding) Pty Ltd Method and apparatus for visual training
CN109803623A (en) * 2016-09-30 2019-05-24 埃登卢克斯公司 The vision training apparatus of training eyeball muscle based on the optical properties of user
CN107913162A (en) * 2016-10-10 2018-04-17 简顺源 Vision training apparatus and visual training method
CN108143596A (en) * 2016-12-05 2018-06-12 遵义市劲林视力保健咨询有限公司 A kind of wear-type vision training instrument, system and training method
CN107307981A (en) * 2017-06-21 2017-11-03 常州快来信息科技有限公司 Visual training method based on eye movement
CN109106567A (en) * 2018-08-29 2019-01-01 中国科学院长春光学精密机械与物理研究所 A kind of more sighting distance display systems for myopia

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115969677A (en) * 2022-12-26 2023-04-18 广州视景医疗软件有限公司 Eyeball movement training method and device
CN115969677B (en) * 2022-12-26 2023-12-08 广州视景医疗软件有限公司 Eyeball movement training device
CN116898704A (en) * 2023-07-13 2023-10-20 广州视景医疗软件有限公司 VR-based visual target adjusting method and device
CN116898704B (en) * 2023-07-13 2023-12-26 广州视景医疗软件有限公司 VR-based visual target adjusting method and device

Similar Documents

Publication Publication Date Title
CN112263450A (en) Vision training method and device based on near-to-eye information
US9939644B2 (en) Technologies for controlling vision correction of a wearable computing device
EP3750028B1 (en) Devices, systems and methods for predicting gaze-related parameters
US11333906B2 (en) Determination of at least one optical parameter of a spectacle lens
US9274351B2 (en) Method for optimizing the postural prism of an ophthalmic lens
CA3001762C (en) Method for determining a three dimensional performance of an ophthalmic lens; associated method of calculating an ophthalmic lens.
CN111511318A (en) Digital treatment correcting glasses
EP3749172B1 (en) Devices, systems and methods for predicting gaze-related parameters
US10942368B2 (en) Method for determining an optical system of a progressive lens
US11402661B2 (en) Method for determining an ophthalmic lens adapted to a locomotion parameter
CN110770636B (en) Wearable image processing and control system with vision defect correction, vision enhancement and perception capabilities
CN112601509A (en) Hybrid see-through augmented reality system and method for low-vision users
CN109063539A (en) The virtual usual method of glasses, device, computer equipment and storage medium
CN110619303A (en) Method, device and terminal for tracking point of regard and computer readable storage medium
US20220198789A1 (en) Systems and methods for determining one or more parameters of a user's eye
CN111839455A (en) Eye sign identification method and equipment for thyroid-associated ophthalmopathy
CN114495221A (en) Method for positioning key points of face with mask
CN110575374B (en) Intraocular optical focus adjusting method, system and storage medium
WO2021122826A1 (en) Method for determining a value of at least one geometrico-morphological parameter of a subject wearing an eyewear
EP3462231B1 (en) A method and means for evaluating toric contact lens rotational stability
CN113950639A (en) Free head area of an optical lens
CN115437148A (en) Transparent insert identification
CN110119674B (en) Method, device, computing equipment and computer storage medium for detecting cheating
CN105190411A (en) Device and method for producing spectacle lenses for astigmatism
US11224338B2 (en) Method and system for measuring refraction, method for the optical design of an ophthalmic lens, and pair of glasses comprising such an ophthalmic lens

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210126