CN114931492A - Intelligent vision training system and method


Info

Publication number: CN114931492A
Authority: CN (China)
Prior art keywords: training, user, vision, distance, information
Legal status: Pending (assumed; not a legal conclusion)
Application number: CN202210594609.0A
Other languages: Chinese (zh)
Inventors: 蓝章礼, 鲍芳芳, 范鸿
Current and original assignee: Chongqing Jiaotong University (the listed assignee may be inaccurate)
Application filed by Chongqing Jiaotong University; priority to CN202210594609.0A; publication of CN114931492A

Classifications

    • A61H5/00 Exercisers for the eyes (A: Human necessities; A61: Medical or veterinary science, hygiene; A61H: Physical therapy apparatus)
    • G06F21/31 User authentication (G: Physics; G06: Computing; G06F: Electric digital data processing; G06F21/00: Security arrangements; G06F21/30: Authentication)
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • A61H2201/5043 Displays (A61H2201/50: Control means; A61H2201/5023: Interfaces to the user)
    • A61H2205/024 Eyes (A61H2205/00: Devices for specific parts of the body; A61H2205/02: Head; A61H2205/022: Face)


Abstract

The invention relates to the technical field of intelligent systems, and in particular to an intelligent vision training system and method. The system comprises a training personnel positioning mechanism, a training information robot, a user side, and a background end. The background end contains an information base, a training library, and an analysis unit: the information base stores users' personal information, and the training library stores training schemes, each of which specifies a plurality of training distances, the display content at each training distance, and the training sequence of the distances. The user side comprises a verification unit, an interactive input unit, and a communication unit. The verification unit verifies the user's identity; after verification passes, the communication unit sends a scheme acquisition signal, which carries the user's identity information, to the background end. The system can effectively correct the user's vision without the risk of potential sequelae.

Description

Intelligent vision training system and method
Technical Field
The invention relates to the technical field of intelligent systems, in particular to an intelligent vision training system and method.
Background
Myopia is a form of refractive error: with the eye in a relaxed state, parallel rays of light entering the eye are focused in front of the retina rather than on it, so no sharp image can form on the retina. In recent years the incidence of myopia in China has risen markedly, and myopia has become a major public health problem affecting the eye health of the population, particularly teenagers. Poor eye-use habits, such as playing computer games for long periods, can cause the zonules, ciliary muscles, and crystalline lens to become dysfunctional or cramped, which in turn changes the axial length of the eye; the result is myopia, in which distant objects cannot be seen clearly or appear blurred.
At present, myopia is treated mainly by surgery; surgery takes effect quickly, but it is expensive and may carry potential sequelae. Vision correction with therapeutic instruments, such as scintillation (flicker) training and eye-movement training devices, has also been proposed. However, such instruments sit too close to the patient's eyes, and prolonged training on a near object increases the visual burden, so no corrective effect is achieved.
Therefore, how to achieve vision correction without potential sequelae is a problem that urgently needs to be solved.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides an intelligent vision training system that can effectively correct a user's vision without the risk of potential sequelae.
To solve this technical problem, the invention adopts the following technical solution:
an intelligent vision training system comprises a training personnel positioning mechanism, a training information robot, a user side, and a background end; the background end contains an information base and a training library: the information base stores users' personal information, and the training library stores training schemes; each training scheme specifies a plurality of training distances, the display content at each training distance, and the training sequence of the distances;
the user side comprises a verification unit and a communication unit; the verification unit verifies the user's identity; after verification passes, the communication unit sends a scheme acquisition signal, containing the user's identity information, to the background end; on receiving the scheme acquisition signal, the background end extracts the corresponding personal information from the information base according to the user's identity information, matches a corresponding training scheme from the training library according to that information, and feeds the scheme back to the corresponding communication unit; the communication unit also establishes communication with the training information robot and forwards the most recently received training scheme to it;
the training information robot integrates a control unit, a display unit, a mobile unit, a distance measuring unit, and a voice interaction unit; the training personnel positioning mechanism is located directly in front of the display unit; the distance measuring unit measures the distance between the training information robot and the training personnel positioning mechanism and sends it to the control unit; the control unit drives the mobile unit and the display unit according to the content of the training scheme and the measured distance, and also directs the voice interaction unit to give spoken guidance during training; the voice interaction unit receives the user's spoken training feedback and passes it to the control unit, which judges the user's training completion status from that feedback.
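The positioning behavior described above (drive the mobile unit until the distance measuring unit reports the target distance) can be sketched as a simple feedback loop. This is a minimal simulation, not the patent's implementation; the class, function names, step size, and tolerance are all assumptions made for illustration.

```python
class SimulatedRobot:
    """Stand-in for the training information robot's mobile and ranging units."""

    def __init__(self, distance_to_user):
        self.distance = distance_to_user  # meters, as the ranging unit would report

    def measure_distance(self):
        return self.distance

    def move(self, delta):
        # Positive delta moves the robot away from the user.
        self.distance += delta


def move_to_training_distance(robot, target, tolerance=0.05, step=0.1):
    """Step the mobile unit until the measured distance is within tolerance."""
    while abs(robot.measure_distance() - target) > tolerance:
        error = target - robot.measure_distance()
        robot.move(max(-step, min(step, error)))  # bounded step toward the target
    return robot.measure_distance()


final = move_to_training_distance(SimulatedRobot(1.0), target=2.5)
```

In the real system the control unit would issue motor commands and re-read the range finder each cycle; the closed loop above captures only that move-measure-compare structure.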
Preferably, the training scheme further specifies an eye-use strategy for each display content at each training distance; the eye-use strategy comprises the training order of the eyes and the training duration for each. The personal information includes age, sex, vision condition, the cause of impaired vision, and the parents' vision conditions.
Preferably, the control unit also generates the user's training information from the user's training completion status and sends it to the corresponding communication unit; the communication unit forwards the received training information to the background end, which stores it in the information base;
and after receiving the verification-passed information, the background end extracts the corresponding personal information and training information from the information base according to the user's identity information, and matches a corresponding training scheme from the training library according to the extracted personal and training information.
Preferably, the user side further comprises an interactive input unit, through which the user enters personal information and training-related information, and through which a test signal can be entered and sent to the control unit via the communication unit. On receiving the test signal, the control unit drives the mobile unit, the display unit, and the voice interaction unit according to a preset vision test scheme to test the user's vision. During the test, the control unit adjusts the test content according to the feedback voice received by the voice interaction unit, generates the user's vision test result, and sends it to the communication unit;
the communication unit forwards the vision test result to the background end, which updates the vision condition in the user's personal information accordingly;
and the control unit also runs the same preset vision test, driving the mobile unit, the display unit, and the voice interaction unit, once it judges that the user has finished the content of the training scheme.
Preferably, the interactive input unit can also enter an analysis signal and send it to the background end through the communication unit. The background end further comprises an analysis unit; after the background end receives the analysis signal, the analysis unit analyses the user's vision improvement from the user's training information and updated vision condition, generates a vision improvement report, and feeds it back to the corresponding communication unit.
Preferably, the training schemes include a screen focusing training scheme and an eye chart focusing training scheme. The screen focusing training scheme specifies the content and size of the focusing target, the training initial distance, a first spacing between adjacent training distances, the order of the training distances, and the training order of the eyes at each distance. The scheme library also stores a training comparison table, which maps the weaker eye's vision data to a training initial distance and to a focusing-target size; when a screen focusing training scheme is matched, the corresponding initial distance and target size are looked up from the user's weaker-eye vision data;
the eye chart focusing training scheme specifies the number, arrangement, and size of the characters, the training starting distance, a second spacing between adjacent training distances, the order of the training distances, and the training order of the eyes at each distance. The training comparison table also maps the weaker eye's vision data to a training starting distance and to a character size; when this scheme is matched, the corresponding starting distance and character size are looked up from the user's weaker-eye vision data.
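The training comparison table lookups can be illustrated with a small sketch. The patent states only that such mappings exist; the table structure, vision-value breakpoints, and output values below are invented for illustration.

```python
# Hypothetical form of the training comparison table for the screen focusing
# scheme: ranges of the weaker eye's vision value (5-point notation assumed)
# mapped to a training initial distance (m) and a focusing-target size (mm).
SCREEN_FOCUS_TABLE = [
    # (min_vision, max_vision, start_distance_m, target_size_mm)
    (4.0, 4.5, 1.0, 40),
    (4.5, 4.9, 1.5, 25),
    (4.9, 5.3, 2.0, 15),
]


def match_screen_focus_params(weaker_eye_vision):
    """Return (start_distance, target_size) for the weaker eye's vision value."""
    for lo, hi, dist, size in SCREEN_FOCUS_TABLE:
        if lo <= weaker_eye_vision < hi:
            return dist, size
    raise ValueError("vision value outside table range")


dist, size = match_screen_focus_params(4.6)
```

A second table of the same shape would serve the eye chart scheme, mapping the same vision data to a training starting distance and a character size.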
The invention also provides an intelligent vision training method that uses the intelligent vision training system above and comprises the following steps:
S1, logging in at the user side and matching a training scheme;
S2, performing eye movement training according to the preset preparation requirements;
S3, performing screen focusing training according to the screen focusing training scheme;
S4, performing long-range focusing training for X minutes according to the preset long-range training requirements;
S5, performing eye chart focusing training according to the eye chart focusing training scheme;
S6, performing long-range viewing for Y minutes according to the preset long-range viewing requirements;
S7, performing eye exercises for Z minutes;
where X, Y, and Z are preset numbers of minutes.
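The seven steps above can be expressed as an ordered session plan. This sketch is illustrative only; the default values for X, Y, and Z are assumptions, since the patent leaves them as presets.

```python
def build_session_plan(x_min=3, y_min=2, z_min=5):
    """Return the S1-S7 session as (step_name, duration_minutes) pairs.

    Steps without a fixed duration (they end on completion of their
    scheme content) carry None instead of a minute count.
    """
    return [
        ("login_and_match_scheme", None),      # S1
        ("eye_movement_training", None),       # S2
        ("screen_focus_training", None),       # S3
        ("long_range_focus_training", x_min),  # S4: X minutes
        ("eye_chart_focus_training", None),    # S5
        ("long_range_viewing", y_min),         # S6: Y minutes
        ("eye_exercises", z_min),              # S7: Z minutes
    ]


plan = build_session_plan()
```

A scheduler in the control unit could walk this list in order, prompting each step through the voice interaction unit and timing the fixed-duration steps.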
Preferably, in step S2, the preparation requirements specify the rotation direction, rotation order, and rotation duration of the eyeballs;
preferably, in step S3, the screen focusing training comprises:
S31, moving the training information robot until its distance to the user equals the training initial distance, and displaying the display content; the display content comprises a background and a focusing target, the color difference between the background and the focusing target is greater than a preset value, and the focusing target is a pattern, character, letter, or number;
S32, extracting the user's binocular vision data to determine the user's weaker eye, reminding the user to cover the weaker eye, and focusing the stronger eye on the focusing target; if both eyes have the same vision, the left eye is treated as the stronger eye;
S33, if the user's feedback indicates that the focusing target is clearly visible, timing S seconds; after S seconds, reminding the user to cover the stronger eye and focus the weaker eye on the focusing target;
S34, if the feedback indicates that the focusing target is clearly visible, timing S seconds; after S seconds, reminding the user to focus both eyes on the focusing target simultaneously;
S35, if the feedback indicates that the focusing target is clearly visible, timing S seconds; after S seconds, reminding the user that screen focusing training at the current distance is finished; after moving the training information robot away from the user by the first spacing, judging whether its distance to the user exceeds the preset farthest focusing distance: if not, going to S32; if so, going to S4; the first spacing is 0.3-0.8 meters;
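The S31-S35 flow amounts to two nested loops: an outer walk through distances stepped by the first spacing until the farthest focusing distance is passed, and a fixed eye order at each distance. A minimal sketch, with an illustrative spacing chosen from the stated 0.3-0.8 m range:

```python
def screen_focus_distances(start, spacing, farthest):
    """List each training distance from start, stepping by the first spacing,
    up to and including the farthest focusing distance."""
    distances = []
    d = start
    while d <= farthest:
        distances.append(round(d, 2))
        d += spacing
    return distances


def phases_at_distance():
    # Order per S32-S34: weaker eye, then stronger eye, then both eyes.
    return ["weaker_eye", "stronger_eye", "both_eyes"]


dists = screen_focus_distances(start=1.0, spacing=0.5, farthest=3.0)
```

The full session is then the cross product: for each distance in `dists`, run the three phases, timing S seconds per phase after the user confirms the target is clear.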
preferably, in step S4, the long-range focusing training comprises:
S41, selecting an object with a clear boundary, more than 100 meters from the user, as the distant target;
S42, reminding the user to cover the weaker eye and focus the stronger eye on the distant target; if both eyes have the same vision, the left eye is treated as the stronger eye;
S43, after P minutes, reminding the user to cover the stronger eye and focus the weaker eye on the distant target;
S44, after P minutes, reminding the user to focus both eyes on the distant target;
S45, after P minutes, reminding the user that the long-range focusing training is finished;
preferably, in step S5, the eye chart focusing training comprises:
S51, moving the training information robot until its distance to the user equals the training starting distance, and displaying the characters; the characters are displayed in black on a white screen background, at the eye chart size corresponding to the user's weaker-eye vision plus 0.1; multiple characters are displayed, with random orientations;
S52, reminding the user to cover the weaker eye and identify the designated characters in sequence with the stronger eye;
S53, if the success rate over the latest Q character identifications exceeds the preset success rate, reminding the user after R seconds to cover the stronger eye and identify the successively designated characters with the weaker eye;
S54, if the success rate over the latest Q identifications exceeds the preset success rate, reminding the user after R seconds to identify the successively designated characters with both eyes;
S55, if the success rate over the latest Q identifications exceeds the preset success rate, reminding the user after R seconds that eye chart focusing training at the current distance is finished; after moving the training information robot away from the user by the second spacing, judging whether its distance to the user exceeds the preset farthest character distance: if not, going to S52; if so, going to S6; the second spacing is 0.4-0.7 meters;
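The gating rule in S53-S55 (advance only once the success rate over the latest Q identifications exceeds a preset rate) can be sketched as a sliding-window check. Q and the threshold below are illustrative values, not from the patent.

```python
from collections import deque


def make_recognition_gate(q=5, required_rate=0.8):
    """Return a recorder that opens the gate once the success rate over the
    latest q identifications exceeds required_rate."""
    recent = deque(maxlen=q)  # sliding window of the latest q results

    def record(correct):
        recent.append(bool(correct))
        if len(recent) < q:
            return False  # not enough attempts yet to judge
        return sum(recent) / q > required_rate

    return record


gate = make_recognition_gate()
results = [gate(r) for r in [True, True, False, True, True, True, True, True]]
```

Here the gate stays closed while any failure remains inside the five-attempt window (4/5 = 0.8 does not exceed 0.8) and opens on the eighth attempt, when the window holds five successes.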
preferably, in step S6, the long-range viewing requirement is to view objects beyond a preset long-range distance.
Compared with the prior art, the invention has the following beneficial effects:
1. When the user uses the system, once the user side has verified the user's identity, the background end matches a training scheme suited to that user for vision correction training. Specifically, the training scheme specifies a plurality of training distances, the display content at each distance, and the training sequence of the distances. The control unit of the training information robot drives the mobile unit and the display unit according to the scheme, and the voice interaction unit gives the user spoken guidance. At the same time, the control unit judges the user's progress from the training feedback voice received by the voice interaction unit, and so controls the training until the user has completed every part of the scheme. The user only needs to stay at the training personnel positioning mechanism, follow the robot's voice prompts for corrective training, and give voice feedback in real time.
Through this process the user is trained, within a concentrated period, to observe different objects (such as figures, numbers, and characters) at different distances. This continuously exercises and strengthens the user's control of the eye muscles: under active control the muscles contract and relax, the crystalline lens thickens and thins and recovers its elasticity, and the accommodative ability of the eye muscles improves, so that the image of a distant object gradually comes to fall on the user's retina; the user's vision is thereby corrected. Moreover, the training applies no external force to the eye tissues, so there are no potential sequelae.
In conclusion, the system can effectively correct the user's vision without the risk of potential sequelae.
2. The system automatically matches a suitable training scheme to the user from the user's personal information and training history, ensuring the scheme fits the user. A training scheme specifies not only the training distances, the display content at each distance (the size, color, and brightness of the display object, the color of the background, and so on), and the training sequence, but also an eye-use strategy for each display content at each distance, comprising the training order of the eyes and the training durations. For example, training may proceed in the order left eye, right eye, both eyes, with the same or different picture content and durations for each eye; or the weaker of the two eyes may be trained with extra emphasis. Different users thus each receive corrective training suited to them, which ensures the training's effectiveness.
3. The system automatically tests the user's vision when corrective training finishes and updates the user's vision condition from the result, so the training scheme can be adjusted in real time, throughout the correction period, according to how the user's vision is responding. It also lets the user follow the correction effect in real time.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a logic block diagram of an embodiment;
FIG. 2 is a diagram showing a hardware configuration in the embodiment;
FIG. 3 is a schematic structural diagram of the training information robot in the embodiment.
Detailed Description
The invention is described in further detail below through a specific embodiment.

Embodiment:
As shown in FIG. 1, this embodiment discloses an intelligent vision training system comprising a training personnel positioning mechanism, a training information robot, a user side, a background end, and a management end.
In this embodiment, a chair is provided on the training personnel positioning mechanism, and the background end is a cloud server. The background end contains an information base and a training library: the information base stores users' personal information and training information, and the training library stores training schemes. Each scheme specifies a plurality of training distances, the display content at each distance, the training sequence of the distances, and an eye-use strategy for each display content at each distance, the strategy comprising the training order of the eyes and the training durations. The display content covers the content, size, color, and brightness of the display object and the color of the background. The personal information includes age, sex, vision condition, the cause of impaired vision, and the parents' vision conditions. In practice, to ensure the personal information is complete and valid, it may also include the basic vision conditions of the user's siblings.
In this embodiment, the user side is a smartphone loaded with the corresponding applet; in other embodiments it may be a smartphone or smart tablet loaded with a corresponding app. The user side comprises a registration unit, a verification unit, an interactive input unit, and a communication unit.
The registration unit registers the user. Once registration passes, the user side sends the user's personal information to the background end, where it is stored in the information base. The verification unit verifies the user's identity, i.e. authenticates the user at login. In this embodiment the verification is by password; in other embodiments fingerprint or face recognition may be used.
After the verification unit passes the user's identity verification, the communication unit sends a scheme acquisition signal, containing the user's identity information, to the background end. On receiving the signal, the background end extracts the corresponding personal information and training information from the information base, matches a corresponding training scheme from the training library according to the extracted information and the scheme recommended by a deep-learning model (that is, the background end matches against the training library automatically; in this embodiment the matching model is trained with a deep-learning algorithm), and feeds the scheme back to the corresponding communication unit. The communication unit also establishes communication with the training information robot and forwards the most recently received training scheme to it.
The training information robot integrates a control unit, a display unit, a mobile unit, a distance measuring unit, and a voice interaction unit.
The distance measuring unit measures the distance between the training information robot and the training personnel positioning mechanism and sends it to the control unit. The control unit drives the mobile unit and the display unit according to the content of the training scheme and the measured distance, and directs the voice interaction unit to give spoken guidance during training. The voice interaction unit receives the user's spoken training feedback and passes it to the control unit, which judges the user's training completion status from that feedback, generates the user's training information accordingly, and sends it to the corresponding communication unit. The communication unit forwards the training information to the background end, which stores it in the information base.
Through the interactive input unit the user can enter a test signal, which is sent to the control unit via the communication unit. On receiving it, the control unit drives the mobile unit, the display unit, and the voice interaction unit according to a preset vision test scheme to test the user's vision; in this embodiment the test uses a vision test chart at the standard test distance. During the test, the control unit adjusts the test content according to the feedback voice received by the voice interaction unit (for example, when the user looks at the Mth symbol from the right in the Nth row, the voice feedback may be "unclear", or the reported result may be wrong or correct), generates the user's vision test result, and sends it to the communication unit. The communication unit forwards the result to the background end, which updates the vision condition in the user's personal information. Thus, if before a session the user is unsure whether their current vision still matches the vision condition stored at the background end, the stored value can be refreshed through a vision test.
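The feedback-driven adjustment of the test content can be sketched with a simple chart walk. The row-by-row progression and the stopping rule below are assumptions for illustration; the patent specifies only that the test content is adjusted according to the user's voice feedback.

```python
def run_vision_test(answers, n_rows=12):
    """Walk chart rows from largest (row 0) to smallest based on correctness.

    `answers` stands in for the user's voice feedback: answers[row] is True
    if the user reads that row correctly. Returns the smallest row read
    correctly, or None if even the largest row fails.
    """
    best = None
    for row in range(n_rows):
        if answers.get(row, False):
            best = row  # row read correctly; present a smaller row next
        else:
            break       # first failure ends the test
    return best


result = run_vision_test({0: True, 1: True, 2: True, 3: False})
```

The returned row index would then be converted to a vision value and sent to the background end to update the stored vision condition.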
The control unit also runs the same preset vision test, driving the mobile unit, the display unit, and the voice interaction unit, once it judges that the user has finished the content of the training scheme. The training scheme can therefore be adjusted in real time, throughout the correction period, according to the user's vision correction progress, and the user can follow the correction effect in real time.
The interactive input unit can also enter an analysis signal and send it to the background end through the communication unit. The background end further contains an analysis unit; after the analysis signal arrives, the analysis unit analyses the user's vision improvement from the user's training information and updated vision condition, generates a vision improvement report, and feeds it back to the corresponding communication unit. Through the report the user can see the effect and progress of correction since training began. The interactive input unit also lets the user exchange related information with the system and perform actions outside the preset flow, such as stopping midway or changing information; these are mature technologies and are not described further here.
The management terminal comprises a statistical analysis unit and an updating unit, wherein the statistical analysis unit is used for statistically analyzing the user data in the information base, and the updating unit is used for updating the training scheme in the training base. In this embodiment, the management terminal is a PC loaded with the corresponding application program.
When using the system, the user is first guided to the trainee positioning mechanism. To keep the user's position stable and comfortable, a chair can be arranged on the trainee positioning mechanism with its front facing the display unit of the training information robot. The user then puts on the peripheral device, logs in at the user side for identity verification, and connects the user side with the training information robot. After the identity verification passes, the background end matches a suitable training scheme and sends it to the user side, which forwards it to the training information robot.
The training information robot then carries out vision correction training on the user according to the content of the training scheme. Specifically, the moving part moves in turn to each training distance from the user according to the training distances and their training sequence in the scheme; during movement, feedback data from the ranging unit is used to judge whether the robot has moved into place, and once in place the display content for that distance is shown. In this implementation, the colour scheme of the display content is a blue background with white characters or a blue background with yellow characters. Voice guidance is given to the user through the peripheral end, for example "please cover the left eye and view XXX on the display screen with the right eye". After the training scheme is completed, the training information robot sends the user's training data to the background end, where it is stored in the training library.
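The move-then-display loop described above can be sketched as follows. `read_range` and `drive` are hypothetical stand-ins for the ranging unit and the moving unit, and the tolerance and step size are illustrative assumptions, not values from the patent.

```python
def move_to_distance(target_m, read_range, drive, tol=0.02, max_steps=200):
    """Move until the ranging feedback matches the target distance.

    read_range() returns the current distance (metres) between the robot
    and the trainee positioning mechanism; drive(delta) commands a small
    movement of delta metres (positive = away from the trainee).
    """
    for _ in range(max_steps):
        error = target_m - read_range()
        if abs(error) <= tol:
            return True                      # moved into place
        drive(max(-0.1, min(0.1, error)))    # bounded step toward the target
    return False

def run_training_distances(distances_m, read_range, drive, show):
    """Visit each training distance in order; show(d) displays its content."""
    for d in distances_m:
        if move_to_distance(d, read_range, drive):
            show(d)

# Simulated hardware for illustration
pos = [0.5]
shown = []
def _read():
    return pos[0]
def _drive(delta):
    pos[0] += delta
run_training_distances([1.2, 2.0], _read, _drive, shown.append)
```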
By training the user to observe different objects (such as patterns, numbers and characters) at different observation distances within concentrated time periods, the system makes the user's eye muscle groups contract and relax and the crystalline lens thicken and thin, restoring the elasticity of the lens and strengthening the accommodation ability of the eye muscles, so that the image of a distant object gradually falls on the user's retina and the goal of correcting the user's vision is achieved. Moreover, the training process applies no external force that could damage the user's eye tissue, so there are no potential sequelae. The user's vision can therefore be corrected effectively without risk of sequelae.
In addition, the system automatically matches a suitable training scheme according to the user's personal information and training status, ensuring that the scheme fits the user. The training scheme includes not only a plurality of training distances, the display content at each training distance (object content, size, colour, brightness and background colour) and the training sequence of the distances, but also an eye-use strategy for each display content at each distance, covering the training order of the eyes and the training duration. For example, training may proceed in the order of left eye, right eye, then both eyes, with the same or different training picture content and durations for each eye; or a weaker left or right eye may receive extra emphasis. In this way, each user receives correction training adapted to him or her, ensuring the effectiveness of the training.
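The scheme fields listed above can be pictured as a small data structure. Everything below (field names, distances, durations) is an illustrative assumption, not data from the patent.

```python
# Hypothetical layout for one training scheme, mirroring the fields the
# text lists: distances, per-distance display content, ordering, and a
# per-distance eye-use strategy (eye order and training duration).
plan = {
    "distances_m": [1.0, 1.5, 2.0],
    "order": [0, 1, 2],          # visit the distances in this sequence
    "display": {
        0: {"content": "E", "size_pt": 48, "color": "white",
            "brightness": 0.8, "background": "blue"},
        1: {"content": "3", "size_pt": 36, "color": "yellow",
            "brightness": 0.8, "background": "blue"},
        2: {"content": "pattern", "size_pt": 24, "color": "white",
            "brightness": 0.8, "background": "blue"},
    },
    "eye_strategy": {
        0: [("left", 40), ("right", 40), ("both", 40)],
        1: [("right", 60)],      # emphasise a weaker right eye
        2: [("left", 40), ("right", 40), ("both", 40)],
    },
}

def session_steps(plan):
    """Flatten a plan into (distance, eye, seconds, content) steps."""
    steps = []
    for i in plan["order"]:
        d = plan["distances_m"][i]
        content = plan["display"][i]["content"]
        for eye, seconds in plan["eye_strategy"][i]:
            steps.append((d, eye, seconds, content))
    return steps

steps = session_steps(plan)
```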
Preferably, to ensure the training effect, the training scheme includes a screen focusing training scheme and an eye chart focusing training scheme. The screen focusing training scheme comprises the content of the focusing target, the size of the focusing target, the training initial distance, the first distance between adjacent training distances, the sequence of the training distances and the training order of the eyes at each training distance. A training comparison table is also stored in the scheme library; it records the mapping between the training initial distance and the vision data of the eye with poorer vision, and between the size of the focusing target and that vision data. When the screen focusing training scheme is matched, the corresponding training initial distance and focusing target size are matched according to the vision data of the user's eye with poorer vision. The eye chart focusing training scheme comprises the number of characters, their arrangement, the character size, the training starting distance, the second distance between adjacent training distances, the sequence of the training distances and the training order of the eyes at each training distance. The training comparison table also records the mapping between the training starting distance and the vision data of the eye with poorer vision, and between the character size and that vision data. When this scheme is matched, the corresponding training starting distance and character size are matched according to the vision data of the user's eye with poorer vision.
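The matching step can be sketched as a threshold look-up. The table values below are illustrative placeholders for the real training comparison table, which the text says is stored in the scheme library.

```python
import bisect

# Hypothetical training comparison table: each row is (weaker-eye vision
# threshold on the 5-point scale, training start distance in metres,
# focus-target size in points). All numbers are illustrative only.
TABLE = [
    (4.0, 0.8, 72),
    (4.3, 1.0, 60),
    (4.6, 1.2, 48),
    (4.9, 1.5, 36),
]

def match_parameters(weaker_eye_vision):
    """Match start distance and target size to the weaker eye's vision."""
    thresholds = [row[0] for row in TABLE]
    i = max(0, bisect.bisect_right(thresholds, weaker_eye_vision) - 1)
    _, start_distance_m, target_size_pt = TABLE[i]
    return start_distance_m, target_size_pt
```

The same look-up shape serves the eye chart scheme, with character size in place of target size.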
In specific implementation, the design of the screen focusing training scheme and the visual chart focusing training scheme can refer to the following reference designs:
[Reference designs for the screen focusing and eye chart focusing training schemes, reproduced as an image (Figure BDA0003667293450000101) in the original publication and not available in text form.]
The application also provides an intelligent vision training method, which uses the above intelligent vision training system and comprises the following steps:
s1, logging in a user side, and matching a training scheme;
s2, performing eye movement training according to preset preparation requirements; the preparation requirements include the rotation direction, rotation sequence and rotation time of the eyeball, for example, first rotate the eyeball clockwise for 30 seconds, rest for 5 seconds, and then rotate the eyeball counterclockwise for 30 seconds.
S3, performing screen focusing training according to the content of the screen focusing training scheme;
specifically, the screen focus training includes:
s31, moving the training information robot, enabling the distance between the training information robot and the user to be equal to the initial training distance, and displaying the display content; the display content comprises a background and a focusing target, the color difference value between the color of the background and the color of the focusing target is greater than a preset value, and the content of the focusing target is patterns, characters, letters or numbers; in this embodiment, the focus target is a pattern.
S32, extracting binocular vision data of the user to determine eyes with poor vision of the user, reminding the user to cover the eyes with poor vision, and focusing the eyes with good vision on a focusing target; if the vision of the two eyes of the user is the same, the left eye is used as the eye with better vision;
S33, if the user feeds back that the focusing target can be seen clearly, timing starts for S seconds; after S seconds, the user is reminded to cover the eye with better vision and to focus the eye with poorer vision on the focusing target; in this example, S is 40.
S34, if the user feeds back that the focusing target can be seen clearly, timing starts for S seconds; after S seconds, the user is reminded to focus both eyes on the focusing target at the same time;
S35, if the user feeds back that the focusing target can be seen clearly, timing starts for S seconds; after S seconds, the user is reminded that the screen focusing training at the current distance is finished; after moving the training information robot away from the user by the first distance, it is judged whether the distance between the training information robot and the user is greater than the preset farthest focusing distance; if not, go to S32, and if so, go to S4; wherein the first distance is 0.3-0.8 meters;
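A compact sketch of the S31-S35 loop above; `remind` and `sees_clearly` are hypothetical callbacks standing in for the voice interaction unit, and the specific distances used in the example are illustrative.

```python
def screen_focus_training(start_m, spacing_m, max_m, s_seconds,
                          remind, sees_clearly):
    """Walk through S32-S35 at each distance until the farthest
    focusing distance is exceeded."""
    completed = []
    distance = start_m
    while distance <= max_m:
        # S32-S34: stronger eye, then weaker eye, then both eyes,
        # S seconds each (only while the user reports a clear view).
        for eyes in ("stronger eye", "weaker eye", "both eyes"):
            remind(f"focus on the target with your {eyes}")
            if sees_clearly():
                completed.append((round(distance, 1), eyes, s_seconds))
        remind("screen focusing at this distance is finished")   # S35
        distance += spacing_m    # first distance, 0.3-0.8 m per the text
    return completed

guidance = []
rounds = screen_focus_training(1.0, 0.5, 2.0, 40,
                               guidance.append, lambda: True)
```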
and S4, performing the long-range focusing training for X minutes according to the preset long-range training requirement.
Specifically, the long-range focusing training includes:
s41, selecting an object with a definite boundary (such as characters, an antenna and the like) as a distant view target, wherein the distance between the distant view target and a user is more than 100 meters;
s42, reminding the user to cover the eyes with poor eyesight and focusing the eyes with good eyesight on the distant view target; if the vision of the two eyes of the user is the same, the left eye is used as the eye with better vision;
s43, reminding the user to shield the eyes with better vision and focus the eyes with poorer vision on the distant view target after P minutes; in this example, P is 3.
S44, reminding the user to focus the two eyes on the distant view target after P minutes;
and S45, after P minutes, reminding the user that the distant-view focusing training is finished.
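The S42-S45 sequence above reduces to a fixed timed schedule; a minimal sketch, with P = 3 as in the embodiment:

```python
def distant_focus_schedule(p_minutes=3):
    """S42-S45 as a timed phase list: the weaker eye is covered first
    (so the stronger eye trains), then the weaker eye trains, then both
    eyes, P minutes each."""
    return [
        ("stronger eye focuses on the distant target", p_minutes),  # S42
        ("weaker eye focuses on the distant target", p_minutes),    # S43
        ("both eyes focus on the distant target", p_minutes),       # S44
    ]

sched = distant_focus_schedule()
```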
And S5, performing the visual chart focusing training according to the content of the visual chart focusing training scheme.
Specifically, the eye chart focusing training comprises:
S51, moving the training information robot so that the distance between it and the user equals the training starting distance, and displaying characters. The characters are shown in black on a white screen background, and the character size is the size on the visual chart corresponding to the weaker eye's vision plus 0.1; for example, if the user's weaker-eye vision is 4.7, the character size is that corresponding to 4.8 on the visual chart. Multiple characters are displayed with random orientations;
S52, reminding the user to cover the eye with poorer vision and to identify each designated character in turn with the eye with better vision;
S53, if the recognition success rate over the latest Q characters is greater than the preset success rate, the user is reminded after R seconds to cover the eye with better vision and to identify the successively designated characters with the eye with poorer vision; in this example, Q is 8 and R is 20.
S54, if the recognition success rate over the latest Q characters is greater than the preset success rate, the user is reminded after R seconds to identify the successively designated characters with both eyes;
S55, if the recognition success rate over the latest Q characters is greater than the preset success rate, the user is reminded after R seconds that the eye chart focusing training at the current distance is finished; after moving the training information robot away from the user by the second distance, it is judged whether the distance between the training information robot and the user is greater than the preset farthest character distance; if not, go to S52, and if so, go to S6; wherein the second distance is 0.4-0.7 meters.
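The character-size rule of S51 and the advancement test of S53-S55 can be sketched as follows. The preset success rate is not given in the text, so the 0.8 threshold is an assumption.

```python
from collections import deque

def chart_char_size(weaker_eye_vision):
    """S51 sizing rule: use the chart row 0.1 above the weaker eye's
    vision (e.g. 4.7 -> the characters of the 4.8 row)."""
    return round(weaker_eye_vision + 0.1, 1)

class RecognitionGate:
    """S53-S55 advancement check over the most recent Q answers."""
    def __init__(self, q=8, threshold=0.8):
        self.recent = deque(maxlen=q)   # sliding window of Q answers
        self.q = q
        self.threshold = threshold      # illustrative "preset success rate"

    def record(self, correct):
        """Record one answer; return True once a full window of Q recent
        answers has a success rate above the threshold."""
        self.recent.append(bool(correct))
        if len(self.recent) < self.q:
            return False
        return sum(self.recent) / self.q > self.threshold

gate = RecognitionGate()
answers = [gate.record(True) for _ in range(8)]
```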
S6, performing long-range viewing for Y minutes according to the preset long-range viewing requirement; in specific implementation, the requirement is to gaze at objects beyond a preset long-range distance.
S7, performing eye exercises for Z minutes;
wherein X, Y, Z is preset minutes. In the embodiment, the value range of X is 10-20 minutes, the value range of Y is 8-10 minutes, and the value range of Z is 3-7 minutes. The content of the long-range focusing training and the specific content of the long-range observation can be specifically set by those skilled in the art according to the environment of the training field, and are not described herein again.
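The whole S1-S7 session can be outlined with the stated duration ranges. The default durations below are mid-range choices for illustration, not values fixed by the patent.

```python
def session_plan(x_minutes=15, y_minutes=9, z_minutes=5):
    """S1-S7 session outline; X is 10-20 min, Y is 8-10 min and
    Z is 3-7 min per the text."""
    assert 10 <= x_minutes <= 20
    assert 8 <= y_minutes <= 10
    assert 3 <= z_minutes <= 7
    return [
        ("S1 log in and match a training scheme", None),
        ("S2 eye-movement warm-up", None),
        ("S3 screen focusing training", None),
        ("S4 distant-view focusing training", x_minutes),
        ("S5 eye chart focusing training", None),
        ("S6 long-range viewing", y_minutes),
        ("S7 eye exercises", z_minutes),
    ]

plan_steps = session_plan()
```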
Description of the principle:
Taking myopia as an example (hyperopia follows the same principle): true myopia is characterised by elongation of the eye's axis, which traditional medicine regards as an important feature of true myopia, and its degree cannot be reduced or eliminated merely by relaxing the ciliary muscle. However, questions such as whether an elongated eye axis truly prevents focusing on the retina, and whether other eye muscle groups can cooperate with the ciliary muscle to focus, still deserve study. The eye is an imaging system: a longer axial distance corresponds to a longer focal length in a camera, and a camera with a normal focusing system can always focus correctly regardless of focal length. The human body has strong self-repair and self-adjustment abilities, and many human capabilities can be improved to some degree through training, especially through a well-designed training scheme. Therefore, reasonable training of the human eye can certainly improve its focusing ability, and a good method may improve it greatly. Stronger focusing ability includes both stronger eye muscle groups and a stronger vision control system. Early experiments showed that the improvement is remarkable, both for young students with myopia and for adults who have been myopic for many years.
In the method, eye movement training warms up the eyeballs and so improves the effect of the subsequent focusing training. Screen focusing training at different distances then trains the eyes' focusing ability at those distances. Distant-view focusing training exercises the ability to focus on far targets and also provides some relaxation. Eye chart focusing training then trains the trainee's ability to resolve small targets; by steadily increasing the training distance and varying the character sizes, the trainee's fine recognition of targets at different distances improves. Finally, long-range viewing provides relaxation, and eye exercises provide relaxation and health care, reinforcing the training effect. By this method, vision can be improved effectively.
The complete training scheme was carried out with 11 students of a primary school, concentrated daily over a period of 5 days. Student K's naked-eye vision reached 5.2 after the first day of training, so K trained only once; Student I instead took part in four days of training. The training results obtained are shown in the table below.
[Training results table, reproduced as images (Figures BDA0003667293450000131 and BDA0003667293450000141) in the original publication and not available in text form.]
The training results show that, apart from Student K (who trained only 1 day) and Student I (who trained 4 days) and whose training was therefore out of step with the others, the binocular vision of the remaining 9 participants all reached or exceeded 5.0, averaging above 5.1; across the 11 participants, monocular vision improved by about 0.72 on average.
After this vision improvement training, 100% of the students who trained the full 5 days reached or exceeded 5.0, fully attaining a normal or even excellent level. Students with worse initial vision improved more: for example, Student I's initial vision in both eyes was below 4.0 (even the largest letters on the chart were unclear, and nothing beyond the third row could be made out), yet both eyes reached 5.1 on completing the training; similarly, Student G started at 4.0 and 4.3 and reached 5.2 in both eyes after training.
To facilitate understanding of the hardware structure in the present application, a specific embodiment is described below. The reference numerals include: trainee positioning mechanism 1, track 2, base 3, driving wheel 4, mounting portion 5, display screen 6, ranging sensor 7, sound pickup 8, loudspeaker 9.
As shown in Figs. 2 and 3, the system comprises a track 2, a trainee positioning mechanism 1 and a training information robot.
The track 2 is a linear track. The trainee positioning mechanism 1 is located on the extension line of the track 2; in this embodiment, the trainee positioning mechanism 1 comprises a chair whose front faces the track 2.
The training information robot comprises a base 3 with a moving unit arranged below it. The moving unit includes a driving motor and driving wheels 4; the driving wheels 4 engage the track 2 and move the training information robot along the track's length direction. There is more than one group of driving wheels 4: the groups are arranged along the length direction of the track 2, and the number of wheels in each group equals the number of tracks 2; the driving wheels 4 are driven through multiple drive shafts. This arrangement ensures the stability of the training information robot while it moves. In this embodiment there are preferably two tracks 2 and two groups of driving wheels 4. In specific implementation, a corresponding driven wheel can be arranged at each driving wheel 4; through the cooperation of the driven wheels and the track 2, the support points of the training information robot are increased, so that movement stability is guaranteed while the number of driving mechanisms is kept under control.
A mounting portion 5 is fixedly arranged above the base 3; in this implementation the mounting portion 5 is a cuboid. A display screen 6 and a ranging sensor 7 are fixed on the side wall of the mounting portion 5 facing the trainee positioning mechanism 1; the ranging sensor 7 measures the distance between the trainee positioning mechanism 1 and itself. In specific implementation, a detection hole is provided in the front side wall of the mounting portion 5, vertically below the display screen 6 and passing through the front side wall, and the probe of the ranging sensor 7 is exposed outward through this hole. In this embodiment, the display screen 6 is an LED screen and the ranging sensor 7 is an infrared ranging sensor. A microcontroller is mounted inside the mounting portion 5; in this embodiment it is a single-chip microcomputer of the STM32 series. The microcontroller is electrically connected with the display screen 6, the ranging sensor 7 and the moving unit respectively. In specific implementation, the microcontroller can be fixed on a mounting seat which is in turn fixed inside the mounting portion 5; the mounting seat makes the microcontroller's installation more convenient and more stable. The specific shape and size of the mounting seat can be set by those skilled in the art according to the internal space of the mounting portion 5 and are not described again here.
A voice playing module and a voice receiving module are also fixed on the training information robot and electrically connected with the microcontroller respectively. In this embodiment, the voice playing module consists of two loudspeakers 9 and the voice receiving module is a sound pickup 8. The detection hole is located in the middle of the area below the display screen 6; sound amplification openings are provided in the front side wall of the mounting portion 5 on either side of the detection hole, the two loudspeakers 9 pass through the front side wall, and their amplifying ends pass through the two openings respectively. A sound pickup hole is also provided in the front side wall of the mounting portion 5; the sound pickup 8 is fixed to the front side wall and its receiving probe is exposed outward through this hole. This mounting scheme makes reasonable use of the mounting positions on the mounting portion 5, so the modules can be installed more compactly. In specific implementation, a power supply module for powering the training information robot can also be integrated in the mounting portion, for example a rechargeable lithium battery. Power supply modules are common technology in the robot field and are not described again here.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the technical solutions, and those skilled in the art should understand that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all that should be covered by the claims of the present invention.

Claims (10)

1. An intelligent vision training system, characterized in that it comprises a training personnel positioning mechanism, a training information robot, a user side and a background end; an information base and a training base are arranged in the background end; the information base is used for storing personal information of users; the training base is used for storing training schemes; the content of a training scheme comprises a plurality of training distances, the display content at each training distance and the training sequence of the training distances;
the user side comprises an authentication unit and a communication unit; the authentication unit is used for carrying out user identity authentication; the communication unit is used for sending a scheme acquisition signal to the background terminal after the authentication unit passes the user identity authentication; the scheme acquisition signal comprises user identity information; the background end is used for extracting corresponding personal information from the information base according to the user identity information after receiving the scheme acquisition signal, matching a corresponding training scheme from the training base according to the provided personal information and feeding back the training scheme to the corresponding communication unit; the communication unit is also used for establishing communication with the training information robot and sending the training scheme received last time to the training information robot for establishing communication;
the training information robot is integrated with a control unit, a display unit, a mobile unit, a distance measuring unit and a voice interaction unit; the training personnel positioning mechanism is positioned right in front of the display unit; the distance measuring unit is used for measuring the distance between the training information robot and the training personnel positioning mechanism and sending the distance to the control unit; the control unit is used for controlling the mobile unit and the display unit to work according to the content of the training scheme and the distance between the training information robot and the training personnel positioning mechanism, and is also used for controlling the voice interaction unit to carry out voice guidance for training; the voice interaction unit is also used for receiving training feedback voice of the user and sending the training feedback voice to the control unit; the control unit is also used for judging the training completion condition of the user according to the training feedback voice of the user.
2. The intelligent vision training system of claim 1, wherein: the content of the training scheme further comprises eye-using strategies of all display contents under all training distances, wherein the eye-using strategies comprise training sequences and training duration of eyes; the personal information includes age, sex, vision condition, reason for impaired vision, and vision condition of parents.
3. The intelligent vision training system of claim 1, wherein: the control unit is also used for generating training information of the user according to the training completion condition of the user and sending the training information to the corresponding communication unit; the communication unit is also used for sending the received training information to the background end; the background end is also used for receiving the training information and storing the training information in the information base;
and after receiving the verification passing information, the background end extracts corresponding personal information and training information from the information base according to the user identity information, and matches a corresponding training scheme from the training base according to the extracted personal information and training information.
4. The intelligent vision training system of claim 3, wherein: the user side also comprises an interactive input unit, the interactive input unit is used for inputting personal information and training related information by a user, and is also used for inputting a test signal and sending the test signal to the control unit through the communication unit; the control unit is also used for controlling the mobile unit, the display unit and the voice interaction unit to work according to a preset vision test scheme after receiving the test signal, and performing vision test on the user; the control unit is also used for adjusting the vision detection content according to the feedback voice received by the voice interaction unit during the vision detection, generating a vision detection result of the user and then sending the vision detection result to the communication unit;
the communication unit is also used for receiving the vision detection result of the user and then sending the vision detection result to the background end, and the background end is used for updating the vision condition in the personal information of the user after receiving the vision detection result;
and the control unit is also used for controlling the mobile unit, the display unit and the voice interaction unit to work according to a preset vision test scheme after judging the content of the training scheme finished by the user, and carrying out vision test on the user.
5. The intelligent vision training system of claim 4, wherein: the interactive input unit is also used for inputting analysis signals and sending the analysis signals to the background end through the communication unit; the background end also comprises an analysis unit which is used for analyzing the vision improvement condition of the user according to the training information and the vision updating condition of the user after the background end receives the analysis signal, and generating a vision improvement report and feeding the vision improvement report back to the corresponding communication unit.
6. The intelligent vision training system of claim 4, wherein: the training scheme comprises a screen focusing training scheme and an eye chart focusing training scheme; the screen focusing training scheme comprises the content of a focusing target, the size of the focusing target, a training initial distance, a first distance between adjacent training distances, the sequence of the training distances and the training sequence of eyes at the training distances; the scheme library is also stored with a training comparison table, and the training comparison table is stored with a mapping relation between a training initial distance and vision data of eyes with poor vision and a mapping relation between the size of a focusing target and the vision data of the eyes with poor vision; when the screen focusing training scheme is matched, matching a corresponding training initial distance and the size of a focusing target according to vision data of eyes with poor vision of a user;
the visual chart focusing training scheme comprises the number, the arrangement mode, the size of characters, a training starting distance, a second distance between adjacent training distances, the sequence of the training distances and the training sequence of eyes at the training distances; the training comparison table is also stored with a mapping relation between the training starting distance and vision data of eyes with poor vision, and a mapping relation between the character size and the vision data of the eyes with poor vision; and when the training scheme is matched, matching a corresponding training starting distance and a corresponding character size according to the vision data of the eye with poor vision of the user.
7. An intelligent vision training method, using the intelligent vision training system of claim 1, comprising the steps of:
s1, logging in a user side, and matching a training scheme;
s2, performing eye movement training according to preset preparation requirements;
s3, performing screen focusing training according to the content of the screen focusing training scheme;
s4, performing long-range focusing training for X minutes according to the preset long-range training requirement;
s5, performing the focusing training of the visual chart according to the content of the focusing training scheme of the visual chart;
s6, performing long-range viewing for Y minutes according to the preset long-range viewing requirement;
s7, performing eye exercises for Z minutes;
x, Y, Z are all preset minutes.
8. The intelligent vision training method of claim 7, wherein: in the step S3, the screen focus training includes:
s31, moving the training information robot to enable the distance between the training information robot and the user to be equal to the initial training distance, and displaying the display content; the display content comprises a background and a focusing target, the color difference value between the color of the background and the color of the focusing target is greater than a preset value, and the content of the focusing target is patterns, characters, letters or numbers;
s32, extracting binocular vision data of the user to determine eyes with poor vision of the user, reminding the user to cover the eyes with poor vision, and focusing the eyes with good vision on a focusing target; if the vision of the two eyes of the user is the same, the left eye is used as the eye with better vision;
s33, if the user feeds back that the focusing target can be seen clearly, timing S seconds; after S seconds, reminding the user to cover the eye with better vision and to focus the eye with poorer vision on the focusing target;
s34, if the user feeds back that the focusing target can be seen clearly, timing S seconds; after S seconds, reminding the user to focus both eyes on the focusing target at the same time;
s35, if the user feeds back that the focusing target can be seen clearly, timing S seconds; after S seconds, reminding the user that the screen focusing training at the current distance is finished; after moving the training information robot away from the user by the first distance, judging whether the distance between the training information robot and the user is greater than the preset farthest focusing distance; if not, going to S32, and if so, going to S4; wherein the first distance is 0.3-0.8 meters.
9. The intelligent vision training method of claim 7, wherein: in step S4, the distant view focusing training includes:
S41, selecting an object with a clear boundary as the distant view target, wherein the distance between the distant view target and the user is more than 100 meters;
S42, reminding the user to cover the eye with poorer vision and focus the eye with better vision on the distant view target; if both of the user's eyes have the same vision, the left eye is treated as the eye with better vision;
S43, after P minutes, reminding the user to cover the eye with better vision and focus the eye with poorer vision on the distant view target;
S44, after P minutes, reminding the user to focus both eyes on the distant view target;
S45, after P minutes, reminding the user that the distant view focusing training is finished.
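The three timed phases of S42-S44 reduce to a simple loop, sketched below. This is a hypothetical illustration: `remind` and `sleep` are injected stand-ins for the robot's reminder and timer mechanisms, and none of these names come from the patent.

```python
import time

def distant_view_training(p_minutes, remind=print, sleep=time.sleep):
    """Run the three P-minute phases of S42-S44, then announce completion (S45)."""
    phases = [
        "cover the eye with poorer vision; focus the better eye on the distant target (S42)",
        "cover the eye with better vision; focus the poorer eye on the distant target (S43)",
        "focus both eyes on the distant target (S44)",
    ]
    for phase in phases:
        remind(phase)
        sleep(p_minutes * 60)   # wait P minutes before the next reminder
    remind("distant view focusing training finished (S45)")

# Instant dry run with a no-op timer:
distant_view_training(p_minutes=3, sleep=lambda _: None)
```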
10. The intelligent vision training method of claim 7, wherein: in step S5, the eye chart focusing training includes:
S51, moving the training information robot so that the distance between the training information robot and the user equals the initial training distance, and displaying characters; when the characters are displayed, the characters are black on a white screen background, and the character size is the size of the corresponding characters on the eye chart at the acuity obtained by adding 0.1 to the vision of the user's eye with poorer vision; a plurality of characters are displayed with random orientations;
S52, reminding the user to cover the eye with poorer vision, and identifying each designated character in sequence with the eye with better vision;
S53, if the recognition success rate of the latest Q characters is greater than a preset success rate, reminding the user after R seconds to cover the eye with better vision, and identifying each designated character in sequence with the eye with poorer vision;
S54, if the recognition success rate of the latest Q characters is greater than the preset success rate, reminding the user after R seconds to identify each designated character in sequence with both eyes;
S55, if the recognition success rate of the latest Q characters is greater than the preset success rate, reminding the user after R seconds that the eye chart focusing training at the current distance is finished; after moving the training information robot away from the user by a second distance, judging whether the distance between the training information robot and the user is greater than a preset farthest character distance; if not, going to S52, and if so, going to S6; wherein the second distance is 0.4-0.7 meters.
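The pass criterion of S53-S55 (success rate over the latest Q identifications) and the character sizing of S51 can be sketched as below. The sliding window uses `collections.deque`; the size formula assumes the conventional 5-arc-minute optotype, which the patent does not spell out, and all names are illustrative.

```python
import math
from collections import deque

class RecognitionWindow:
    """Track the latest Q identification outcomes and test the S53-S55 criterion."""
    def __init__(self, q, threshold):
        self.results = deque(maxlen=q)   # only the latest Q outcomes are kept
        self.q, self.threshold = q, threshold

    def record(self, success):
        self.results.append(success)

    def passed(self):
        # Require a full window, then strictly exceed the preset success rate.
        return len(self.results) == self.q and sum(self.results) / self.q > self.threshold

def optotype_height_m(distance_m, decimal_acuity):
    """Character height at a given distance for a decimal acuity, assuming the
    standard 5-arc-minute optotype (an assumption, not stated in the patent)."""
    return distance_m * math.tan(math.radians(5 / 60)) / decimal_acuity

w = RecognitionWindow(q=5, threshold=0.8)
for ok in [False, True, True, True, True, True]:
    w.record(ok)
print(w.passed())                                    # -> True (latest 5 all succeed)
print(round(optotype_height_m(5.0, 1.0) * 1000, 2))  # -> 7.27 (mm, at 5 m for acuity 1.0)
```

Earlier failures age out of the window, so a user who warms up slowly can still pass once the most recent Q attempts clear the threshold.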
CN202210594609.0A 2022-05-27 2022-05-27 Intelligent vision training system and method Pending CN114931492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210594609.0A CN114931492A (en) 2022-05-27 2022-05-27 Intelligent vision training system and method

Publications (1)

Publication Number Publication Date
CN114931492A true CN114931492A (en) 2022-08-23

Family

ID=82866393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210594609.0A Pending CN114931492A (en) 2022-05-27 2022-05-27 Intelligent vision training system and method

Country Status (1)

Country Link
CN (1) CN114931492A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination