WO2019231397A1 - A system and device for improving dynamic cognitive visual functions in a user - Google Patents


Info

Publication number
WO2019231397A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
symbols
input
training
input received
Prior art date
Application number
PCT/SG2019/050273
Other languages
French (fr)
Inventor
Zoran PEJIC
Original Assignee
Orthovision Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orthovision Pte. Ltd. filed Critical Orthovision Pte. Ltd.
Publication of WO2019231397A1 publication Critical patent/WO2019231397A1/en

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/168Evaluating attention deficit, hyperactivity
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00Exercisers for the eyes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00Exercisers for the eyes
    • A61H5/005Exercisers for training the stereoscopic view
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022Monitoring a patient using a global network, e.g. telephone networks, internet
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4833Assessment of subject's compliance to treatment
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271Specific aspects of physiological measurement analysis
    • A61B5/7275Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B22/00Exercising apparatus specially adapted for conditioning the cardio-vascular system, for training agility or co-ordination of movements
    • A63B22/16Platforms for rocking motion about a horizontal axis, e.g. axis through the middle of the platform; Balancing drums; Balancing boards or the like
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Definitions

  • the present invention generally relates to training and/or improving dynamic cognitive visual functions of a user. More particularly, the present invention relates to a system and a device for training or improving the dynamic cognitive visual functions in a user.
  • Dynamic visual functions include Visual Acuity, both near (within 0.33m) and far (within 3.0m), Accommodation Facility and range of Accommodation, Convergence, Stereo Acuity-depth perception, Tracking Abilities-fine pursuits, eye-hand coordination, balance, Visual Memory, and Visual Spatial Orientation, which influences visual and overall attention.
  • Poor dynamic visual functions are frequently associated with Attention Deficit Disorder (ADD), Attention Deficit Hyperactivity Disorder (ADHD), Dyslexia and other forms of learning and/or behavioral issues. Treating any of those issues directly would be focusing on symptoms rather than on the root cause of the problem. Focusing on the underlying visual functional issues proves to be a very good way of remediation and removes the root cause of the learning and/or behavioral issues.
  • Prolonged screen time may also cause the degree of shortsightedness in children to increase, while prescribed reading glasses would be required much earlier among young individuals.
  • the effects of prolonged screen time would be even more pronounced as the ciliary muscle of the eye becomes rigid due to aging and therefore the process of accommodation slows down as the ciliary muscle is unable to relax.
  • the present invention seeks to provide a system and a device for training or improving dynamic cognitive visual functions in a user to overcome or to address at least in part some of the aforementioned disadvantages.
  • a training system for improving visual cognitive functions of a user comprising:
  • a training program configured to be operable on a communication device and configured to: present at least one training mode on an electronic display of the communication device for a user to select; display a plurality of symbols on the electronic display, wherein each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes; receive an input from the user via an input device; determine whether the input received from the user corresponds correctly to the plurality of symbols;
  • the parameter of each of the plurality of symbols includes the size of each of the plurality of symbols.
  • the step of determining whether the input received from the user corresponds correctly to the plurality of symbols includes determining whether the input received matches a word of the English language based on the number of symbols in the plurality of symbols.
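The dictionary check described in the preceding step can be sketched minimally in Python. The word list and function name below are illustrative placeholders, not part of the patent; a real implementation would draw on a full English dictionary:

```python
# Stand-in for a full English word list (illustrative only)
ENGLISH_WORDS = {"cat", "cart", "train", "sun"}

def input_matches_word(user_input, n_symbols):
    """Accept the input when it forms an English word whose length
    equals the number of symbols currently displayed."""
    word = user_input.strip().lower()
    return len(word) == n_symbols and word in ENGLISH_WORDS
```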
  • the plurality of symbols on the display includes any one of the following: a letter, a combination of letters, an object or a logo.
  • each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from 30cm to 3.0m.
  • each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from very near to far.
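The claims do not specify how a simulated distance is rendered on a flat display. One plausible mapping, sketched below, keeps the symbol's visual angle constant: a symbol "placed" at the simulated distance is drawn at the on-screen size that subtends the same angle at the actual viewing distance. The viewing distance and pixel density are illustrative assumptions, not values from the patent:

```python
import math

def displayed_height_px(object_height_m, simulated_distance_m,
                        viewing_distance_m=0.5, px_per_m=3780):
    """On-screen height (pixels) at which a symbol subtends the same
    visual angle it would if physically placed at the simulated
    distance. viewing_distance_m and px_per_m (~96 dpi) are
    illustrative assumptions."""
    # Visual angle subtended by the object at the simulated distance
    angle = 2.0 * math.atan(object_height_m / (2.0 * simulated_distance_m))
    # On-screen height subtending the same angle at the viewing distance
    height_m = 2.0 * viewing_distance_m * math.tan(angle / 2.0)
    return height_m * px_per_m

# A 5 cm symbol simulated at 3 m renders far smaller than the same
# symbol simulated at 30 cm
far_px = displayed_height_px(0.05, 3.0)
near_px = displayed_height_px(0.05, 0.3)
```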
  • the step of adjusting a parameter of each of the plurality of symbols includes increasing the size of each of the plurality of symbols by a predetermined percentage level when the input received does not correspond to each of the plurality of symbols.
  • the predetermined percentage level is approximately 10%.
  • the training mode comprises a plurality of levels that progressively increases in difficulty as the user completes each level.
  • selecting the training mode is based on one of the following criteria: the user’s age, the user’s literacy level or the user’s ability to speak English.
  • the training system further comprises the step of computing the aggregated time taken by the user to complete the input via the input device.
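The claims above describe a loop: display symbols, receive an input, check whether it corresponds, enlarge the symbols by the predetermined percentage (approximately 10%) on an incorrect response, and aggregate the time taken. A minimal Python sketch of that loop follows; the `get_input` callback and the starting size are hypothetical placeholders, not specified in the patent:

```python
import time

def run_trial(symbols, get_input, size_px=40.0, growth=0.10):
    """Sketch of the claimed adaptive loop: each incorrect response
    grows the symbol size by ~10%, and the total response time is
    aggregated across all inputs."""
    aggregated_time = 0.0
    for symbol in symbols:
        correct = False
        while not correct:
            start = time.monotonic()
            answer = get_input(symbol, size_px)   # display at current size
            aggregated_time += time.monotonic() - start
            correct = (answer == symbol)
            if not correct:
                size_px *= 1.0 + growth           # enlarge by the preset ~10%
    return aggregated_time, size_px
```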
  • a device for improving visual cognitive functions of a user comprising:
  • processor executable code that, when executed by the one or more processors, causes the processors to perform operations including:
  • presenting at least one training mode configured for display on a communication device for a user to select; displaying a plurality of symbols on the display, wherein the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes;
  • the parameter of each of the plurality of symbols includes the size of each of the plurality of symbols.
  • the step of determining whether the input received from the user corresponds correctly to the plurality of symbols includes determining whether the input received matches a word of the English language based on the number of symbols in the plurality of symbols.
  • the plurality of symbols on the display includes any one of the following: a letter, a combination of letters, an object or a logo.
  • each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from 30cm to 3.0m.
  • each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from very near to far.
  • the step of adjusting a parameter of each of the plurality of symbols includes increasing the size of each of the plurality of symbols by a predetermined percentage level when the input received does not correspond to each of the plurality of symbols.
  • the predetermined percentage level is approximately 10%.
  • the training mode comprises a plurality of levels that progressively increases in difficulty as the user completes each level.
  • selecting the training mode is based on one of the following criteria: the user’s age, the user’s literacy level or the user’s ability to speak English.
  • the device further comprises the step of computing the aggregated time taken by the user to complete the input via the input device.
  • a computer- implemented method for improving visual cognitive functions of a user comprising: presenting at least one training mode configured for display on a communication device for a user to select;
  • the one or more aspects include the features hereinafter fully described and particularly pointed out in the claims.
  • the following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
  • Figure 1 shows an exemplary training system for improving dynamic cognitive visual functions of a user according to various embodiments
  • Figure 2 shows an exemplary graphical user interface screen display of a user portal for accessing the training system according to various embodiments
  • Figure 3 shows an exemplary graphical user interface screen display presenting various levels of a training program according to various embodiments
  • Figure 4 shows an exemplary computer-implemented method for improving dynamic cognitive visual functions of a user according to various embodiments
  • Figures 5A-5C show exemplary graphical user interface screen displays of a first mode of a training program according to various embodiments
  • Figure 6 shows an exemplary computer-implemented method for improving dynamic cognitive visual functions of a user according to various embodiments
  • Figures 7A and 7B show exemplary graphical user interface screen displays of a second mode of a training program according to various embodiments
  • Figure 8 shows an exemplary graphical user interface screen display presenting the performance scores and results obtained for a user on completion of a training program according to various embodiments;
  • Figures 9A and 9B show exemplary graphical user interface screen displays of the scoreboard and leaderboard for various training programs according to various embodiments
  • Figure 10 is substantially a graph of scores and aggregated time taken versus time of a user A according to various embodiments
  • Figure 11 is substantially a graph of scores and aggregated time taken versus time of a user B according to various embodiments
  • Figure 12 is substantially a graph of scores and aggregated time taken versus time of a user C according to various embodiments
  • Figure 13 is substantially a graph of scores and aggregated time taken versus time of a user D according to various embodiments
  • Figure 14 is substantially a graph of scores and aggregated time taken versus time of a user A according to various embodiments
  • Figure 15 is substantially a graph of scores and aggregated time taken versus time of a user B according to various embodiments
  • Figure 16 is substantially a graph of scores and aggregated time taken versus time of a user C according to various embodiments;
  • Figure 17 is substantially a graph of scores and aggregated time taken versus time of a user D according to various embodiments;
  • Figure 18 is substantially a graph of scores and aggregated time taken versus time of a user E according to various embodiments.
  • Dynamic visual cognitive functions include Visual Acuity (both near (0.3m) and far (3m)), Accommodation Facility and range of Accommodation, Convergence, Stereo Acuity-depth perception, Tracking Abilities-fine pursuits, eye-hand coordination, balance, Visual Memory, Visual Spatial Orientation, and visual and overall attention and sensory integration.
  • processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
  • processors in the processing system may execute software.
  • Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
  • the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer.
  • such computer- readable media may include a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
  • Coupled may be understood as electrically coupled or as mechanically coupled, for example attached or fixed, or just in contact without any fixation, and it will be understood that both direct coupling or indirect coupling (in other words: coupling without direct contact) may be provided.
  • the present disclosure provides solutions that make use of computer hardware and software to train or to improve the dynamic cognitive visual functions of a user.
  • the present disclosure also provides for a device that is configured to train the visual functions of the user, i.e., using the visual pathways to stimulate certain centers in the brain which influence the other corresponding centers.
  • the present disclosure provides a device, system and computer-implemented method that are based on focusing the entire visual system, having a motor and sensory component, and focusing on different viewing distances. This provides the advantage of boosting ocular accommodation, visual acuity, convergence, eye teaming, enhancing the sensory motor system of the vision and integrating the senses of sight, hearing and balance.
  • Ocular accommodation involves the change of focusing distance and is the ability to shift focus from far to near and vice versa.
  • the main problem with myopia and, in the majority of cases, with learning difficulties lies in the unequal accommodation of each eye, which greatly disrupts the use of both eyes together.
  • the vision starts to blur and the eye teaming abilities that are essential for accurate tracking (ability to track symbols such as a row of letters) slows down and becomes inaccurate.
  • the visual component includes focusing, particularly facility of accommodation-facilitation of the change of the focusing distance.
  • Facility of accommodation is an absolutely necessary function for a regular school day or regular daily activities. It translates into the ability to keep clear focus irrespective of the focusing distance and is not fully dependent on light conditions.
  • the speed of ocular accommodation in children is enormous. A child is able to refocus from infinity to 6.7 cm in 350 milliseconds.
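The refocusing figure quoted above corresponds to an accommodative change of about 1/0.067 m ≈ 14.9 dioptres, since accommodative demand is the difference between the reciprocals of the far and near focusing distances. A quick check of this arithmetic (the function is a generic optics formula, not from the patent):

```python
def accommodation_diopters(near_m, far_m=float("inf")):
    """Accommodative demand in dioptres: the difference between the
    reciprocals of the near and far focusing distances (in metres)."""
    far_term = 0.0 if far_m == float("inf") else 1.0 / far_m
    return 1.0 / near_m - far_term

demand = accommodation_diopters(0.067)  # refocus from infinity to 6.7 cm
```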
  • the ability to accommodate and its speed falls sharply, affecting not only vision but the entire sensory system.
  • the present disclosure provides for a device, system and computer-implemented method for boosting ocular accommodation of a user by firstly enhancing the weaker eye (first eye) by stimulating the use of the weaker eye.
  • a next step of providing the user with prescribed red/green goggles and having the user match the red/green color on a display unit can be used to facilitate binocular stimulation.
  • this step stimulates monocular accommodation as well as vergence, the conjugate movement of the eyeballs.
  • Enhancing convergence is provided as a next step to boost ocular accommodation.
  • Convergence is in nature a conjugate movement of both eyes towards the nose; it is crucial for the eyes to stay together during the process of reading and writing. Enhancing convergence is also particularly important in helping children who are academically or behaviorally challenged.
  • the aforementioned steps, when utilized, have been shown to significantly improve the focusing ability, eye sight, tracking and overall eye teaming, and to enhance the academic performance of children who would otherwise be diagnosed as suffering from Attention Deficit Disorder (ADD), Attention Deficit Hyperactivity Disorder (ADHD) or dyslexia.
  • the present disclosure is also configured for enhancing the sensory motor system of a user’s vision and this involves both visual and vestibular stimulation.
  • a balancing board or the like can be utilized in conjunction with the device, system or computer-implemented methods as disclosed herein.
  • the user may be tasked to stay on a balancing board and perform visual activities on the device or system disclosed herein.
  • the device and system may calculate, based on a predetermined model, how well the user was able to stay on the balancing board and integrate the balancing with the visual activities on the device or system.
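The "predetermined model" for integrating balance with the visual activities is not detailed in the description. Purely as an illustrative assumption, it could be a weighted combination of visual accuracy and balance-board stability, for example:

```python
def session_score(visual_accuracy, balance_samples, w_visual=0.6):
    """Hypothetical scoring model (not from the patent): combine
    visual accuracy in [0, 1] with balance-board stability, where
    stability is 1 minus the mean absolute normalised tilt."""
    stability = 1.0 - sum(abs(t) for t in balance_samples) / len(balance_samples)
    return w_visual * visual_accuracy + (1.0 - w_visual) * stability
```

The weighting `w_visual` is an arbitrary choice here; in practice it would be part of the predetermined model tuned by the clinician.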
  • the visual exercises together with the physical exercise of using the balancing board enables the enhancement of the connections between the visual and the vestibular system.
  • CSPD refers to central sensory processing disorder.
  • the present disclosure provides a device, system and computer- implemented method that aims to educate parents about the beneficial visual habits which would enhance visual functions and promote the integration of the sensory information perceived.
  • the present disclosure also provides a device, system and computer-implemented method that is intended to ameliorate the effects of CSPD by stimulating the visual and vestibular systems of a user.
  • the present disclosure provides a system and a device that makes use of computer hardware and software to improve dynamic cognitive visual functions of a user.
  • Various embodiments are provided for systems, and various embodiments are provided for devices. It will be understood that basic properties of the system also hold for the devices and vice versa. Therefore, for the sake of brevity, duplicate description of such properties may be omitted.
  • Figure 1 illustrates an embodiment of a training system 100 for improving dynamic cognitive visual function of a user.
  • the training system 100 comprises a multi-faceted, web-deliverable, browser-based system configured to improve the dynamic cognitive visual function of users.
  • the training system 100 comprises a user portal 101, a clinician portal 102, an administration manager 104, a report generator module 105 and a database 103.
  • the training system 100 provides the training program 110 through a user portal 101 that is designed to allow users to access the training program 110 via a communication device 120 connected to a communication network 140.
  • the training program 110 is designed to be platform-independent so that it can be delivered over the internet through any communication device 120.
  • the user creates a user account and accesses the training program 110 via the user portal 101 using an identifier such as a user identification and password as illustrated in Figure 2. Further details of the user portal 101 will be described hereinafter. Every aspect of the user’s progress, scores and results are recorded upon completion of the training program 110 and these can be stored under the user account for further reference and analysis, thereby allowing the user’s progress to be monitored as illustrated in Figure 8.
  • the user’s progress, scores and results can be stored on a cloud-based database 103 and/or on the database of the training system 100.
  • the server makes the user’s data available for review by the supervising clinician through the clinician portal 102.
  • the supervising clinician uses the clinician portal 102 to regularly check on the usage and progress of each user and to provide helpful guidance and coaching remotely or in-person during sessions.
  • the training system 100 comprises a report generator that allows a user or a clinician to access the user’s account to generate a report of the user’s sessions. Access to the user’s account is provided to the clinician via the clinician portal 102. This allows the clinician to monitor the user’s progress and to provide recommendations or to adjust the parameters of the training program 110 accordingly.
  • the communication device 120 can comprise a portable computing device such as a laptop, a mobile phone, or any other appropriate storage and/or communication device to exchange data via a web browser and/or the communications network 140.
  • the user portal 101 provides access to users via any communication device 120 located anywhere in the world and allows users to access the training program 110. Users can work on the training program as frequently as their time and schedule permits. Users can also work under the supervision of a clinician who can also be located remotely and can access the training program through the clinician portal 102.
  • the communication device 120 includes one or more processors (not shown) and memory (not shown) for storing applications, modules and other data. In one embodiment, the memory includes a large number of memory blocks where calculations may be performed.
  • the communication device also includes one or more display interfaces for viewing content.
  • the display interface permits a user to interact with the training system 100 and its components and functions. This may be facilitated by the inclusion of a user input device which can include a mouse, keyboard, or any other peripheral device that allows a user to interact with the training system 100 or the communication device 120.
  • the components and functions of the training system 100 may be represented as one or more discrete systems or workstations, or may be integrated as part of a larger system or workstation.
  • the processor executes instructions contained in programs such as the training program 110 within the training system 100 stored in the memory of the communication device 120 or the external database 132.
  • the processor may provide the central processing unit (CPU) functions of a computing device on one or more integrated circuits.
  • processor broadly refers to and is not limited to single or multi-core general purpose processor, a special purpose processor, a conventional processor, a graphical processing unit, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, one or more Application Specific Integrated Circuits (ASICs), one or more Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit, a system on a chip (SOC), and/or a state machine.
  • the training system 100 is communicably coupled via a communications network 140 to a server 130.
  • the server 130 may be owned or operated by a different entity such as a private data service provider or by an intermediary company via the communications network 140.
  • An external database 132 is communicatively coupled to the server 130 and is configured for storing data such as the training system and information pertaining to a user, such information comprising at least one identifier such as name, address, identity card or passport number, age and gender.
  • a database 103 is also provided within the training system 100 and is configured for storing of information pertaining to a user, such information comprising at least one identifier such as name, address, identity card or passport number, age and gender.
  • Information pertaining to the scores and results of the user, analysis of such scores and results are also stored in the database 103 or the external database 132.
  • the communication device 120 may exchange information via any communications network 140, such as a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a proprietary network, and/or Internet Protocol (IP) network such as the Internet, an Intranet or an extranet.
  • Each device, module or component within the system may be connected over a network or may be directly connected.
  • a person skilled in the art will recognize that the terms ‘network’, ‘computer network’ and ‘online’ may be used interchangeably and do not imply a particular network embodiment. In general, any type of network may be used to implement the online or computer networked embodiment of the present invention.
  • the network may be maintained by a server or a combination of servers or the network may be serverless.
  • any type of protocol may be used, for example HTTP, FTP, ICMP, UDP, WAP, SIP, H.323, NDMP or TCP/IP.
  • Figure 2 shows an embodiment of a screen shot of the user portal 101 of the training system 100.
  • the user opens a standard web browser on a communication device 120 connected to the communication network 140 and goes to a predetermined web site containing the training system. The user then logs into the training system via the user portal 101 using a user identification and password.
  • the user portal 101 allows the user to access the training system 100 that is designed to be accessed on a computing device in a treatment center or during a treatment session, or over the internet on any communication device 120 that is connected to the communications network 140.
  • the training system 100 comprises a training program 110 comprising a plurality of exercises configured to target a plurality of cognitive visual functions.
  • the training program 110 comprises two modes: a first mode 112 and a second mode 111.
  • the first mode requires the user to identify a series of letter-like symbols and/or combinations of letter-like symbols and the second mode requires the user to identify a series of object or logo symbols and/or combinations of the same.
  • each of the two modes caters to different age groups of children and to the child’s literacy level or ability to recognise the English alphabet. For example, if the child comes from an English-speaking environment or any other environment that uses the Latin alphabet, and has an acceptable level of literacy, the first mode can be used. Additionally, other than improving the dynamic cognitive visual functions of the user, the first mode provides another advantage of enhancing the user’s spelling abilities and therefore reading and writing abilities. If the child comes from a non-English-speaking environment, the second mode, which uses a series of object or logo symbols and/or combinations of the same as the symbols of the game, can be used. The complexity of the symbols enhances the user’s ability to recognise those symbols. For example, very young children and those with low literacy levels can use the second mode of the training program.
  • FIG. 3 illustrates the various levels of the training program 110.
  • the training program comprises three levels: Level 1 114 includes Grades 1 and 2, Level 2 116 includes Grades 3 and 4, and Level 3 118 includes Grades 5 and 6.
  • Each level provides increasing difficulty, with Grade 1 being the easiest and Grade 6 the most challenging.
  • each level may comprise a varying number of stages.
  • the level “space symbols” refers to the second mode, and only one level is available for the “space symbols” level.
  • the first mode comprising letters
  • the following techniques may be used: (i) increasing the complexity level of words; (ii) increasing the literacy level.
  • Increasing the complexity level of words may include using an increasing number of letters, using less frequently used words, or increasing the number of syllables, as the levels progress.
  • Increasing the literacy level of words may include using words that are aligned with standardized grade-level bands that are based on a child’s age. For example, a 6 year old child may be expected to be able to read a range of words based on the standardized grade-level band of a 6 year old. The range of words that are expected of a 6 year old may also be split into each of the six grades based on the level of difficulty of the words. The object behind this is that other than simply seeing the letters, children need to understand the position of the letters within the word and then place it accordingly in order to create the word.
  • Figures 5A to 5C illustrate screenshots that may represent different levels of the first mode of the training program.
  • the letters are arranged in a random order and the user has to spell out the name of the object based on the letters that are arranged randomly.
  • the spelling of the words for each level is different, as the words used correspond to the age of the user or the literacy level of the user.
  • Letters are placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the display interface of the communication device 101.
  • the letters may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). Letters for the answer are placed at all 3 depths, creating a near-and-far eye exercise for the user. For users who are not confident with spelling, a question mark icon or a hint icon may be provided in the top right corner. Once that icon is activated, the child will be able to see the word briefly, again enhancing working memory, in order to help the child get the correct spelling.
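The three-depth placement described above can be sketched as follows. The mapping from simulated distance to an on-screen scale factor is a simple perspective model added purely for illustration; the source specifies only the three distances, not how they are rendered.

```python
import random

# Simulated viewing distances in metres, as described in the text:
# very near (0.3 m), near/intermediate (1.5 m) and far (3.0 m).
DEPTHS = {"very_near": 0.3, "near": 1.5, "far": 3.0}

def place_letters(letters, rng=None):
    """Assign each letter of the answer to one of the three simulated
    depths.  The first three letters cycle through the depths so every
    depth is used at least once; the rest are assigned randomly.
    (The exact assignment policy is an assumption.)"""
    rng = rng or random.Random(0)
    names = list(DEPTHS)
    return [(ch, names[i % 3] if i < 3 else rng.choice(names))
            for i, ch in enumerate(letters)]

def render_scale(depth_name, reference=0.3):
    """Hypothetical perspective model: apparent size shrinks in
    proportion to simulated distance, so a letter at the nearest
    depth renders at scale 1.0."""
    return reference / DEPTHS[depth_name]
```

Under this model a letter placed "far" (3 m) would render at one tenth the size of one placed "very near" (0.3 m), which is one plausible way to realise the near-and-far exercise on a flat display.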
  • Figure 3 also illustrates a screenshot of the second mode 111 of the training program 110, which comprises three levels: Level 1 113 includes Grades 1 and 2, Level 2 115 includes Grades 3 and 4, and Level 3 117 includes Grades 5 and 6.
  • Each level provides increasing difficulty, with Grade 1 being the easiest and Grade 6 the most challenging.
  • a level comprises four stages; each stage presents a “key” and up to 14 symbols that the player has to use to match the “key” in sequential order.
  • The “key” is made up of 9 random symbols. These 9 symbols are randomly placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the electronic display unit of the device.
  • the symbols may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m).
  • the symbols for the answer are placed in all 3 depths, creating a near-and-far eye exercise for the user, as illustrated in Figure 5.
  • the training program 110 brings the symbols displayed on the electronic display from very far to near or from very near to far. For example, one or some of the symbols on the electronic display will be progressively brought closer to the user, from a simulated distance of around 3 metres to 30 centimetres. Alternatively, one or some of the symbols will be progressively moved further from the user, from a simulated distance of around 30 centimetres to 3 metres, i.e. from very near to very far.
  • the symbols can be brought either closer to or further from the user within a predetermined range of time. As the user progresses and advances through the levels, the predetermined range of time is reduced accordingly, and the speed at which the symbols are brought closer to or further from the user is increased.
  • the training system 100 receives a user input from the user symbol by symbol, or letter by letter or object by object.
  • the user inputs data into one or more text boxes provided on the display.
  • To enter an input data into each of the text boxes the user clicks on the appropriate symbol and that symbol will appear in the text box or the next empty text box.
  • the user input may be received through a user input device (not shown) of the communication device 101.
  • a user input device may be a keyboard, a mouse, a track pad, a touch screen, an image sensor, a remote sensing device (e.g., Microsoft Kinect), a microphone or any other input device.
  • the user input device may be a device capable of recognizing a gesture of a user.
  • a camera capable of transmitting user input information to a computing device may visually identify a gesture performed by the user. Upon visually identifying the gesture of the user, a corresponding user input may be received by the computing device from the camera.
  • the user input device may be a microphone that includes an acoustic-to-electric transducer or sensor that converts sound waves into one or more electric signals. For example, the microphone may pick up the voice of a user and convert the sound wave into a corresponding electrical signal, which is digitised into a representation that may be used as an input.
  • the input device may be a presence-sensitive screen. A presence-sensitive screen can generate one or more signals corresponding to a location selected by a gesture performed on or near the presence-sensitive screen.
  • the presence-sensitive screen detects a presence of an input unit, for example, a finger that is in close proximity to, but does not physically touch the presence-sensitive screen.
  • the presence-sensitive screen generates a signal corresponding to the location of the input unit. Signals generated by the selection of the corresponding location are then provided as input data to the training system.
  • Figure 4 shows an exemplary computer-implemented method for improving the dynamic cognitive visual functions of a user based on a first mode of the training program.
  • the user opens a standard web browser on a communication device 101 connected to the communication network 140 and goes to a predetermined web site containing the training system.
  • the user logs into the training system via the user portal 101 using a user identification and password.
  • the user portal 101 allows the user to access the training system 100 that is designed to be accessed in a computing device in a treatment center or during a treatment session, or over the internet on any communication device 101 that is connected to the communications network 140.
  • the user will be presented with a plurality of levels on the display interface for selection.
  • the plurality of levels can be pre-configured by the supervising clinician and the user will be presented with a plurality of levels that is based on the user’s age group or literacy level.
  • the user selects one of the predetermined number of levels. For example, as illustrated in Figure 3, each mode includes multiple levels and the user may be presented with three levels: Level 1, which includes Grades 1 and 2; Level 2, which includes Grades 3 and 4; and Level 3, which includes Grades 5 and 6.
  • Each level may also include multiple stages. For example, each level may include eight stages.
  • the user proceeds to select Grade 1 and to progress to succeeding grades and levels.
  • each stage of a selected Grade will be loaded for the user.
  • the display interface will proceed to display a plurality of letters on the display.
  • the letters are arranged in a random order and the user has to spell out the name of the object based on the letters that are arranged randomly.
  • the letters are additionally arranged on the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the display interface of the communication device 101.
  • the letters may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). Letters are placed in all 3 depths to create a near-and-far eye exercise for the user.
  • the objective for the user is to input the letters that are arranged randomly into each of the text boxes placed at the bottom of the display interface, so as to spell out a word based on the letters shown.
  • the user will key in the input through the user input device of the communication device 101.
  • the training program will determine if the input keyed in by the user corresponds to the correct answer. If the training program determines that an incorrect answer has been keyed in, at step 203, the player will be allowed to repeat the stage until the training program determines that the user has input the correct answer. If a third attempt to input the correct answer is unsuccessful, at step 204, the user will again be allowed to repeat the stage. However, in this case, the size of the letters will be increased by 10%. If the user continues to be unsuccessful in inputting the correct answer, the size of the letters will continue to be increased by 10% successively until the user inputs the correct answer, up to a maximum predetermined size.
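The progressive enlargement rule can be sketched as follows. Two details are assumptions: that the 10% increase compounds per failed attempt (the source says "increased by 10% successively"), and the 1.5x cap (the source says only "a maximum predetermined size").

```python
def letter_scale(failed_attempts, growth=0.10, max_scale=1.5):
    """Size multiplier applied to the letters before the next attempt.
    The first two failures leave the size unchanged; from the third
    failure onwards each further retry is shown growth-percent larger
    (compounded), capped at max_scale.  The exact trigger point and
    the cap are interpretations of the source text."""
    extra = max(0, failed_attempts - 2)
    return min((1.0 + growth) ** extra, max_scale)
```

So after three failed attempts the letters would be shown at 110% of their original size, after four at 121%, and so on until the cap is reached, making the stimulus easier to resolve without changing the task.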
  • the training program will calculate the score based on the number of correct answers.
  • each user is awarded an additional 100 points to the total score at the end of each stage, and a certain number of points are deducted from these 100 points depending on the following criteria: (i) did the player use the hint?; (ii) were there more than 2 unsuccessful attempts?; and (iii) was there only 1 failed attempt?
  • At steps 207 and 210, 20 points will be deducted from the total score.
  • 75 points will be deducted from the total score.
  • At step 214, the total score will be saved in the training program, in the database or external database, or in the memory of the communication device 101. Once the user has completed a stage within the selected Grade, the entire sequence of steps 201 to 214 will be repeated until the user has completed all the stages within the Grade and the levels.
  • each user is given a predetermined time to complete each level. For example, a total of approximately 10 minutes may be provided for completion of each level.
  • the training program will aggregate the time taken to complete each stage and the total time taken over all the stages within the level. Should the total time taken to complete the level exceed the predetermined aggregate time, at step 216, the training program will save the total score up to that point in time to the database and end the exercise.
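The per-level time budget and early-termination logic above can be sketched as follows, using the roughly 10-minute limit mentioned in the text (the exact limit is described only as "approximately 10 minutes"):

```python
TIME_LIMIT = 10 * 60  # approximately 10 minutes per level, in seconds

def run_level(stage_times, limit=TIME_LIMIT):
    """Aggregate per-stage completion times and stop the level once
    the running total exceeds the predetermined limit.  Returns the
    number of stages actually played and the aggregate time recorded,
    which would then be saved to the database with the score."""
    total = 0.0
    played = 0
    for t in stage_times:
        total += t
        played += 1
        if total > limit:
            break  # save the score so far and end the exercise
    return played, total
```

For instance, a user averaging 100 seconds per stage would be stopped partway through an eight-stage level, while one averaging 50 seconds would complete all stages within the budget.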
  • the date and time played, level played, total score and the aggregate time taken will be tabulated and saved in the database.
  • the user will also be able to access the aforesaid information on the training program.
  • An example of a screen shot showing the aforesaid information is shown on Figure 8.
  • Figures 9A and 9B show the same information but compare the tabulated score with those of other users in the training program.
  • Figure 6 shows an exemplary computer-implemented method for improving the dynamic cognitive visual functions of a user based on a second mode of the training program.
  • the user opens a standard web browser on a communication device 101 connected to the communication network 140 and goes to a predetermined web site containing the training system.
  • the user logs into the training system via the user portal 101 using a user identification and password.
  • the user portal 101 allows the user to access the training system 100 that is designed to be accessed in a computing device in a treatment center or during a treatment session, or over the internet on any communication device 101 that is connected to the communications network 140.
  • each mode may include multiple levels and the user may be presented with three levels: Level 1, which includes Grades 1 and 2; Level 2, which includes Grades 3 and 4; and Level 3, which includes Grades 5 and 6.
  • Each level may also include multiple stages. For example, each level may include four stages.
  • the user will proceed to select Grade 1 and to progress to succeeding levels.
  • each stage of a selected Grade will be loaded for the user.
  • the display interface will proceed to display a plurality of symbols on the display.
  • each stage will present a “key” and up to 14 symbols that the player has to use to match the “key” in sequential order.
  • The “key” is made up of 9 random symbols. These 9 symbols are randomly placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the electronic display unit of the device. The symbols may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). The symbols for the answer are placed at all 3 depths, creating a near-and-far eye exercise for the user, as illustrated in Figure 5.
  • the user will key in the input through the user input device of the communication device 101.
  • the training program will determine if the input keyed in by the user corresponds to the correct answer. If the training program determines that an incorrect answer has been keyed in, at step 303, the player will be allowed to repeat the stage until the training program determines that the user has input the correct answer. If a third attempt to input the correct answer for the same stage is unsuccessful, at step 304, the user will again be allowed to repeat the stage. However, in this case, the size of the symbols or the keys will be increased by 10%. If the user continues to be unsuccessful in inputting the correct answer, the size of the symbols and the keys will continue to be increased by 10% successively until the user inputs the correct answer, up to a maximum predetermined size.
  • the training program will calculate the score based on the number of correct answers.
  • each user is awarded an additional 100 points to the total score at the end of each stage, and a certain number of points are deducted from these 100 points depending on the following criteria: (i) were there more than 2 unsuccessful attempts?; and (ii) was there only 1 failed attempt?
  • At steps 307 and 309, 75 points will be deducted from the total score.
  • At steps 308 and 310, 50 points will be deducted from the total score.
  • the points will be cumulatively deducted accordingly from the total score.
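The second-mode stage scoring described above can be sketched as follows. The treatment of exactly two failed attempts is an assumption, since the source specifies deductions only for "more than 2 unsuccessful attempts" (75 points) and "only 1 failed attempt" (50 points); this sketch treats two failures like one.

```python
def stage_score(failed_attempts):
    """Per-stage award for the second mode: 100 points, less 75 when
    there were more than two unsuccessful attempts, or less 50 when
    there was at least one (but not more than two) failed attempt.
    One reading of the deduction rules, not the definitive scheme."""
    score = 100
    if failed_attempts > 2:
        score -= 75
    elif failed_attempts >= 1:
        score -= 50
    return score

def level_total(per_stage_failures):
    """Cumulative total over a level's stages, matching the statement
    that points are 'cumulatively deducted' from the total score."""
    return sum(stage_score(f) for f in per_stage_failures)
```

A flawless four-stage level would thus total 400 points, consistent with the maximum second-mode scores reported for the users in the results below.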
  • the total score will be saved in the training program, in the database or external database, or in the memory of the communication device 101.
  • each user is given a predetermined time to complete each level. For example, a total of approximately 10 minutes may be provided for completion of each level.
  • the training program will aggregate the time taken to complete each stage and the total time taken over all the stages within the level. Should the total time taken to complete the level exceed the predetermined aggregate time, at step 314, the training program will save the total score up to that point in time, at step 313, to the database and end the exercise.
  • the date and time played, level played, total score and the aggregate time taken will be tabulated and saved in the database. The user will also be able to access the aforesaid information on the training program. An example of a screen shot showing the aforesaid information is shown on Figure 8.
  • Both the first mode 112 and the second mode 111 of the training program 110 have two key features. Firstly, the size of the letters or the object or logo symbols is increased after a number of failed attempts by the user to provide the correct answer; secondly, the size of the letters or the object or logo symbols is increased progressively by a certain percentage, corresponding to the number of failed attempts beyond a certain number of attempts, such increase in size being subject to a predetermined maximum size. As illustrated in Figures 2 and 3, preferably, the size of the letter-like symbols and/or the object or logo symbols is increased by about 10% with each attempt after the third attempt by the user.
  • Figures 7A and 7B illustrate screenshots that may represent different levels or stages of the second mode of the training program.
  • the objects or logos are arranged in a random order and the user has to input the objects in the same order as the key that is visible at the top of the screen.
  • the objects are placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the display interface of the communication device 101.
  • the objects may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). The objects are placed at all 3 depths, creating a near-and-far eye exercise for the user.
  • Time and score in each category were recorded and noted as an indicator of progress and dates were used to track and reflect a subject’s performance during in- house vision therapy sessions.
  • User B did one block of in-house vision therapy and performed the training program 14 times, during which User B displayed a faster aggregate time taken and higher scores on Grades 5 and 6: 720 points and 460 seconds on the first day, and 730 points and 282 seconds on the last day.
  • User C’s results showed steady growth in score, from 640 points on the first day to 800 points on the last day of performing the training program, whereas the aggregate time taken was significantly reduced, from 640 seconds in the first session to 266 seconds in the last session.
  • User C underwent one block of in-house vision therapy.
  • User D showed great improvement in speed, with the aggregate time taken decreasing from 278 to 114 seconds, and a score reflecting accuracy that reached its maximum of 780 points on Grades 3 & 4 and 400 points on the training program relating to the recognition of object or logo symbols and/or combinations.
  • User D was especially good at the recognition of object or logo symbols and/or combinations, where he could reach the maximum score of 400 points in just 87 seconds towards the end of the therapy sessions.
  • User D underwent two blocks of in-house vision therapy sessions.
  • User A underwent one block of in-house vision therapy, and his chart and graph analysis reflects good results, with a score that rose from 605 to 800 points on Grades 1 & 2, 3 & 4 and 5 & 6.
  • the aggregate time taken to complete the training program also dropped, from 294 to 219 seconds.
  • User A performed well on the training program relating to the recognition of object or logo symbols and/or combinations, where he could reach the maximum of 400 points in just 175 seconds.
  • User B completed two blocks of in-house vision therapy, during which he showed fluctuations in reaction time and poor stamina, as reflected in his chart and graph results.
  • User B managed to score the maximum of 800 points, or 400 points on the recognition of object or logo symbols and/or combinations, on most of the days; however, he needed more time to finish the tasks.
  • User B needed more time to perform the training program relating to recognition of object or logo symbols and/or combinations.
  • User C showed steady progress in performing the training program. He did extremely well on the training program relating to the recognition of object or logo symbols and/or combinations, often finishing the level in just 49 seconds, which is considered a very short aggregate time taken, and achieving the maximum of 400 points on most days. Grades 3 and 4, and 5 and 6, also showed a shorter aggregate time taken and the ability to reach higher score bands. User C underwent two blocks of in-house therapy.
  • User D underwent one block of in-house vision therapy and showed an increase in score and a decrease in reaction time both on Grades 3 and 4 and on the training program relating to the recognition of object or logo symbols and/or combinations.
  • User D went from 730 points to 800 points and from 334 to 144 seconds on Grades 3 and 4, whereas for the training program relating to the recognition of object or logo symbols and/or combinations, the maximum score of 400 points went to 450 points on day 9, with 148 seconds on the last day of performing the training program.
  • the present disclosure provides a training system and training programs for improving dynamic cognitive visual functions in users, and can be applied to the following individuals:
  • Presbyopia refers to blurred vision at close up that typically happens after the age of forty. However, presbyopia may also occur earlier than the age of forty as a result of lifestyle reasons such as prolonged use of the eyes.
  • the training programs will work on the focusing abilities of individuals above the age of forty. The training programs will, however, be modified for individuals with this problem below the age of forty, where the program will be based much more on relaxation of accommodation and on eye exercises that ease the movements of the eye muscles and help them to relax.


Abstract

A system and device for improving visual cognitive functions of a user comprises a memory and one or more processors coupled with the memory. The memory includes processor-executable code that, when executed by the one or more processors, causes the processors to perform operations including: presenting at least one training mode configured for display on a communication device for a user to select; displaying a plurality of symbols on the display, wherein the plurality of symbols are positioned at a predetermined simulated distance from the user's eyes; receiving an input from the user via an input device; determining whether the input received from the user corresponds correctly to the plurality of symbols; adjusting a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and outputting a performance score based on the input received from the user.

Description

A SYSTEM AND DEVICE FOR IMPROVING DYNAMIC COGNITIVE VISUAL
FUNCTIONS IN A USER
Technical Field
[0001] The present invention generally relates to training and/or improving dynamic cognitive visual functions of a user. More particularly, the present invention relates to a system and a device for training or improving the dynamic cognitive visual functions in a user.
Background
[0002] The following discussion of the background to the invention is intended to facilitate an understanding of the present invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known or part of the common general knowledge in any jurisdiction as at the priority date of the application.
[0003] With advancement in technology and mainstream Internet usage, modern lifestyles in many countries create an environment in which children, teenagers and adults spend prolonged hours in front of lifestyle devices such as computers, tablets and smart phones, often on a daily basis. Prolonged usage of such devices requires extreme focusing power, which causes the inability of the ciliary muscle of the eye to relax, which in turn causes inadequate accommodation, blurred vision and deterioration of the dynamic visual functions. Dynamic visual functions include Visual Acuity, both near (within 0.33 m) and far (within 3.0 m), Accommodation Facility and range of Accommodation, Convergence, Stereo Acuity (depth perception), Tracking Abilities (fine pursuits), eye-hand coordination, balance, Visual Memory, and Visual Spatial Orientation, which influences visual and overall attention.
[0004] Inadequate Accommodation and poor eye teaming have a great effect on the clarity of images that fall on the Retina (the back of the eye) and their further transmission to the visual centers in the brain. Images of that kind will not be perceived clearly, which will cause excessive and often unequal Accommodative power of each eye in an attempt by the visual system, comprising the eyeballs and corresponding visual centers in the brain, to make the image as clear as possible and to make sense of it. This often leads to reduced Visual Acuity and poor eye teaming abilities. Many individuals with such conditions would often be diagnosed by eyecare professionals with a certain form of refractive error, and glasses would be prescribed with the power of the lenses increasing steadily over time; these correct the Visual Acuity but do not correct the erroneous way of visual functioning.
[0005] The way a person uses his or her eyes has been found to affect the way a person receives, perceives and processes information. This phenomenon poses a huge problem for educators, especially in their handling of younger children, particularly in a classroom environment. This is because when children are frequently engaged in activities in front of computer, tablet or smart phone screens such as playing computer games and surfing the Internet, they are used to the cortical stimulation arising from such activities. In contrast, when they are placed in a typical classroom environment where majority of the classroom activities in schools involve reading, writing, listening and copying, all of which are tasks involving correspondingly lower cortical frequency, many of them are unable to cope with the change in frequency and respond to such change by displaying various forms of behavioral patterns such as restlessness, and showing a lack of focus.
[0006] The natural progression from functioning on a higher frequency most of the time is Attention Deficit Disorder (ADD) or Attention Deficit Hyperactivity Disorder (ADHD), Dyslexia and other forms of learning and/or behavioral issues. Treating any of those issues would be focusing on symptoms rather than on the root cause of the problem. Focusing on the underlying visual functional issues proves to be a very good way of remediation and removes the root cause of the learning and/or behavioral issues.
[0007] Studies have also indicated that majority of children with high myopia in fact suffer from substantial stress that may stem from prolonged activities in front of computer, tablet or smart phone screens such as playing computer games and surfing the Internet. As more schools introduce the use of computers during school hours as well as for homework, it is inevitable that more pronounced visual perceptual issues will arise.
[0008] Furthermore, prolonged exposure to screens and excessive focusing, especially on devices with small screens, would lead to modification of the entire accommodative system and the eye(s) would not be able to change its focal distance from near to far fast enough and vice versa (also known as myopic shift) thus causing blurred vision and the inability to see distant objects clearly. In cases of prolonged daily usage of screens, it is possible that a person might even suffer a spasm of ocular accommodation after which one’s vision becomes blurred at any viewing distance.
[0009] Prolonged screen time may also cause the degrees of shortsighted children to increase while prescribed reading glasses would be required much earlier among young individuals. For individuals above the age of forty years, the effects of prolonged screen time would be even more pronounced as the ciliary muscle of the eye becomes rigid due to aging and therefore the process of accommodation slows down as the ciliary muscle is unable to relax.
[0010] Currently, there are some programmes for improving a person’s near vision sharpness and reading capabilities but they are based mainly on defocus and clear focus, applying stimulation on both eyes together at a single viewing distance. However, current modern lifestyle in many countries, particularly in developed countries, causes people to mainly focus on one viewing distance due to prolonged screen time that often causes loss of accommodation especially on one eye, and symptoms not confined to merely myopia, but also headaches, blurred vision at any viewing distance, ocular discomfort and dry eyes, which current programmes do not address.
[0011] The present invention seeks to provide a system and a device for training or improving dynamic cognitive visual functions in a user to overcome or to address at least in part some of the aforementioned disadvantages.
Summary of the Invention
[0012] The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
[0013] In accordance with a first embodiment of the invention, there is provided a training system for improving visual cognitive functions of a user comprising:
a training program configured to be operable on a communication device and configured to: present at least one training mode on an electronic display of the communication device for a user to select; display a plurality of symbols on the electronic display, wherein each of the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes; receive an input from the user via an input device, determine whether the input received from the user corresponds correctly to the plurality of symbols;
adjust a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and
output a performance score based on the input received from the user.
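The adaptive loop described in this first embodiment (display symbols, check the user’s input, adjust a symbol parameter on an incorrect response, and output a performance score) can be sketched as follows. This is a minimal illustrative sketch only: the class name, method names and default values are hypothetical and are not part of the claimed system.

```python
import time

class TrainingSession:
    """Hypothetical sketch of the claimed adaptive training loop."""

    def __init__(self, symbols, target_word, base_size_pt=12, growth=0.10):
        self.symbols = symbols        # e.g. scrambled letters shown on the display
        self.target = target_word     # correct answer the user must enter
        self.size_pt = base_size_pt   # current symbol size (the adjusted parameter)
        self.growth = growth          # enlarge by a predetermined percentage (~10%)
        self.attempts = 0
        self.start = time.monotonic()

    def check(self, user_input):
        """Return True if the input received corresponds correctly to the symbols."""
        self.attempts += 1
        if user_input.strip().lower() == self.target.lower():
            return True
        # Incorrect input: adjust the parameter (symbol size) as described.
        self.size_pt *= (1 + self.growth)
        return False

    def score(self):
        """Performance score based on the input received from the user."""
        elapsed = time.monotonic() - self.start
        return {"attempts": self.attempts,
                "final_size_pt": round(self.size_pt, 1),
                "aggregated_time_s": round(elapsed, 2)}
```

For example, a wrong answer enlarges a 12 pt symbol to roughly 13.2 pt before the next attempt, and the aggregated time taken to complete the input is reported alongside the score.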
[0014] Preferably, the parameter of each of the plurality of symbols includes the size of each of the plurality of symbols.
[0015] Preferably, the step of determining whether the input received from the user corresponds correctly to the plurality of symbols includes determining whether the input received matches a word of the English language based on the number of the plurality of symbols.
[0016] Preferably, the plurality of symbols on the display includes any one of the following: a letter, a combination of letters, an object or a logo.
[0017] Preferably, each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from 30cm to 3.0m.
[0018] Preferably, each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from very near to far.

[0019] Preferably, the step of adjusting a parameter of each of the plurality of symbols includes increasing the size of each of the plurality of symbols by a predetermined percentage level when the input received does not correspond to each of the plurality of symbols.
[0020] Preferably, the predetermined percentage level is approximately 10%.
[0021] Preferably, the training mode comprises a plurality of levels that progressively increase in difficulty as the user completes each level.
[0022] Preferably, selecting the training mode is based on one of the following criteria: the user’s age, the user’s literacy level or the user’s ability to speak English.
[0023] Preferably, the training program is further configured to compute the aggregated time taken by the user to complete the input via the input device.
[0024] In accordance with a second embodiment of the invention, there is provided a device for improving visual cognitive functions of a user comprising:
a memory;
one or more processors coupled with the memory, wherein the memory includes processor executable code that, when executed by the one or more processors, causes the processors to perform operations including:
presenting at least one training mode configured for display on a communication device for a user to select; displaying a plurality of symbols on the display, wherein the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes;
receiving an input from the user via an input device;
determining whether the input received from the user corresponds correctly to the plurality of symbols;
adjusting a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and
outputting a performance score based on the input received from the user.
[0025] Preferably, the parameter of each of the plurality of symbols includes the size of each of the plurality of symbols.
[0026] Preferably, the step of determining whether the input received from the user corresponds correctly to the plurality of symbols includes determining whether the input received matches a word of the English language based on the number of the plurality of symbols.
[0027] Preferably, the plurality of symbols on the display includes any one of the following: a letter, a combination of letters, an object or a logo.
[0028] Preferably, each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from 30cm to 3.0m.

[0029] Preferably, each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from very near to far.
[0030] Preferably, the step of adjusting a parameter of each of the plurality of symbols includes increasing the size of each of the plurality of symbols by a predetermined percentage level when the input received does not correspond to each of the plurality of symbols.
[0031] Preferably, the predetermined percentage level is approximately 10%.
[0032] Preferably, the training mode comprises a plurality of levels that progressively increase in difficulty as the user completes each level.
[0033] Preferably, selecting the training mode is based on one of the following criteria: the user’s age, the user’s literacy level or the user’s ability to speak English.
[0034] Preferably, the operations further include computing the aggregated time taken by the user to complete the input via the input device.
[0035] In accordance with a third embodiment of the invention, there is provided a computer-implemented method for improving visual cognitive functions of a user comprising: presenting at least one training mode configured for display on a communication device for a user to select;
displaying a plurality of symbols on the display, wherein the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes; receiving an input from the user via an input device, determining whether the input received from the user corresponds correctly to the plurality of symbols; adjusting a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and outputting a performance score based on the input received from the user.
[0036] To the accomplishment of the foregoing and related ends, the one or more aspects include the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
Brief Description of the Drawings
[0037] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. The dimensions of the various features or elements may be arbitrarily expanded or reduced for clarity. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
[0038] Figure 1 shows an exemplary training system for improving dynamic cognitive visual functions of a user according to various embodiments;
[0039] Figure 2 shows an exemplary graphical user interface screen display of a user portal for accessing the training system according to various embodiments;

[0040] Figure 3 shows an exemplary graphical user interface screen display presenting various levels of a training program according to various embodiments;
[0041] Figure 4 shows an exemplary computer-implemented method for improving dynamic cognitive visual functions of a user according to various embodiments;
[0042] Figures 5A-5C show exemplary graphical user interface screen displays of a first mode of a training program according to various embodiments;
[0043] Figure 6 shows an exemplary computer-implemented method for improving dynamic cognitive visual functions of a user according to various embodiments;
[0044] Figures 7A and 7B show exemplary graphical user interface screen displays of a second mode of a training program according to various embodiments;
[0045] Figure 8 shows an exemplary graphical user interface screen display presenting the performance scores and results obtained for a user on completion of a training program according to various embodiments;
[0046] Figures 9A and 9B show exemplary graphical user interface screen displays of the scoreboard and leaderboard for various training programs according to various embodiments;
[0047] Figure 10 is substantially a graph of scores and aggregated time taken versus time of a user A according to various embodiments;
[0048] Figure 11 is substantially a graph of scores and aggregated time taken versus time of a user B according to various embodiments;
[0049] Figure 12 is substantially a graph of scores and aggregated time taken versus time of a user C according to various embodiments;
[0050] Figure 13 is substantially a graph of scores and aggregated time taken versus time of a user D according to various embodiments;
[0051] Figure 14 is substantially a graph of scores and aggregated time taken versus time of a user A according to various embodiments;
[0052] Figure 15 is substantially a graph of scores and aggregated time taken versus time of a user B according to various embodiments;
[0053] Figure 16 is substantially a graph of scores and aggregated time taken versus time of a user C according to various embodiments;

[0054] Figure 17 is substantially a graph of scores and aggregated time taken versus time of a user D according to various embodiments;
[0055] Figure 18 is substantially a graph of scores and aggregated time taken versus time of a user E according to various embodiments.
Detailed Description
[0056] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present invention. Additionally, unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs. The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized, and structural and logical changes may be made without departing from the scope of the invention. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
[0057] Several aspects of the devices, systems and computer-implemented methods for training or improving the dynamic cognitive visual functions of a user will now be presented with reference to various devices, systems and computer-implemented methods. Dynamic visual cognitive functions include Visual Acuity (both near (0.3m) and far (3m)), Accommodation Facility and range of Accommodation, Convergence, Stereo Acuity-depth perception, Tracking Abilities-fine pursuits, eye-hand coordination, balance, Visual Memory, Visual Spatial Orientation, and visual and overall attention and sensory integration. These devices, systems and computer-implemented methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “elements”). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
[0058] By way of example, an element, or any portion of an element, or any combination of elements may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
[0059] Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media may include a random-access memory (RAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
[0060] In the specification the term “comprising” shall be understood to have a broad meaning similar to the term “including” and will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps. This definition also applies to variations on the term “comprising” such as “comprise” and “comprises”.
[0061] In order that the invention may be readily understood and put into practical effect, particular embodiments will now be described by way of examples and not limitations, and with reference to the figures. It will be understood that any property described herein for a specific system or device may also hold for any system or device described herein, respectively. It will be understood that any property described herein for a specific computer-implemented method may also hold for any computer-implemented method described herein. Furthermore, it will be understood that for any system or device or computer-implemented method described herein, not necessarily all the components or steps described must be included in the system or method; only some (but not all) components or steps may be included.
[0062] The term “coupled” (or “connected”) herein may be understood as electrically coupled or as mechanically coupled, for example attached or fixed, or just in contact without any fixation, and it will be understood that both direct coupling or indirect coupling (in other words: coupling without direct contact) may be provided.
[0063] To achieve the stated features, advantages and objects, the present disclosure provides solutions that make use of computer hardware and software to train or to improve the dynamic cognitive visual functions of a user. The present disclosure also provides for a device that is configured to train the visual functions of the user, i.e., using the visual pathways to stimulate certain centers in the brain which influence the other corresponding centers. The present disclosure provides a device, system and computer-implemented method that are based on focusing the entire visual system, having a motor and sensory component, and focusing on different viewing distances. This provides the advantage of boosting ocular accommodation, visual acuity, convergence, eye teaming, enhancing the sensory motor system of the vision and integrating the senses of sight, hearing and balance. These advantages can be achieved by providing a device, system and computer-implemented method that stimulates the visual and vestibular systems. Consequently, the visual functions will be well connected to the other elements of the sensory system giving rise to long lasting enhancement of visual functions.
Ocular Accommodation
[0064] Ocular accommodation involves the change of focusing distance and is the ability to shift focus from far to near and vice versa. The main problem with myopia and in majority of the cases with learning difficulties lies in the unequal accommodation of each eye, which greatly disrupts the use of both eyes together. As a result of the strain that arises from using both eyes together, the vision starts to blur and the eye teaming abilities that are essential for accurate tracking (ability to track symbols such as a row of letters) slows down and becomes inaccurate.
[0065] The visual component includes focusing, particularly facility of accommodation, that is, facilitation of the change of the focusing distance. Facility of accommodation is an absolutely necessary function for a regular school day or regular daily activities. It translates into the ability to keep clear focus irrespective of the focusing distance, largely independent of lighting conditions. According to some studies, the speed of ocular accommodation in children is enormous: a child is able to refocus from infinity to 6.7 cm in 350 milliseconds. However, as a result of prolonged usage of various screens, the ability to accommodate and its speed fall sharply, affecting not only vision but the entire sensory system.
[0066] The present disclosure provides for a device, system and computer-implemented method for boosting ocular accommodation of a user by first enhancing the weaker eye (first eye) by stimulating its use. A next step of providing the user with prescribed red/green goggles and having the user match the red/green colors on a display unit can be used to facilitate binocular stimulation. During the process of binocular stimulation, monocular accommodation as well as vergence (conjugate movement of the eyeballs) are also stimulated, which can be translated into visual stamina while using both eyes together.
[0067] Enhancing convergence is provided as a next step to boost ocular accommodation. Convergence is by nature a conjugate movement of both eyes towards the nose; it is crucial for the eyes to stay together during the process of reading and writing. Enhancing convergence is also particularly important in helping children who are academically or behaviorally challenged. The aforementioned steps, when utilized, have been shown to significantly improve the focusing ability, eyesight, tracking, overall eye teaming and academic performance of children who would otherwise be diagnosed as suffering from Attention Deficit Disorder (ADD), Attention Deficit Hyperactivity Disorder (ADHD) or dyslexia.
Sensory Motor System
[0068] The present disclosure is also configured for enhancing the sensory motor system of a user’s vision and this involves both visual and vestibular stimulation. A balancing board or the like can be utilized in conjunction with the device, system or computer-implemented methods as disclosed herein. The user may be tasked to stay on a balancing board and perform visual activities on the device or system disclosed herein. The device and system may calculate, based on a predetermined model, how well the user was able to stay on the balancing board and integrate the balancing with the visual activities on the device or system. The visual exercises together with the physical exercise of using the balancing board enables the enhancement of the connections between the visual and the vestibular system.
[0069] It is well known that good eye-hand coordination is a prerequisite for fine motor skills, while eye-body coordination is very important for normal development of gross motor skills. Gross motor skills are in turn very dependent on the relationship between the visual and vestibular system (sense of balance). Consequently, the present disclosure provides a device, system and computer-implemented method that ensures the visual functions directly affect the vestibular system. An unfavorable relationship between the visual and vestibular systems impairs coordination of movement and diminishes self-awareness.
Sense of Sight Balance and Hearing
[0070] A significant number of children suffer from central sensory processing disorder (CSPD) as a result of prolonged usage of screens, which in most cases causes an imbalance of the senses perceived and makes the integration of the sensory information perceived impossible. The entire sensory system becomes imbalanced as a result and the child is diagnosed with CSPD. These children often undergo very long and very often unsuccessful remediation, or achieve only marginal improvement of their behavioral and academic challenges, with parents spending enormous amounts of money and time trying to help the child. However, such remediation has rarely addressed lifestyle, changes in visual habits, and activities that would help to synchronize the different elements of the sensory system. The present disclosure provides a device, system and computer-implemented method that aims to educate parents about beneficial visual habits which would enhance visual functions and promote the integration of the sensory information perceived. The present disclosure also provides a device, system and computer-implemented method that is intended to ameliorate the effects of CSPD by stimulating the visual and vestibular systems of a user.
Exemplary Embodiment
[0071] The present disclosure provides a system and a device that makes use of computer hardware and software to improve dynamic cognitive visual functions of a user. Various embodiments are provided for systems, and various embodiments are provided for devices. It will be understood that basic properties of the system also hold for the devices and vice versa. Therefore, for the sake of brevity, duplicate description of such properties may be omitted.
[0072] Figure 1 illustrates an embodiment of a training system 100 for improving dynamic cognitive visual function of a user. The training system 100 comprises a multi-faceted, web-deliverable, browser-based system configured to improve the dynamic cognitive visual function of users. The training system 100 comprises a user portal 101, a clinician portal 102, an administration manager 104, a report generator module 105 and a database 103.
[0073] The training system 100 provides the training program 110 through a user portal 101 that is designed to allow users to access the training program 110 via a communication device 120 connected to a communication network 140. The training program 110 is designed to be platform-independent so that it can be delivered over the internet through any communication device 120. The user creates a user account and accesses the training program 110 via the user portal 101 using an identifier such as a user identification and password, as illustrated in Figure 2. Further details of the user portal 101 will be described hereinafter. Every aspect of the user’s progress, scores and results is recorded upon completion of the training program 110, and these can be stored under the user account for further reference and analysis, thereby allowing the user’s progress to be monitored as illustrated in Figure 8. The user’s progress, scores and results can be stored on a cloud-based database 103 and/or on the database of the training system 100. The server makes the user’s data available for review by the supervising clinician through the clinician portal 102. The supervising clinician uses the clinician portal 102 to regularly check on the usage and progress of each user and to provide helpful guidance and coaching remotely or in person during sessions. The training system 100 comprises a report generator module 105 that allows a user or a clinician to access the user’s account to generate a report of the user’s sessions. Access to the user’s account is provided to the clinician via the clinician portal 102. This allows the clinician to monitor the user’s progress and to provide recommendations or to adjust the parameters of the training program 110 accordingly.
[0074] The communication device 120 can comprise a portable computing device such as a laptop, a mobile phone, or any other appropriate storage and/or communication device to exchange data via a web browser and/or the communications network 140. The user portal 101 provides access to users via any communication device 120 located anywhere in the world and allows users to access the training program 110. Users can work on the training program as frequently as their time and schedule permits. Users can also work under the supervision of a clinician who can be located remotely and can access the training program through the clinician portal 102. The communication device 120 includes one or more processors (not shown) and memory (not shown) for storing applications, modules and other data. In one embodiment, the memory includes a large number of memory blocks where calculations may be performed. The memory may be any suitable type of computer readable and programmable memory and is preferably a non-transitory, computer readable storage medium. The communication device also includes one or more display interfaces for viewing content. The display interface permits a user to interact with the training system 100 and its components and functions. This may be facilitated by the inclusion of a user input device, which can include a mouse, keyboard, or any other peripheral device that allows a user to interact with the training system 100 or the communication device 120. It is to be understood that the components and functions of the training system 100 may be represented as one or more discrete systems or workstations, or may be integrated as part of a larger system or workstation.
[0075] The processor (not shown) executes instructions contained in programs, such as the training program 110 within the training system 100, stored in the memory of the communication device 120 or the external database 132. The processor may provide the central processing unit (CPU) functions of a computing device on one or more integrated circuits. As used herein, the term ‘processor’ broadly refers to and is not limited to a single- or multi-core general purpose processor, a special purpose processor, a conventional processor, a graphical processing unit, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, one or more Application Specific Integrated Circuits (ASICs), one or more Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit, a system on a chip (SOC), and/or a state machine.
[0076] The training system 100 is communicably operative via a communications network 140 to a server 130. The server 130 may be owned or operated by a different entity such as a private data service provider or by an intermediary company via the communications network 140. An external database 132 is communicatively coupled to the server 130 and is configured for storing data such as the training system and information pertaining to a user, such information comprising at least one identifier such as name, address, identity card or passport number, age and gender. Alternatively, a database 103 is also provided within the training system 100 and is configured for storing information pertaining to a user, such information comprising at least one identifier such as name, address, identity card or passport number, age and gender. Information pertaining to the scores and results of the user, and analysis of such scores and results, are also stored in the database 103 or the external database 132.
[0077] As used herein, the communication device 120 may exchange information via any communications network 140, such as a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a proprietary network, and/or Internet Protocol (IP) network such as the Internet, an Intranet or an extranet. Each device, module or component within the system may be connected over a network or may be directly connected. A person skilled in the art will recognize that the terms ‘network’, ‘computer network’ and ‘online’ may be used interchangeably and do not imply a particular network embodiment. In general, any type of network may be used to implement the online or computer networked embodiment of the present invention. The network may be maintained by a server or a combination of servers or the network may be serverless. Additionally, any type of protocol (for example, HTTP, FTP, ICMP, UDP, WAP, SIP, H.323, NDMP, TCP/IP) may be used to communicate across the network. The devices as described herein may communicate via one or more such communication networks.
[0078] Figure 2 shows an embodiment of a screenshot of the user portal 101 of the training system 100. To access the training system, the user opens a standard web browser on a communication device 120 connected to the communication network 140 and goes to a predetermined web site containing the training system. The user then logs into the training system via the user portal 101 using a user identification and password. The user portal 101 allows the user to access the training system 100, which is designed to be accessed on a computing device in a treatment center or during a treatment session, or over the internet on any communication device 120 that is connected to the communications network 140. Practically any registered user on any communication device 120 located anywhere in the world can access the user portal 101 and the training programs 110 as frequently as their time and schedule permits, or under the supervision of a clinician who can be located either physically or remotely.

[0079] The training system 100 comprises a training program 110 comprising a plurality of exercises configured to target a plurality of cognitive visual functions. These cognitive visual functions can be improved by performing these exercises, which are based on focusing the entire visual system, having a motor and sensory component, and focusing on different viewing distances. This provides the advantage of boosting ocular accommodation, visual acuity, convergence and eye teaming, enhancing the sensory motor system of the vision and integrating the senses of sight, hearing and balance.
The training program 110 comprises two modes: a first mode 112 and a second mode 111. The first mode requires the user to identify a series of letter-like symbols and/or combinations of letter-like symbols, and the second mode requires the user to identify a series of object or logo symbols and/or combinations of the same. While the two modes achieve the same objective of training and/or improving the dynamic cognitive visual functions of a user, each of the two modes caters to different age groups of children and to the child’s literacy level or ability to recognise the English alphabet. For example, if the child comes from an English-speaking environment, or any other environment that uses the Latin alphabet, and has an acceptable level of literacy, the first mode can be used. Additionally, other than improving the dynamic cognitive visual functions of the user, the first mode provides the further advantage of enhancing the user’s spelling abilities, and therefore reading and writing abilities. If the child comes from a non-English-speaking environment, the second mode, which uses a series of object or logo symbols and/or combinations of the same as the symbols of the game, can be used. The complexity of the symbols enhances the user’s ability to recognise those symbols. For example, very young children and those with low literacy levels can use the second mode of the training program.
[0080] Figure 3 illustrates the various levels of the training program 110. The training program comprises three levels: Level 1 114 includes Grades 1 and 2, Level 2 116 includes Grades 3 and 4 and Level 3 118 includes Grades 5 and 6. Each level provides an increasing difficulty level, with Grade 1 being the easiest and Grade 6 being the most challenging. Depending on whether the training program refers to the first mode (i.e. letters) or the second mode (i.e. objects or logos), each level may comprise a varying number of stages. The “space symbols” level refers to the second mode, and only one level is available for “space symbols”. With reference to the first mode comprising letters, to increase the difficulty of each grade, the following techniques may be used: (i) increasing the complexity level of words; and (ii) increasing the literacy level. Increasing the complexity level of words may include using an increasing number of letters, using less frequently used words, or increasing the number of syllables, as the levels progress. Increasing the literacy level of words may include using words that are aligned with standardized grade-level bands that are based on a child’s age. For example, a 6 year old child may be expected to be able to read a range of words based on the standardized grade-level band of a 6 year old. The range of words that are expected of a 6 year old may also be split into each of the six grades based on the level of difficulty of the words. The objective behind this is that, other than simply seeing the letters, children need to understand the position of the letters within the word and then place them accordingly in order to create the word.
[0081] Figures 5A to 5C illustrate screenshots of different levels of the first mode of the training program. For example, Figures 5A, 5B and 5C may represent different levels of the first mode of the training program. In each of these levels, the letters are arranged in a random order and the user has to spell out the name of the object based on the randomly arranged letters. The spelling of the words for each level is different, as the words used correspond to the age of the user or the literacy level of the user. Letters are placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the display interface of the communication device 101. The letters may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). Letters for the answer are placed at all 3 depths, creating a near-and-far eye exercise for the user. For users who are not confident with spelling, a question mark icon or a hint icon may be provided in the top right corner. Once that icon is activated, the child will be able to see the word briefly, again enhancing working memory, in order to help the child get the correct spelling.
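The placement of letters across the three simulated depths can be sketched as follows. This is a minimal illustration only: the depth names and the 30 cm / 1.5 m / 3 m distances come from the description above, but the assignment strategy and the inverse-distance scaling rule are assumptions, since the specification does not disclose the rendering mathematics.

```python
import random

# Simulated viewing distances in metres, as stated in the description:
# very near (30 cm), near/intermediate (1.5 m) and far (3 m).
DEPTHS = {"very_near": 0.3, "near": 1.5, "far": 3.0}

def assign_depths(letters, rng=random):
    """Scatter the letters of the answer across the three depths.

    The first three letters cycle through the depths so that all 3 depths
    are always used (creating the near-and-far eye exercise); the rest are
    placed at a random depth.
    """
    names = list(DEPTHS)
    placement = []
    for i, letter in enumerate(letters):
        depth = names[i % 3] if i < 3 else rng.choice(names)
        placement.append((letter, depth))
    return placement

def apparent_scale(depth, reference=0.3):
    """Hypothetical render scale: nearer symbols appear larger, scaled
    inversely with simulated distance (scale 1.0 at the 30 cm reference)."""
    return reference / DEPTHS[depth]
```

A word such as "apple" would therefore always have at least one letter at each depth, forcing the eyes to refocus between simulated distances while spelling.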
[0082] Figure 3 also illustrates a screenshot of the second mode 111 of the training program 110, which comprises three levels: Level 1 113 includes Grades 1 and 2, Level 2 115 includes Grades 3 and 4 and Level 3 117 includes Grades 5 and 6. Each level provides an increasing difficulty level, with Grade 1 being the easiest and Grade 6 being the most challenging. A level comprises four stages, and each stage presents a “key” and up to 14 symbols that the player has to use to match the “key” in sequential order. The “key” is made up of 9 random symbols. These 9 symbols are randomly placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the electronic display unit of the device. The symbols may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). The symbols for the answer are placed at all 3 depths, creating a near-and-far eye exercise for the user, as illustrated in Figure 5.
[0083] In another embodiment, the training program 110 brings the symbols displayed on the electronic display from very far to near or from very near to far. For example, one or some of the symbols on the electronic display will be progressively brought closer to the user, from a simulated distance of around 3 metres to 30 centimetres. Alternatively, one or some of the symbols on the electronic display will be progressively moved further from the user, from a simulated distance of around 30 centimetres to 3 metres, i.e. from very near to very far. In another embodiment, the symbols can be either brought closer to the user or moved further from the user within a predetermined range of time. As the user progresses and advances through the levels, the predetermined range of time is reduced accordingly and the speed at which the symbols are brought closer to the user or moved further from the user is increased.
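The motion described above can be modelled as an interpolation of the simulated distance over a traversal time that shrinks with level. This is a sketch under stated assumptions: the 3 m and 30 cm endpoints are from the text, but linear interpolation and the specific base duration, per-level reduction, and minimum duration are hypothetical values chosen for illustration.

```python
def simulated_distance(t, duration, start=3.0, end=0.3):
    """Linearly interpolate the simulated distance (in metres) of a symbol
    at elapsed time t seconds, moving it from `start` (far) to `end`
    (very near) over `duration` seconds. Swap start/end for the reverse
    direction (very near to very far)."""
    t = min(max(t, 0.0), duration)  # clamp to the traversal window
    return start + (end - start) * (t / duration)

def duration_for_level(level, base=8.0, step=1.5, minimum=2.0):
    """Reduce the predetermined range of time as the user advances through
    the levels, which increases the speed of the symbol's motion.
    (base/step/minimum are assumed values, not from the specification.)"""
    return max(base - step * (level - 1), minimum)
```

For example, a symbol at level 1 would take 8 seconds to travel from 3 m to 30 cm, while the same traversal at a higher level completes in as little as 2 seconds.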
[0084] The training system 100 receives a user input from the user symbol by symbol, letter by letter or object by object. The user inputs data into one or more text boxes provided on the display. To enter input data into each of the text boxes, the user clicks on the appropriate symbol and that symbol will appear in the text box or the next empty text box. The user input may be received through a user input device (not shown) of the communication device 101. An example of a user input device may be a keyboard, a mouse, a track pad, a touch screen, an image sensor, a remote sensing device (e.g., Microsoft Kinect), a microphone or any other input device. In one embodiment, the user input device may be a device capable of recognizing a gesture of a user. In one example, a camera capable of transmitting user input information to a computing device may visually identify a gesture performed by the user. Upon visually identifying the gesture of the user, a corresponding user input may be received by the computing device from the camera. In another embodiment, the user input device may be a microphone that includes an acoustic-to-electric transducer or sensor that converts sound waves into one or more electric signals. For example, the microphone may pick up the voice of a user and convert the sound wave into a corresponding electrical signal, which is then converted into a digital representation that may be used as an input. In another embodiment, the input device may be a presence-sensitive screen. A presence-sensitive screen can generate one or more signals corresponding to a location selected by a gesture performed on or near the presence-sensitive screen. In some examples, the presence-sensitive screen detects the presence of an input unit, for example, a finger that is in close proximity to, but does not physically touch, the presence-sensitive screen. The presence-sensitive screen generates a signal corresponding to the location of the input unit.
Signals generated by the selection of the corresponding location are then provided as input data to the training system.
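The mapping from a selected screen location to a symbol, and the filling of the next empty text box described above, can be sketched as follows. The rectangle hit-test and the data structures are assumptions for illustration; the specification only states that the selected symbol appears in the next empty text box.

```python
def symbol_at(location, symbol_rects):
    """Resolve a selection signal (an x, y location from e.g. a
    presence-sensitive screen) to the symbol whose on-screen rectangle
    contains that location, or None if no symbol was hit."""
    x, y = location
    for symbol, (left, top, right, bottom) in symbol_rects.items():
        if left <= x <= right and top <= y <= bottom:
            return symbol
    return None

def fill_next_box(boxes, symbol):
    """Place the selected symbol into the next empty text box, as the
    description requires; returns the box index, or -1 if all are full."""
    for i, value in enumerate(boxes):
        if value is None:
            boxes[i] = symbol
            return i
    return -1
```

The same two steps apply regardless of input device: a gesture, voice command or click is first resolved to a symbol, which is then appended to the answer boxes.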
[0085] Figure 4 shows an exemplary computer-implemented method for improving the dynamic cognitive visual functions of a user based on a first mode of the training program. To access the training system, as shown in step 200, the user opens a standard web browser on a communication device 101 connected to the communication network 140 and goes to a predetermined web site containing the training system. The user then logs into the training system via the user portal 101 using a user identification and password. The user portal 101 allows the user to access the training system 100, which is designed to be accessed on a computing device in a treatment center or during a treatment session, or over the internet on any communication device 101 that is connected to the communications network 140. Once the user has been granted access to the training system 100, the user will be presented with a plurality of levels on the display interface for selection. In one embodiment, the plurality of levels can be pre-configured by the supervising clinician and the user will be presented with a plurality of levels that is based on the user’s age group or literacy level. The user selects one of the predetermined number of levels. For example, as illustrated in Figure 3, each mode includes multiple levels and the user may be presented with three levels: Level 1 which includes Grades 1 and 2, Level 2 which includes Grades 3 and 4 and Level 3 which includes Grades 5 and 6. Each level may also include multiple stages. For example, each level may include eight stages. Typically, the user proceeds to select Grade 1 and to progress to succeeding grades and levels. At step 201, each stage of a selected Grade will be loaded for the user. The display interface will proceed to display a plurality of letters on the display. The letters are arranged in a random order and the user has to spell out the name of the object based on the randomly arranged letters.
The letters are additionally arranged on the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the display interface of the communication device 101. The letters may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). Letters are placed at all 3 depths to create a near-and-far eye exercise for the user. The objective for the user is to input the randomly arranged letters into each of the text boxes placed at the bottom of the display interface to spell out a word based on the letters shown. At step 202, the user will key in the input through the user input device of the communication device 101. The training program will determine if the input keyed in by the user corresponds to the correct answer. If the training program determines that an incorrect answer has been keyed in, at step 203, the player will be allowed to repeat the stage until the training program determines that the user has input the correct answer. If a third attempt to input the correct answer is unsuccessful, at step 204, the user will be allowed to repeat the stage again. However, in this case, the size of the letters will be increased by 10%. If the user continues to be unsuccessful in inputting the correct answer, the size of the letters will continue to be increased by 10% successively until the user inputs the correct answer, up to a maximum predetermined size.
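The progressive enlargement rule at steps 203-204 can be expressed compactly: no enlargement for the first three attempts, then a compounding 10% increase per further attempt, capped at a maximum size. The 10% growth is from the description; the cap of 2x the base size is an assumed value, since the specification only says "a maximum predetermined size".

```python
def letter_size_for_attempt(attempt, base=1.0, growth=0.10, max_scale=2.0):
    """Scale factor for the displayed letters on a given attempt number.

    Attempts 1-3 use the base size; each attempt after the third compounds
    a 10% enlargement, up to the predetermined maximum (assumed 2x here).
    """
    if attempt <= 3:
        return base
    scale = base * (1 + growth) ** (attempt - 3)
    return min(scale, max_scale)
```

So a fourth attempt shows letters at 110% size, a fifth at 121%, and so on until the cap is reached or the user answers correctly.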
[0086] At step 205, once the user keys in the correct answer, the training program will calculate the score based on the number of correct answers. At step 206, each user is awarded an additional 100 points to the total score at the end of each stage, and a certain number of points are deducted from these 100 points depending on the following criteria: (i) whether the player used the hint; (ii) whether there were more than 2 unsuccessful attempts; and (iii) whether there was only 1 failed attempt. In the case of a player utilising a hint, as shown in steps 207 and 210, 20 points will be deducted from the total score. In the case of a player with more than 2 failed attempts, as shown in steps 208 and 211, 75 points will be deducted from the total score. In the case of a player with only one failed attempt, as shown in steps 209 and 212, 50 points will be deducted from the total score. It is possible that a user may satisfy more than one of the three criteria listed above, and the points will be cumulatively deducted accordingly from the total score. Once the total score is calculated, as in step 214, the total score will be saved in the training program, in the database or external database, or in the memory of the communication device 101. Once the user has completed a stage within the selected Grade, the entire sequence of steps 201 to 214 will be repeated until the user has completed all the stages within the Grade and the levels.
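The per-stage deductions at steps 206-212 can be sketched as a single scoring function. The 100-point award and the 20/75/50-point deductions are from the description; note that the specification does not state what happens with exactly two failed attempts, so this sketch assumes no attempt-based deduction in that case, and treats the two attempt criteria as mutually exclusive while the hint deduction stacks cumulatively.

```python
def stage_score(used_hint, failed_attempts):
    """Score awarded for one stage under the deduction rules described.

    Starts from the 100 points awarded at the end of the stage, then:
      - deducts 20 points if the hint was used (steps 207/210),
      - deducts 75 points for more than 2 failed attempts (steps 208/211),
      - deducts 50 points for exactly 1 failed attempt (steps 209/212).
    Deductions for multiple satisfied criteria are cumulative.
    """
    score = 100
    if used_hint:
        score -= 20
    if failed_attempts > 2:
        score -= 75
    elif failed_attempts == 1:
        score -= 50
    return score
```

For example, a player who used the hint and failed more than twice would keep only 5 of the 100 stage points.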
[0087] Although no time limit is provided for each of the individual stages within the levels, each user is given a predetermined time to complete each level. For example, a total of approximately 10 minutes may be provided for completion of each level. The training program aggregates the time taken to complete each stage into the total time taken over all the stages within the level. Should the total time taken to complete the level exceed the predetermined aggregate time, at step 216, the training program will save the total score up to that point in time to the database and end the exercise. Once the user has completed a level, the date and time played, level played, total score and the aggregate time taken will be tabulated and saved in the database. The user will also be able to access the aforesaid information on the training program. An example of a screen shot showing the aforesaid information is shown in Figure 8. Figures 9A and 9B show the same information but compare the tabulated score with those of other users in the training program.
[0088] Figure 6 shows an exemplary computer-implemented method for improving the dynamic cognitive visual functions of a user based on a second mode of the training program. To access the training system, as shown in step 300, the user opens a standard web browser on a communication device 101 connected to the communication network 140 and goes to a predetermined web site containing the training system. The user then logs into the training system via the user portal 101 using a user identification and password. The user portal 101 allows the user to access the training system 100, which is designed to be accessed on a computing device in a treatment center or during a treatment session, or over the internet on any communication device 101 that is connected to the communications network 140. Once the user has been granted access to the training system 100, the user will be presented with a predetermined number of levels. For example, as illustrated in Figure 3, each mode may include multiple levels and the user may be presented with three levels: Level 1 which includes Grades 1 and 2, Level 2 which includes Grades 3 and 4 and Level 3 which includes Grades 5 and 6. Each level may also include multiple stages. For example, each level may include four stages. Typically, the user will proceed to select Grade 1 and to progress to succeeding levels. At step 301, each stage of a selected Grade will be loaded for the user. The display interface will proceed to display a plurality of symbols on the display. Since the training program relates to the second mode, and each level comprises four stages, each stage will present a “key” and up to 14 symbols that the player has to use to match the “key” in sequential order. The “key” is made up of 9 random symbols. These 9 symbols are randomly placed around the screen at 3 different depths - Very Near, Near and Far.
These depths refer to the simulated distances from the user’s eyes to a simulated depth within the electronic display unit of the device. The symbols may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). The symbols for the answer are placed at all 3 depths, creating a near-and-far eye exercise for the user, as illustrated in Figure 5. At step 302, the user will key in the input through the user input device of the communication device 101. The training program will determine if the input keyed in by the user corresponds to the correct answer. If the training program determines that an incorrect answer has been keyed in, at step 303, the player will be allowed to repeat the stage until the training program determines that the user has input the correct answer. If a third attempt to input the correct answer is unsuccessful for the same stage, at step 304, the user will be allowed to repeat the stage again. However, in this case, the size of the symbols or the keys will be increased by 10%. If the user continues to be unsuccessful in inputting the correct answer, the size of the symbols and the keys will continue to be increased by 10% successively until the user inputs the correct answer, up to a maximum predetermined size.
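The stage setup and answer check for the second mode can be sketched as follows. The key length of 9, board of up to 14 symbols, and sequential-order matching are from the description; the symbol pool and generation strategy are illustrative assumptions.

```python
import random

def make_stage(symbol_pool, key_length=9, board_size=14, rng=random):
    """Build one second-mode stage: a 'key' of 9 random symbols, and a
    board of up to 14 symbols that contains the key plus distractors,
    shuffled before being scattered across the three depths."""
    key = [rng.choice(symbol_pool) for _ in range(key_length)]
    board = key + [rng.choice(symbol_pool)
                   for _ in range(board_size - key_length)]
    rng.shuffle(board)
    return key, board

def matches_key(user_input, key):
    """The answer is correct only if the selected symbols match the key
    in sequential order (same symbols, same positions)."""
    return list(user_input) == list(key)
```

Because the key must be matched in order, selecting the right symbols in the wrong sequence still counts as an unsuccessful attempt.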
[0089] At step 305, once the user keys in the correct answer, the training program will calculate the score based on the number of correct answers. At step 306, each user is awarded an additional 100 points to the total score at the end of each stage, and a certain number of points are deducted from these 100 points depending on the following criteria: (i) whether there were more than 2 unsuccessful attempts; and (ii) whether there was only 1 failed attempt. In the case of a player with more than 2 failed attempts, as shown in steps 307 and 309, 75 points will be deducted from the total score. In the case of a player with only one failed attempt, as shown in steps 308 and 310, 50 points will be deducted from the total score. It is possible that a user may satisfy more than one of the criteria listed above, and the points will be cumulatively deducted accordingly from the total score. Once the total score is calculated, as in step 312, the total score will be saved in the training program, in the database or external database, or in the memory of the communication device 101. Once the user has completed a stage within the selected Grade, the entire sequence of steps 301 to 312 will be repeated until the user has completed all the stages within the Grade.
[0090] Although no time limit is provided for each of the individual stages within the levels, each user is given a predetermined time to complete each level. For example, a total of approximately 10 minutes may be provided for completion of each level. The training program aggregates the time taken to complete each stage into the total time taken over all the stages within the level. Should the total time taken to complete the level exceed the predetermined aggregate time, at step 314, the training program will save the total score up to that point in time to the database, at step 313, and end the exercise. Once the user has completed a level, the date and time played, level played, total score and the aggregate time taken will be tabulated and saved in the database. The user will also be able to access the aforesaid information on the training program. An example of a screen shot showing the aforesaid information is shown in Figure 8.
[0091] Both the first mode 112 and the second mode 111 of the training program 110 have two key features. Firstly, the size of the letters or the object or logo symbols is increased after a number of failed attempts by the user to provide the correct answer; and secondly, the size of the letters or the object or logo symbols is increased progressively by a certain percentage, corresponding to the number of failed attempts beyond a certain number of attempts, such increase in size being subject to a predetermined maximum size. As illustrated in Figures 2 and 3, preferably, the size of the letter-like symbols and/or the object or logo symbols is increased by about 10% with each attempt after the third attempt by the user.
[0092] Figures 7A and 7B illustrate screenshots of different stages or levels of the second mode of the training program. For example, Figures 7A and 7B may represent different levels or stages of the second mode of the training program. In each of these stages or levels, the objects or logos are arranged in a random order and the user has to input the objects in the same order as the key that is visible at the top of the screen. The objects are placed around the screen at 3 different depths - Very Near, Near and Far. These depths refer to the simulated distances from the user’s eyes to a simulated depth within the display interface of the communication device 101. The objects may be positioned very near (30 cm), near/intermediate (1.5 m) and far (3 m). The objects are placed at all 3 depths, creating a near-and-far eye exercise for the user.
Experimental Data
[0093] Experimental data was collected from participants in test trials who attended in-house vision therapy sessions. The subjects were children between 6 and 11 years old, diagnosed with functional vision deficiencies. The aim of the sessions was to assess the positive effect on visual functioning in individuals with deficient functional vision and to measure their improvement from performing the training programs.
[0094] The data was collected and sampled from the training programs and therapist records. For better illustration, the data is further divided into 4 categories:
1. Grades 1 and 2 (maximum score 800 pts)
2. Grades 3 and 4 (maximum score 800 pts)
3. Grades 5 and 6 (maximum score 800 pts)
4. Space symbols (maximum score 400 pts).
[0095] Time and score in each category were recorded and noted as an indicator of progress, and dates were used to track and reflect a subject’s performance during in-house vision therapy sessions.
[0096] When users did not do well in Grades 1&2, 3&4, 5&6 and the recognition of object or logo symbols and/or combinations, the convergence and binocularity on chart reading were poor, while challenges with lateralization and/or balance were more pronounced. Overall, the therapist’s impression was that good performance in therapy sessions is reflected in shorter reaction times and higher scores on all segments of the training programs. The aggregate scores of the training programs correspond to the user’s ability to see the letters on the chart at distance and at near, which relates to the accommodation ability of the user.
Results (first group)
Figure 10. Table and graph 1. User A.
[0097] The graph illustrated in Figure 10, Table and graph 1 of User A, shows an upward trend in score and a decline in the aggregate time that user A needs to complete the level. This means that user A has become more accurate, with a faster reaction time, after performing the training program. User A had two blocks of in-house vision therapy.
[0098] On the second day of performing the training program, the score stood at 720 points, reaching its maximum of 800 points on the 14th day and showing steady growth in all categories, whereas the aggregate time taken was reduced from 320 to 160 seconds. It was reported that on days when user A did not do well on the training program relating to the second mode, i.e. the recognition of object or logo symbols and/or combinations, the convergence was poor, as was binocularity on chart reading. However, high scores on grades 1 and 2 and grades 3 and 4 corresponded with good performance in vision therapy sessions and better focusing abilities.
Figure 11. Table and graph 2. User B.
[0099] User B did one block of in-house vision therapy and performed the training program 14 times, during which User B displayed a faster aggregate time taken and higher scores on grades 5 and 6: 720 points and 460 seconds on the first day and 730 points and 282 seconds on the last day. For the training program relating to the second mode, i.e. the recognition of object or logo symbols and/or combinations, user B needed less aggregate time to complete the program, from 210 seconds to 159 seconds respectively, but the score was at its maximum of 400 points throughout.
[00100] It was noted that when User B required a higher aggregate time for spelling on grades 3 & 4 or grades 5 & 6, the chart reading was slower with less precision on binocular activities, the range of convergence was significantly reduced and challenges with lateralization and/or balance were more pronounced.
Figure 12 - Table and graph 4. User C.
[00101] User C’s results showed steady growth in score, from 640 points on the first day to 800 points on the last day of performing the training program, whereas the aggregate time taken was significantly reduced from 640 seconds in the first session to 266 seconds in the last session. User C underwent one block of in-house vision therapy.
Figure 13 - Table 5 and graph 5. User D.
[00102] User D showed great improvement in speed, with the aggregate time taken decreasing from 278 to 114 seconds, and a score reflecting accuracy that reached 780 points on grades 3 & 4 and 400 points for performing the training program relating to the recognition of object or logo symbols and/or combinations. User D was especially good in the recognition of object or logo symbols and/or combinations, where he could reach the maximum score of 400 points in just 87 seconds towards the end of the therapy sessions. User D underwent two blocks of in-house vision therapy sessions.
Results (second group)
Figure 14 - Table 1 and graph 1. User A.
[00103] User A underwent one block of in-house vision therapy, and his chart and graph analysis reflects good results in score, which rose from 605 to 800 points on grades 1 & 2, 3 & 4 and 5 & 6. The aggregate time taken to complete the training program also dipped from 294 to 219 seconds respectively. User A performed well on the training program relating to the recognition of object or logo symbols and/or combinations, where he could reach the maximum of 400 points in just 175 seconds.
Figure 15 - Table 2 and graph 2. User B.
[00104] User B completed two blocks of in-house vision therapy, during which he showed fluctuations in reaction time and poor stamina, as reflected in his chart and graph results. User B managed to score the maximum of 800 points, or 400 points on the recognition of object or logo symbols and/or combinations, on most of the days; however, he needed more time to finish the tasks. Interestingly, User B needed more time to perform the training program relating to the recognition of object or logo symbols and/or combinations.
Figure 16 - Table 3 and graph 3. User C.
[00105] User C showed steady progress in performing the training program. He did extremely well on the training program relating to the recognition of object or logo symbols and/or combinations, often finishing the level in just 49 seconds, which is considered a very short aggregate time, and the maximum of 400 points could be achieved on most days. Grades 3 and 4, and 5 and 6 also showed a shorter aggregate time taken and the ability to reach higher score bands. User C underwent two blocks of in-house therapy.
Figure 17 - Table 4 and graph 4. User D.
[00106] User D underwent one block of in-house vision therapy and showed an increase in score and a decrease in reaction time both on grades 3 and 4 and for completing the training program relating to the recognition of object or logo symbols and/or combinations. User D went from 730 points to 800 points and from 334 to 144 seconds respectively on grades 3 and 4, whereas for the training program relating to the recognition of object or logo symbols and/or combinations, the score went from a maximum of 400 points to 450 points on day 9 and 148 seconds on the last day of performing the training program.
Figure 18 - Table 5 and graph 5. User E.
[00107] User E did exceptionally well in completing the training program relating to the recognition of object or logo symbols and/or combinations, and grades 3 and 4, where he could finish the training program in just 41 and 47 seconds respectively. User E became much faster in processing visual information and had a short aggregate time taken; however, his score plateaued on the 3rd day at 740 points.
[00108] The present disclosure provides a training system and training programs for improving dynamic cognitive visual functions in users and can be applied to the following individuals:
• Individuals (both children and adults) with compromised clarity of vision, such as myopia (short-sightedness) and long-sightedness. The degree or extent of the exercises will depend on the severity of the problem. After completing the training programs, individuals with minor issues to begin with would advantageously be able to see without glasses, while individuals with medium and/or difficult issues to begin with would benefit from the training program by being able to lower the existing prescription of their glasses and to prevent any further increase of the same.
• Children and young adults with learning difficulties. The training programs will address existing focusing issues, which will affect their ability to sustain attention while focused on a task. Studies have shown that academic abilities are greatly enhanced once the visual functions are improved, as the individual is now able to spend less time and energy to decode visual symbols and more time for higher cognitive processes such as focusing on comprehension, memory and problem solving.
• Sports vision for professional sportsmen. Many professional sportsmen whose work requires good eye-hand coordination have worked on their visual abilities and eye-hand coordination. The training programs will help tremendously in the execution of precise and timely coordinated hand or foot movements and enhance their balance, eye-hand and eye-body coordination, which is crucial in sports performance.
• Individuals above the age of forty years who are affected by presbyopia.
Presbyopia refers to blurred vision at close range that typically occurs after the age of forty. However, presbyopia may also occur earlier than the age of forty as a result of lifestyle reasons such as prolonged use of the eyes. The training programs will work on the focusing abilities of individuals above the age of forty. The training programs will, however, be modified for individuals with this problem below the age of forty, where the program will be based much more on relaxation of the accommodation and on eye exercises that ease the movements of the eye muscles and help them to relax.
• Individuals who would like to prevent visual issues. The training programs will focus on individuals who have good visual functions but are eager to retain them at that level. This is particularly important for young individuals who spend long hours in front of computer, tablet or smart phone screens.
• Individuals with visual issues and also with squint (strabismus) and other forms of binocular (use of both eyes together) anomalies. Specific training programs will be included in the training system that may be incorporated from the beginning of the programme. However, the individual will be required to be under the supervision of an eye care provider while on the programme. Manuals will also need to be produced and distributed to the eye care provider. [00109] While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.

Claims

1. A training system for improving visual cognitive functions of a user comprising:
a training program configured to be operable on a communication device and configured to:
present at least one training mode on an electronic display of the communication device for a user to select;
display a plurality of symbols on the electronic display, wherein each of the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes;
receive an input from the user via an input device;
determine whether the input received from the user corresponds correctly to the plurality of symbols;
adjust a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and
output a performance score based on the input received from the user.
2. The training system according to claim 1, wherein the parameter of each of the plurality of symbols includes the size of each of the plurality of symbols.
3. The training system according to claim 1, wherein the step of determining whether the input received from the user corresponds correctly to the plurality of symbols includes determining whether the input received matches a word of the English language based on the number of the plurality of symbols.
4. The training system according to claim 1, wherein the plurality of symbols on the display includes any one of the following: a letter, a combination of letters, an object or a logo.
5. The training system according to claim 1, wherein each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from 30cm to 3.0m.
6. The training system according to claim 1, wherein each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from very near to far.
7. The training system according to claim 1, wherein the step of adjusting a parameter of each of the plurality of symbols includes increasing the size of each of the plurality of symbols by a predetermined percentage level when the input received does not correspond to each of the plurality of symbols.
8. The training system according to claim 7, wherein the predetermined percentage level is approximately 10%.
9. The training system according to claim 1, wherein the training mode comprises a plurality of levels that progressively increase in difficulty as the user completes each level.
10. The training system according to claim 1, wherein selecting the training mode is based on one of the following criteria: the user’s age, the user’s literacy level or the user’s ability to speak English.
11. The training system according to claim 1, wherein the training program is further configured to compute the aggregated time taken by the user to complete the input via the input device.
12. A device for improving visual cognitive functions of a user comprising:
a memory;
one or more processors coupled with the memory, wherein the memory includes processor executable code that, when executed by the one or more processors, causes the processors to perform operations including:
presenting at least one training mode configured for display on a communication device for a user to select;
displaying a plurality of symbols on the display, wherein the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes;
receiving an input from the user via an input device;
determining whether the input received from the user corresponds correctly to the plurality of symbols;
adjusting a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and
outputting a performance score based on the input received from the user.
13. The device according to claim 12, wherein the parameter of each of the plurality of symbols includes the size of each of the plurality of symbols.
14. The device according to claim 12, wherein determining whether the input received from the user corresponds correctly to the plurality of symbols includes determining whether the input received matches a word of the English language based on the number of the plurality of symbols.
15. The device according to claim 12, wherein the plurality of symbols on the display includes any one of the following: a letter, a combination of letters, an object or a logo.
16. The device according to claim 12, wherein each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from 30cm to 3.0m.
17. The device according to claim 12, wherein each of the plurality of symbols is positioned at a predetermined simulated distance from the user’s eyes, wherein the predetermined simulated distance of each of the plurality of symbols can range from very near to far.
18. The device according to claim 12, wherein adjusting a parameter of each of the plurality of symbols includes increasing the size of each of the plurality of symbols by a predetermined percentage level when the input received does not correspond to each of the plurality of symbols.
19. The device according to claim 18, wherein the predetermined percentage level is approximately 10%.
20. The device according to claim 12, wherein the training mode comprises a plurality of levels that progressively increase in difficulty as the user completes each level.
21. The device according to claim 12, wherein selecting the training mode is based on one of the following criteria: the user’s age, the user’s literacy level or the user’s ability to speak English.
22. The device according to claim 12, wherein the operations further include computing the aggregated time taken by the user to complete the input via the input device.
23. A computer-implemented method for improving visual cognitive functions of a user comprising:
presenting at least one training mode configured for display on a communication device for a user to select;
displaying a plurality of symbols on the display, wherein the plurality of symbols are positioned at a predetermined simulated distance from the user’s eyes;
receiving an input from the user via an input device;
determining whether the input received from the user corresponds correctly to the plurality of symbols;
adjusting a parameter of the plurality of symbols when the input received from the user corresponds incorrectly to the plurality of symbols; and
outputting a performance score based on the input received from the user.
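The adaptive loop recited in the claims — display symbols, check the user's input against them, enlarge the symbols by a predetermined percentage (approximately 10%, per claims 8 and 19) on an incorrect response, and output a performance score — can be sketched as follows. This is an illustrative sketch only: the application discloses no implementation, and all function and parameter names here are hypothetical.

```python
# Hypothetical sketch of the claimed adaptive training loop.
# The ~10% growth factor follows claims 8 and 19; everything else
# (names, score definition) is an illustrative assumption.

def run_trial(symbols: str, user_input: str, size: float, growth: float = 0.10):
    """Compare the user's input with the displayed symbols.

    Returns (correct, new_size): on an incorrect response the symbol
    size is increased by the predetermined percentage level.
    """
    correct = user_input == symbols
    if not correct:
        size *= 1.0 + growth
    return correct, size


def performance_score(results: list) -> float:
    """Score output based on the inputs received: fraction correct."""
    return sum(results) / len(results) if results else 0.0
```

For example, a correct answer leaves the symbol size unchanged, while a miss grows a 10.0-unit symbol to roughly 11.0 units before the next trial.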
PCT/SG2019/050273 2018-05-28 2019-05-27 A system and device for improving dynamic cognitive visual functions in a user WO2019231397A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10201804477X 2018-05-28
SG10201804477X 2018-05-28

Publications (1)

Publication Number Publication Date
WO2019231397A1 true WO2019231397A1 (en) 2019-12-05

Family

ID=68699042

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2019/050273 WO2019231397A1 (en) 2018-05-28 2019-05-27 A system and device for improving dynamic cognitive visual functions in a user

Country Status (1)

Country Link
WO (1) WO2019231397A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070288411A1 (en) * 2006-06-09 2007-12-13 Scientific Learning Corporation Method and apparatus for developing cognitive skills
US20100188637A1 (en) * 2007-04-13 2010-07-29 Nike, Inc. Unitary Vision Testing Center
US20110116047A1 (en) * 2004-09-03 2011-05-19 Uri Polat System and method for vision evaluation
US20110300522A1 (en) * 2008-09-30 2011-12-08 Universite De Montreal Method and device for assessing, training and improving perceptual-cognitive abilities of individuals
US20120238831A1 (en) * 2011-03-18 2012-09-20 Jacob Benford Portable Neurocognitive Assesment and Evaluation System
US20160210870A1 (en) * 2015-01-20 2016-07-21 Andrey Vyshedskiy Method of improving cognitive abilities
US20170025033A1 (en) * 2014-03-06 2017-01-26 Matthias Rath Computer-implemented method and system for testing or training a users cognitive functions

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU196218U1 (en) * 2019-12-15 2020-02-19 Леонид Евгеньевич Селявко Quadrangular chip for group exercises on the restoration of visual memory in patients with a neurological clinic
CN111265391A (en) * 2020-01-22 2020-06-12 张秀丽 Control method and device of visual training system and storage medium
CN111265391B (en) * 2020-01-22 2021-11-09 广东目视远眺信息科技有限公司 Control method and device of visual training system and storage medium
RU197191U1 (en) * 2020-03-02 2020-04-09 Леонид Евгеньевич Селявко Digital chip for group exercises on restoration and preventive training of visual-spatial memory
RU197668U1 (en) * 2020-03-10 2020-05-21 Леонид Евгеньевич Селявко Chip with letters for group classes on the restoration and preventive training of visual-spatial memory
RU197669U1 (en) * 2020-04-06 2020-05-21 Леонид Евгеньевич Селявко Round chip with concentric recesses for group corrective-developing exercises and training visual-spatial memory and fine motor skills
CN111481411A (en) * 2020-04-20 2020-08-04 绍兴启视电子科技有限公司 Control system of goggles

Similar Documents

Publication Publication Date Title
WO2019231397A1 (en) A system and device for improving dynamic cognitive visual functions in a user
Lai et al. A comparative study on the effects of a VR and PC visual novel game on vocabulary learning
US20200306124A1 (en) Method and apparatus for treating diplopia and convergence insufficiency disorder
US20030232319A1 (en) Network-based method and system for sensory/perceptual skills assessment and training
Whitehill et al. Towards an optimal affect-sensitive instructional system of cognitive skills
Winter et al. Where is the evidence in our sport psychology practice? A United Kingdom perspective on the underpinnings of action.
Boon et al. Treatment and compliance with virtual reality and anaglyph‐based training programs for convergence insufficiency
WO2022187279A1 (en) Systems, methods, and devices for vision assessment and therapy
Backus et al. Use of virtual reality to assess and treat weakness in human stereoscopic vision
CN117438065B (en) Data processing method, system and storage medium of VR vision training instrument
Thelwell et al. Can reputation biases influence the outcome and process of making competence judgments of a coach?
Faltaous et al. Understanding challenges and opportunities of technology-supported sign language learning
Brata et al. Virtual reality eye exercises application based on bates method: a preliminary study
Getman A commentary on vision training
Boon et al. Vision training; Comparing a novel virtual reality game of snakes with a conventional clinical therapy
EP3461394A1 (en) Method and system for adapting the visual and/or visual-motor behaviour of a person
CN113143705B (en) Vision training method and system for improving eyesight
Vice A new era of assistive technology for patients with low vision
CN103919665A (en) Multimedia visual training system
Braun Flexible Methodology for Assisted Small School Children Investigation
Longshore et al. Mindfulness-and Acceptance-Based Approaches to the Treatment of Athletes and Coaches
Hussaindeen Binocular Vision Anomalies and Normative Data BAND of Binocular Vision Parameters among School Children Between 7 and 17 Years of Age in Rural and Urban Tamilnadu
Kurtel et al. Developing Eye Tracking Exercise System for Treatment of Lazy Eye
Khabbaz et al. Designing a Serious Game for Children with Autism using Reinforcement Learning and Fuzzy Logic
Podugolnikova Impact of binocular vision impairments on reading skills in first-year schoolchildren with high visual acuity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19812386

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19812386

Country of ref document: EP

Kind code of ref document: A1