GB2622119A - Head mounted device and methods for training peripheral vision - Google Patents

Head mounted device and methods for training peripheral vision

Info

Publication number
GB2622119A
GB2622119A GB2300972.3A GB202300972A
Authority
GB
United Kingdom
Prior art keywords
user
visual stimuli
positions
visual
stimuli
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2300972.3A
Other versions
GB202300972D0 (en)
GB2622119A8 (en)
Inventor
Michael Falk Geoffrey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of GB202300972D0
Priority to PCT/GB2023/052230 (published as WO2024047338A1)
Publication of GB2622119A
Publication of GB2622119A8
Legal status: Pending

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00Exercisers for the eyes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/06Radiation therapy using light
    • A61N5/0613Apparatus adapted for a specific treatment
    • A61N5/0618Psychological treatment
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/01Constructive details
    • A61H2201/0188Illumination related features
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/16Physical interface with patient
    • A61H2201/1602Physical interface with patient kind of interface, e.g. head rest, knee support or lumbar support
    • A61H2201/1604Head
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/16Physical interface with patient
    • A61H2201/1602Physical interface with patient kind of interface, e.g. head rest, knee support or lumbar support
    • A61H2201/165Wearable interfaces
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5023Interfaces to the user
    • A61H2201/5048Audio interfaces, e.g. voice or music controlled
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5058Sensors or detectors
    • A61H2201/5092Optical sensor
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/0005Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
    • A61M2021/0044Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the sight sense

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Rehabilitation Therapy (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Pain & Pain Management (AREA)
  • Epidemiology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Engineering & Computer Science (AREA)
  • Psychology (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method for relaxing gaze and/or training attention to peripheral vision, comprising providing visual stimuli 107 simultaneously to the left and right monocular regions of a subject's peripheral vision, and a system comprising a head mounted device 100 configured to provide such visual stimuli. The visual stimuli 107 may comprise one or more light elements on a pair of elongate elements 102, 101. The headgear may be headphones, a headband, a hat, or glasses. The device may receive user input in response to the visual stimuli. The visual stimuli may be based on an indicated or detected level of relaxation of the user, e.g. breathing of the user. The visual stimuli may be based on sound, e.g. a rhythmic beat or music. The device may provide visual stimuli 107 at a plurality of angular positions to the left and right from the centre of the user's vision, and the spacing between the left and right visual stimuli 107 may increase during a training session.

Description

HEAD MOUNTED DEVICE AND METHODS FOR TRAINING PERIPHERAL VISION
FIELD
The present invention relates to devices for training relaxation of gaze by directing attention to peripheral vision.
BACKGROUND
Having a relaxed gaze and open awareness of events in the peripheral visual field is important for many activities where there is a need for heightened awareness of one's general surroundings as opposed to singular focus on a tight central point, or foveal vision. For example, sport activities may require an awareness of movement, shape and colour in the extremities of vision, for instance the movement of other players, whilst keeping gaze anchored on the main focus of attention, for instance the nearest opponent, net or ball.
Similarly, a tracker in a combat or hunting situation may wish to keep an open visual awareness to detect small changes in colour or movement in the widest possible visual field.
Techniques to soften focus and move attention away from a singular focus to the wider peripheral field are also used by therapists and Neuro Linguistic Programming (NLP) counsellors as a method to promote relaxation, reduce negative thought patterns and increase feelings of presence and calm in patients.
Finally, as an increasing proportion of people's lives are spent with attention tightly focussed on small screens, there is a need to actively help people disengage from this behaviour, which has been linked to tension and stress, at regular times throughout the day.
Previous technology in the field of peripheral vision training has presented various drawbacks which have prevented it from becoming accessible for wider use, for example outside a laboratory or other controlled setting. Furthermore, previous technology has generally concentrated on visual recognition of stimuli in the area of peripheral vision that lies just outside the central, foveal region, and does not address vision in the wider peripheral regions.
The Applicant accordingly believes that there remains scope for improvements to technologies for training relaxation of gaze by directing attention to peripheral vision.
SUMMARY
In one aspect, the present invention provides a system for relaxing gaze and/or training attention to peripheral vision comprising: a head mounted device configured to provide visual stimuli simultaneously to the left and right monocular regions of a user's peripheral vision.
In another aspect, the present invention provides a method for relaxing gaze and/or training attention to peripheral vision comprising: providing visual stimuli simultaneously to the left and right monocular regions of a subject's peripheral vision.
In this regard, the left monocular region of a user's peripheral vision is the region which can be viewed by the left eye only (and not the right eye). Likewise, the right monocular region of a user's peripheral vision is the region which can be viewed by the right eye only (and not the left eye). This is in contrast to the binocular region which can be viewed by both eyes.
The Applicant has recognised that providing visual stimuli simultaneously to the left and right monocular regions can assist with training the user's peripheral vision whilst encouraging the user to keep a central focus. In such configurations, because the left and right visual stimuli can only be seen by a respective right and left eye, the user is unlikely to improve their perception of these stimuli by shifting their gaze away from a central focus.
Thus, to succeed the user will naturally anchor their gaze centrally, without requiring any additional instructions, in order to keep in view both left and right stimuli simultaneously. (In this regard, a central focus corresponds to a user's focus being directed generally towards their centre of vision, i.e. in the forwards direction).
Advantageously, it is therefore not necessary to measure a user's compliance with maintaining a central focus (e.g. by eye tracking or by a user self-reporting compliance), and accordingly in embodiments a user's compliance with maintaining a central focus is not measured.
In this regard, providing visual stimuli simultaneously to the left and right monocular regions comprises providing a visual stimulus to the left monocular region of a user's vision at the same time as providing a visual stimulus to the right monocular region of the user's vision, e.g. such that the respective time intervals at which the left and right visual stimuli are provided at least partially overlap, and in some embodiments fully overlap (e.g., and preferably, such that a start time and/or an end time of the right and left visual stimuli is the same, preferably such that the right and left stimuli are provided for exactly the same period of time).
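The overlap condition described above (partial or full overlap of the time intervals over which the left and right stimuli are provided) can be expressed as a simple check. This is an illustrative sketch only, not part of the patent disclosure; the function names and the (start, end) interval representation are assumptions:

```python
def stimuli_overlap(left, right):
    """Return True if the left and right stimulus intervals overlap at all.

    Each interval is a (start, end) tuple in seconds. Partial overlap is
    sufficient for the stimuli to count as simultaneous in this sense.
    """
    l_start, l_end = left
    r_start, r_end = right
    return max(l_start, r_start) < min(l_end, r_end)


def fully_simultaneous(left, right):
    """Preferred case in the text: identical start and end times."""
    return left == right
```

For example, `stimuli_overlap((0.0, 1.0), (0.5, 1.5))` is true (partial overlap), while intervals that merely touch end-to-start do not overlap.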
The visual stimuli provided simultaneously to the left and right monocular regions of a user's vision may also be referred to herein as a "pair of left and right visual stimuli" or a "pair of visual stimuli".
For most humans, the monocular region of vision comprises angular positions from about 60 to about 110 degrees from the centre of vision (in the left and right directions, for the left and right monocular regions respectively). Thus, in embodiments, providing visual stimuli in the left and right monocular regions respectively of the user's peripheral vision comprises providing visual stimuli at one or more angular positions to the left and right respectively from the centre of a user's vision, the one or more angular positions preferably being from about 60 degrees to about 110 degrees from the centre of vision of the user in the left and right directions. This allows the widest possible area of peripheral vision to be trained.
The centre of vision can be taken as the direction pointing forwards from the bridge of the user's nose. The angular position of a visual stimulus in the left or right direction relative to the centre of a user's vision is measured as the angle between the forwards direction from the bridge of the user's nose and the visual stimulus, as measured along a horizontal plane (i.e. being the angle along the horizontal meridian). Accordingly, the direction straight ahead (forwards) of the user corresponds to an angular position of zero degrees, and positions to the left and right have angular positions greater than zero degrees.
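The angular-position measurement described above can be sketched as a small calculation, purely for illustration (it is not part of the disclosure; the coordinate convention and function name are assumptions). The bridge of the nose is taken as the origin of the horizontal plane:

```python
import math

def angular_position_deg(lateral_mm, forwards_mm):
    """Angle of a visual element from the centre of vision, in degrees.

    Measured in the horizontal plane with the origin at the bridge of the
    nose: lateral_mm is the distance to the left or right (always taken as
    positive), forwards_mm is the distance ahead of the nose bridge
    (negative if the element sits behind it). Straight ahead is 0 degrees;
    level with the nose bridge is 90 degrees; angles above 90 degrees lie
    behind it.
    """
    return math.degrees(math.atan2(abs(lateral_mm), forwards_mm))
```

An element 50 mm to the side and 50 mm forwards of the nose bridge sits at 45 degrees; an element level with the nose bridge sits at 90 degrees, within the typical 60 to 110 degree monocular band.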
Preferably (during a training session) a vertical position of the visual stimuli provided is close to the vertical position (height) of the user's eyes, preferably being within 5 cm (above or below) of the vertical position of the user's eyes, preferably within 2 cm, preferably within 1 cm (preferably as measured in the vertical direction from the bridge of the user's nose, which generally aligns with the middle of a user's eye). Whether above, below, or exactly in line with the bridge of the user's nose, the angular position of a visual stimulus provided in the left or right direction can be measured as above, by measuring the angle between the forwards direction from the bridge of the user's nose and the visual stimulus, along a horizontal plane (so as to measure the angular position along that horizontal plane which the visual stimulus lies directly above or below).
In embodiments, (during a training session) visual stimuli provided to the left and right monocular regions simultaneously are provided at an equal angular position to the left and right from the centre of a user's vision (such that the left and right visual stimuli are provided at the same angular position as one another).
In other words, the left and right visual stimuli provided simultaneously are preferably provided at a same distance to the left and right of the bridge of the user's nose, and at a same distance forwards or backwards of the bridge of the user's nose as one another.
The Applicant has recognised in this regard, that providing visual stimuli simultaneously at an equal angular position to the left and right from the centre of the user's vision can help the user to retain a relaxed centred gaze. Providing visual stimuli at equal angular positions may be generally more relaxing than providing visual stimuli which are mismatched in angular position.
In embodiments, (during a training session) visual stimuli provided to the left and right monocular regions simultaneously are provided at a same vertical position as one another.
Visual stimuli at a same vertical position may be generally more relaxing than visual stimuli which are mis-matched in vertical position.
As will be discussed in more detail below, during a training session, the angular position of left and right visual stimuli provided simultaneously may vary.
During a training session, the vertical position (height) of the left and right visual stimuli provided simultaneously may (also) vary. Alternatively (and preferably), each pair of left and right visual stimuli provided simultaneously could be provided at a same vertical position (height).
Aspects of the present invention comprise a head mounted training device for providing visual stimuli. Likewise, the method of the invention may be performed using a training device such as a head mounted device.
The visual stimuli provided may be any suitable and desired stimuli which are visually discernible by a user. Each visual stimulus preferably comprises provision of one or more of: a colour, intensity, texture, size, shape, localised movement or other visual quality by a visual element. Different visual stimuli may be provided by changing one or more such qualities (e.g. colour) of the visual element.
The visual elements could be mechanical elements. However, preferably, the visual elements are light elements.
Thus, preferably the training device (head mounted device) comprises one or more visual elements, more preferably comprising one or more light elements.
Preferably, providing visual stimuli comprises activating one or more visual elements, preferably activating one or more light elements (of the head mounted device), e.g. at a desired angular position so as to provide visual stimuli at that angular position. Activating one or more light elements preferably comprises illuminating the one or more light elements (e.g. with a coloured light).
The Applicant has found that light elements may be particularly effective for providing visual stimuli in the left and right monocular regions of a user's vision, as these can be readily discernible despite the user having low visual acuity in these regions.
The one or more light elements could be any suitable and desired light elements. The one or more light elements could comprise a continuous light element (which spans a range of angular positions), for example such as a screen, e.g. an LCD or plasma screen or projection onto a screen. In this case, providing a visual stimulus preferably comprises illuminating a portion of the continuous light element, e.g. in a particular colour, shape or pattern.
Alternatively, the one or more light elements could comprise discrete light elements (which are provided at discrete angular positions), for example such as individual or groups of lights, e.g. light emitting diode (LED) lights. In this case, providing a visual stimulus preferably comprises illuminating one or more of the discrete light elements (by illuminating individual or groups of the discrete light elements).
In preferred embodiments, the one or more light elements comprises one or more (variable colour) LED lights.
The one or more light elements are preferably activated (illuminated) to provide visual stimuli within the left and right monocular regions of a user's vision, as discussed above.
Thus, (when the training device is in a training position) the one or more light elements preferably span a range of (e.g. are provided at plural) angular positions within the left and right monocular regions of a user's vision, preferably within about 60 to about 110 degrees from the centre of vision in the left and right directions. Accordingly, the one or more light elements preferably span a range of (e.g. are provided at plural) positions forwards and/or backwards relative to the bridge of the user's nose.
In embodiments, the one or more light elements are present only within the left and right monocular regions of a user's vision (and preferably are positionable so as to be present only within the left and right monocular regions of a user's vision).
Alternatively, in embodiments, light elements could also be present outside of the left and right monocular regions (e.g. in the binocular region) of a user's vision, but preferably light elements are not activated (visual stimuli are not provided) at angular positions outside the left and right monocular regions during a training session (in a training sequence of visual stimuli). In such embodiments, the system may be configurable to determine (the method may include determining) which angular positions (e.g. which discrete light elements) fall within the left and right monocular regions of a user's vision, and during a training session (in a training sequence) activate light elements at those angular positions only. In this regard, the system may be configured to perform a calibration routine or receive a user input in order to identify the angular positions falling within the user's left and right monocular region of vision, and accordingly determine which angular positions light elements should be activated at during a training session and/or for a training sequence.
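The calibration step described above, selecting which discrete light elements fall within the user's monocular regions, can be sketched as a simple filter. This is an illustrative sketch only: the default bounds are the typical 60 to 110 degree values given earlier, and the function name and data representation are assumptions:

```python
def monocular_elements(element_angles_deg, inner_deg=60.0, outer_deg=110.0):
    """Indices of light elements lying within the monocular band.

    element_angles_deg: angular position of each element (degrees from
    the centre of vision), e.g. obtained from a calibration routine or
    entered by the user. The inner/outer bounds default to the typical
    60-110 degree monocular range but can be tuned per user. Only the
    elements returned here would be activated during a training session.
    """
    return [i for i, angle in enumerate(element_angles_deg)
            if inner_deg <= angle <= outer_deg]
```

For an array whose elements sit at 30, 65, 90 and 115 degrees, only the middle two would be activated; widening or narrowing the bounds adapts the selection to an individual user's vision range.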
For discrete light elements, e.g. LEDs, the discrete light elements preferably comprise an array of light elements, the light elements being provided at a plurality of left and right angular positions (when the training device is in a training position).
For example the array of discrete light elements, e.g. LEDs, could form a single row of light elements, having a same vertical position (height) such that each row extends horizontally, e.g. at a vertical position close to that of the user's eyes. Alternatively, the discrete light elements could form plural (e.g. two, three, or more, e.g. up to five) horizontally extending rows of light elements, each row at a different vertical position (height) close to the height of the user's eyes. Alternatively, other grouping or patterns of discrete light elements could be provided within the array of light elements.
The Applicant has found that visual elements, e.g. such as LED light elements, can suitably be incorporated into a head mounted device. The Applicant has furthermore found that a head mounted device can provide a compact and portable form for providing visual stimuli, which is accessible to everyday users. In this way, the head mounted device can be used in any environment throughout the day as desired to provide training of peripheral vision and relaxation of gaze. In this regard, the present device does not necessarily require large static components, such as PCs or cameras, or complex hardware which needs to be finely tuned in a laboratory setting. Due to the compact and portable form, the head mounted device can be of equal use to professional athletes who require short training sessions interspersed throughout the day, as well as to office workers who require a screen break and rest for their eyes after an intensive period of attention on screens, and would benefit from a short session where focus is softened and attention moved to the periphery.
In this regard, the head mounted device may be any suitable and desired device which is configured to be mounted to a user's head. In embodiments, the head mounted device is mountable (indirectly) to a user's head by (removably) mounting on an item of headwear, such as for example a pair of over-head head phones, a headband, a hat, or a pair of glasses, or the like.
During a training session, the visual elements (e.g. LED lights) (of the head mounted device) are (accordingly) provided in proximity to a user's head (and eyes), preferably within a distance of about 150 mm from a user's left and right eyes respectively, preferably within a distance of about 100 mm, preferably within about 80 mm, preferably within about 70 mm. In embodiments the distance is from about 40 mm to about 60 mm.
In embodiments, one or more (or all) of the visual elements (which are to be activated during a training session) are provided (are configurable to be provided) at a distance of at least about 5 mm from the user's left and right eyes respectively, preferably at a distance of at least about 10 mm, preferably at least about 20 mm, preferably at least about 30 mm.
Preferably the head mounted device comprises a pair of elongate members, wherein one or more visual elements (light elements) are provided on each elongate member of the pair. Thus, preferably the head mounted device comprises a left elongate member comprising one or more left visual elements, and a right elongate member comprising one or more right visual elements. The one or more visual elements on an elongate member together form a visual display unit.
The pair of elongate members are preferably formed integrally with or mountable (attachable) to an item of headwear, such as one or more of: a pair of over-head headphones, a headband, a hat, a pair of glasses or the like. In an embodiment, the pair of elongate members may be mountable to a rim of and/or the arms of a pair of glasses. In another embodiment, the pair of elongate members are mountable or integrated within a brim of a cap. In another embodiment, the pair of elongate members are formed integrally with or are attachable to a pair of over-head headphones.
In this regard, the term 'elongate' typically indicates that each elongate member is longer than it is wide (has a length which is greater than its width). The visual display unit on each elongate member is preferably also elongate. Preferably, each elongate member (and visual display unit) is at least 2 times as long as it is wide (has a length which is at least twice its width), preferably at least 3 times as long as it is wide (has a length which is at least three times its width). In this regard, the length corresponds to the average length of the elongate member (or visual display unit) measured along the member (or visual display unit) from one end to the other, and the width corresponds to the average width of the elongate member (or visual display unit) measured across the member (or visual display unit) from one side to the other.
In embodiments, the length of the visual display unit is at least about 50 mm, preferably at least about 60 mm, preferably at least about 70 mm. The length of the visual display unit may be less than about 150 mm, preferably less than about 100 mm. In embodiments the width of the visual display unit is at least about 1 mm, and in embodiments less than about 50 mm, preferably less than about 20 mm. For example, for a visual display unit having (only) a single row of LED lights, the width of the visual display unit may be about 6 mm, whereas for two rows of LED lights the width may be about 11 mm.
The length of each of the elongate members is, in embodiments, greater than or equal to the length of the visual display unit. In embodiments the length of each elongate member is up to about 150 mm, preferably up to about 130 mm. For example, for elongate members attached or mountable to a pair of glasses, the length of the elongate members may be about 100 mm, whereas for elongate members attached or mountable to a set of headphones the length of the elongate members may be about 130 mm (however, other lengths could be used if desired).
If the elongate members (or visual display unit) are curved, then the length and width are measured along (so as to include) the curve.
In embodiments, each elongate member comprises an attachment means by which the elongate member is attached to or mountable to an item of headwear. In embodiments, the attachment means (point of attachment) is towards one end of the elongate member (in the length direction), preferably with the other end (in the length direction) of the elongate member being free (such that the elongate member is cantilevered). Alternatively, the elongate members could be attached or mountable to an item of headwear in any other suitable and desired manner.
The visual elements provided on each elongate member may be of the form discussed above. For example, each elongate member may comprise a continuous visual element which extends along at least part of the elongate member. Alternatively (and preferably), each elongate member may comprise an array of discrete light elements (e.g. LEDs), the array of discrete light elements (e.g. LEDs) extending along at least part of the elongate member.
Preferably the one or more visual elements on the left and right elongate member are provided at the same relative positions along the elongate members (and thus can be activated to provide left and right visual stimuli at a same angular position as one another). Thus preferably, the one or more visual elements on the left and right elongate members are mirror images of one another.
When mounted on a user's head in a training orientation, for performing a training session, the elongate members are preferably oriented so as to extend substantially horizontally (in a substantially horizontal plane), and are preferably positioned substantially at the height of the user's eyes. In the training orientation, preferably the one or more visual elements on each elongate member are positioned to allow provision of visual stimuli at a plurality of angular positions to the left and right of the centre of the user's field of vision (and accordingly at a plurality of positions forwards and/or backwards of the bridge of the user's nose). In the case of one or more rows of discrete light elements provided on an elongate member, in the training orientation, the rows of light elements extend substantially horizontally.
In embodiments, each elongate member is curved (along its length), such that in the training configuration each elongate member is curved in the horizontal plane, so as to at least partly wrap around the user's head. This may facilitate positioning (and in embodiments the elongate members are configured to position) one or more (or all) visual elements on the left and right elongate members at approximately the same distance from the user's left and right eyes respectively, if desired for viewing comfort.
In embodiments, the pair of elongate members are positionable (moveable) so as to position the one or more visual elements within (and preferably only within) the left and right monocular regions of the user's vision. In embodiments, the elongate members are extendible and retractable (along their length) so as to alter the position of the one or more visual elements, e.g. via a telescopic mechanism or other suitable mechanism. Alternatively, the pair of elongate members may be mountable at (and movable to) different positions (e.g. forwards and backwards) on a head worn device (e.g. different positions along the arms of a pair of glasses) so as to alter the position of the one or more visual elements. In embodiments, the pair of elongate members may be rotatable (e.g. about or near their attachment means) and/or distortable (bendable) along at least part of their length (e.g. between the attachment means and the visual display unit). This adjustability allows the head mounted device to be adapted for providing visual stimuli in the monocular region, for example for users with different vision ranges and nose shapes.
Preferably, in the training orientation, the elongate members do not extend into the binocular region of the user's vision. Thus, preferably, in the training orientation there is a gap between the elongate members, and preferably an angular gap (measured along the horizontal meridian from a bridge of the user's nose) of at least 20 degrees, preferably at least 60 degrees, preferably at least 90 degrees (and preferably up to 120 degrees).
In embodiments, the pair of elongate members are movable relative to the head-worn device, so as to move the elongate members away from a training orientation (and preferably into a stowed orientation, preferably where the elongate members substantially cannot be seen by a user). For example, in embodiments (e.g. where the elongate members are mountable to a pair of glasses or integrated into a pair of headphones), the elongate members may be rotatable upwards away from a training orientation when training is not being performed, and rotatable downwards into a training orientation when a training session is desired to be performed. In embodiments where the elongate members are integrated within or attachable to a pair of over-head headphones, the elongate members may be configured to be rotated upwards to align with (e.g. to be stowed within) the head-band portion of the headphones. The elongate members may additionally (or alternatively) be foldable, or otherwise collapsible into a smaller form when a training session is not being performed.
The Applicant has recognised that a pair of elongate members each comprising one or more visual elements (e.g. light elements) having one or more of the features discussed above, may provide a lightweight and adjustable means for providing visual elements to the left and right monocular regions of a user's vision. The elongate members may be less intrusive than, e.g., a conventional virtual reality or augmented reality display headset which is designed to fill the user's entire field of vision. Accordingly, a user may be able to continue wearing the head-mounted device having the pair of elongate members for a desired period of time for the training session, and also between training sessions, without the pair of elongate members causing distraction or discomfort.
As discussed above, the methods and systems disclosed herein allow left and right visual stimuli to be provided simultaneously, preferably at a same (angular) position as one another. The visual stimuli can preferably be provided at a range of one or more (angular) positions (e.g. by activating left and right visual elements at a desired (angular) position).
Preferably (during a training session), a sequence of left and right visual stimuli are provided. The sequence may be referred to herein as a 'training sequence', since it is provided for the purpose of training the user's peripheral vision.
In embodiments (in the 'training sequence'), left and right visual stimuli provided simultaneously are provided at a plurality of (angular) positions in turn (and accordingly preferably at a plurality of positions in the forwards and/or backwards direction, relative to the position of the bridge of the user's nose). The plurality of positions at which visual stimuli are provided form a sequence of positions.
One or more (different) visual stimuli may be provided at each (angular) position in the training sequence, for example by changing one or more qualities of visual stimuli provided at a position and/or among the positions. Thus, in embodiments, during a training session, one or more qualities of visual stimuli provided are permitted to vary (and preferably do vary).
As will be discussed in more detail below, a user may (the system may be configured to allow a user to) control various parameters for the training sequence (e.g. the positions at which visual stimuli are provided and/or the qualities of the visual stimuli to be provided). This may allow fine, granular, user control of the training sequence.
Alternatively, or additionally, the user may interact with the system at a (more) abstracted level. For example, in embodiments, a user may (the system is configured to allow a user to) select a training program from a plurality of training programs. Each training program may comprise (differ in) one or more 'training' sequences of visual stimuli that it provides (e.g. with respect to the order of positions and/or the qualities of stimuli provided, e.g. as will be described in more detail below). A training program could comprise a plurality of different 'training sequences', each forming an 'exercise' for training the user. The training sequence(s) of a training program could be (and in embodiments are) provided with a (particular) soundtrack and/or with a sequence of (e.g. audio) instructions (e.g. directing the user to interact with the system in a particular way during the training sequence). For example, the training program could be an energising program choreographed to upbeat dance music, or a relaxing programme choreographed to forest sounds, or a session choreographed to a recorded (e.g. meditative) instruction soundtrack.
During a training session, the one or more qualities of the visual stimuli which are varied may comprise one or more of: a colour, intensity, texture, size, shape, or localised motion of the visual stimulus.
For example, for a visual stimulus provided by a light element, a 'texture' of a visual stimulus may correspond to a texture or pattern of light formed by the light element. Intensity may be a colour intensity (saturation) and/or a brightness of a light element when activated. Localised motion may be motion about (e.g. centred on) a particular position.
In embodiments where the visual stimuli are provided by light elements (e.g. LED lights), a quality (and in embodiments the only quality) of the visual stimuli which is permitted to vary is a colour.
During a training session, one or more qualities may be permitted to differ (be mismatched) between left and right visual stimuli provided simultaneously. Accordingly, the system may be configured to control the one or more qualities of the left and right visual stimuli independently. For example, in the case of light elements having a quality which is the colour, a left light element may be activated to be a particular colour (e.g. green), whilst a right light element may be activated to be a different colour (e.g. blue). However, regardless of whether or not one or more qualities differ (are mis-matched), the left and right visual stimuli provided simultaneously are preferably provided at a same angular position as one another.
The Applicant has found that providing left and right visual stimuli having a same quality (and preferably having identical qualities) as one another (e.g. having the same colour) is generally more relaxing than having mis-matched qualities (e.g. having different colours).
Accordingly, preferably for the majority of time during a training session, the left and right visual stimuli are provided with one or more (or preferably all) qualities being the same (e.g. having a same colour). Thus, preferably left and right visual stimuli with the same quality (or qualities) are provided more often than left and right visual stimuli with a differing quality (or qualities).
In embodiments, e.g. in advance of commencing a training session, a user is permitted to choose (the system is configured to receive a user selection for) the one or more qualities which are to be varied during a training session. For example, the user may be permitted to choose that colour is to be varied, and to choose which colours are to be provided. For example, a user could select, e.g., blue, green, and purple visual stimuli to be provided (and not red and orange visual stimuli). Alternatively, one or more qualities of the visual stimuli may depend on a training program selected by the user (e.g. an 'energising' program or a 'relaxing' program).
In embodiments, during a training session, one or more of the qualities of (e.g. the colour of) visual stimuli provided vary randomly (e.g. being selected according to a weighted random selection). In this way, the quality (e.g. colour) which is to be provided is not predictable by the user, which may improve user attention when using the device.
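By way of illustration only, the weighted random selection of a quality (here, colour) described above may be sketched as follows. This is a minimal sketch, not taken from the specification: the colour names, weights, and function name are assumptions.

```python
import random

# Illustrative sketch of selecting the colour of the next pair of stimuli by
# weighted random choice, so that the next colour is not predictable by the
# user. The colour palette and weights are hypothetical.
def next_colour(colours, weights, rng=random):
    """Pick the colour of the next simultaneous left/right stimulus pair."""
    return rng.choices(colours, weights=weights, k=1)[0]

colours = ["blue", "green", "purple"]   # e.g. a user-selected palette
weights = [5, 3, 2]                     # blue shown most often
chosen = next_colour(colours, weights)
```

Because the selection is weighted rather than uniform, some colours recur more often while occurrences remain unpredictable, consistent with the attention-improving effect described above.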
Preferably, during a training session, each visual stimulus (of the training sequence) is provided for a discernible period of time, to allow the user to perceive the visual stimulus.
Preferably each visual stimulus is provided for a period of at least about 0.1 seconds, preferably at least about 0.5 seconds, preferably at least about 1 second. Preferably, each visual stimulus (of the training sequence) is provided for at most 60 seconds, preferably at most about 20 seconds, preferably at most about 10 seconds, preferably at most about 5 seconds (such that the user does not lose attention to the visual stimuli). In embodiments, each visual stimulus is provided for a time from about 0.5 seconds to about 20 seconds. Preferably the amount of time that a visual stimulus is provided for is the same for each visual stimulus (in the 'training sequence'). In other words, the quality (or qualities) of visual stimuli preferably change at regular intervals in time. This can provide a relaxing effect.
Alternatively, in embodiments, the amount of time that a visual stimulus is provided for may be permitted to vary (varies), e.g. varying randomly (however, left and right visual stimuli presented simultaneously will preferably be provided for the same amount of time as each other). In this regard, the amount of time that a visual stimulus is provided for could be selected according to a weighted randomised amount of time. In that case, the amount of time for which a visual stimulus is presented will not be predictable by the user, which may help to improve user attention when using the training device.
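A weighted randomised stimulus duration of the kind just described may be sketched as follows, kept within the preferred bounds of about 0.5 to about 20 seconds. The candidate durations and weights are illustrative assumptions only.

```python
import random

# Hypothetical sketch of a weighted randomised stimulus duration. Both left
# and right stimuli of a simultaneous pair would be shown for this duration.
DURATIONS = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]  # seconds (assumed candidates)
WEIGHTS = [1, 3, 4, 3, 2, 1]                  # mid-range durations favoured

def next_duration(rng=random):
    """Display time for the next pair of simultaneous left/right stimuli."""
    return rng.choices(DURATIONS, weights=WEIGHTS, k=1)[0]
```

Drawing the duration from a weighted set keeps the timing within comfortable bounds while remaining unpredictable to the user.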
In embodiments, the user is able to control (select) a duration of the visual stimuli. In embodiments, this is achieved by a user controlling (selecting) the amount of time between (rate of) visual stimuli. This duration (or rate) may be selected as desired by the user, for comfortable use of the system. Alternatively, the user may be able to control a duration of the visual stimuli by selecting a training program (e.g. an 'energising' program or a 'relaxing' program) and/or by selecting a rhythmic (e.g. musical) soundtrack to be provided with the training sequence (wherein the system may be configured to provide the visual stimuli in synchronisation with the beat of the soundtrack). Accordingly, the system is preferably configured to receive a user input for controlling the duration (or rate) of visual stimuli.
Preferably, when performing a training session (in the 'training sequence' of visual stimuli), a spacing between the left and right visual stimuli provided simultaneously increases with increasing time, and/or based on a user response.
In this regard, the spacing between the left and right visual stimuli provided simultaneously preferably corresponds to the angular spacing (angular distance) between the left and right visual stimuli as measured to the left and right from the centre of the user's vision (from the bridge of the user's nose, along the horizontal meridian). The spacing between the left and right visual stimuli thus corresponds to the sum of the angular positions of the left and right visual stimuli. Thus, increasing the spacing between left and right visual stimuli comprises providing left and right stimuli which are further apart from one another along the horizontal meridian.
Thus, preferably increasing the spacing between the left and right visual stimuli provided comprises increasing the angular position of the left and right stimuli (and correspondingly providing left and right stimuli which are further backwards). Conversely, decreasing the spacing between the left and right visual stimuli provided preferably comprises decreasing the angular position of the left and right stimuli (and correspondingly providing left and right stimuli which are further forwards).
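The spacing arithmetic just defined can be expressed directly. A minimal sketch (angles in degrees, measured from the bridge of the nose along the horizontal meridian, as above):

```python
# The spacing between simultaneous left and right stimuli is the sum of
# their angular positions, each measured from the centre of the user's
# vision along the horizontal meridian.
def spacing(left_angle, right_angle):
    """Angular spacing (degrees) between a simultaneous left/right pair."""
    return left_angle + right_angle

# e.g. left and right stimuli each at 70 degrees are 140 degrees apart
assert spacing(70, 70) == 140
```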
The Applicant has recognised that, during the course of a training session, the user may become more relaxed and may become receptive to left and right visual stimuli which are deeper within their peripheral vision (and accordingly at wider angles, and further backwards within the peripheral vision). By increasing the spacing between the left and right visual stimuli, an increasingly wide visual field of the user can be trained.
In embodiments, increasing the spacing between (the angular position of) the left and right visual stimuli provided is performed in a defined, preferably predetermined manner (automatically, without receiving user input during the training session). For example, the spacing between the left and right visual stimuli provided may be increased according to a defined (e.g. predetermined) sequence of positions.
For example, the predetermined sequence of positions may progress from a predetermined initial (minimum) angular position of the left and right visual stimuli, to a final (maximum) angular position of the left and right visual stimuli, e.g. according to a predetermined pattern of positions. A user may be able to select in advance of a training session (the system is configured to receive a user input for) one or more of: a minimum angular position, a maximum angular position, and a pattern of positions (for example by the user selecting these parameters directly, or by selecting a desired training program).
Alternatively, the spacing between (angular position of) the left and right visual stimuli could be increased based on a user input during the training session. The spacing between (angular position of) the left and right visual stimuli could also be decreased based on a user input during the training session.
The user input which is used to increase and/or decrease spacing between left and right visual stimuli may comprise an active (conscious) user input, comprising a user actively interacting with the system, e.g. to select appropriate parameters. Alternatively, the user input may comprise a passive (subconscious) user input, for example an input detected by a suitable sensor.
For example, the user input may comprise a user selecting (e.g. adjusting) one or more positions at which the user desires visual stimuli to be provided, and the system may accordingly provide visual stimuli at positions among those one or more positions. Alternatively, as will be discussed in further detail below, the user input could be a sensed or user-reported level of relaxation of the user, and/or an input indicative of the user's perceptiveness to the visual stimuli. In this regard, the spacing between (angular position of) left and right visual stimuli is preferably increased when it is determined that the user has a higher level of relaxation and/or better perceptiveness to the visual stimuli (and conversely the spacing between left and right visual stimuli is preferably decreased when it is determined that the user has a lower level of relaxation and/or worse perceptiveness to the visual stimuli). Other user input(s) could also or instead be used, if desired.
In an embodiment, increasing (or conversely decreasing) the spacing between the left and right visual stimuli comprises providing a pair of left and right stimuli which have a larger (or conversely smaller) spacing compared to one or more previous (preferably immediately preceding) pairs of left and right stimuli.
In this regard, it is possible to increase (or decrease) the spacing between left and right visual stimuli compared to immediately preceding pair(s) of stimuli (e.g. after a predetermined period of time and/or responsive to a user input, e.g. responsive to a user input indicative of level of relaxation and/or level of perceptiveness), and in embodiments this is done.
Alternatively, in embodiments, increasing (or decreasing) the spacing between left and right visual stimuli (e.g. over time and/or in response to user input) is done gradually such that there is an overall trend of increasing (or decreasing) the spacing between left and right visual stimuli.
For example, the positions at which left and right visual stimuli are provided could be determined (selected) on a weighted basis, and increasing the spacing between left and right visual stimuli could comprise increasing the weighting (and therefore the rate of occurrence) of positions which have a larger angular spacing (and are positioned further backwards). Conversely, decreasing the spacing between left and right visual stimuli could comprise increasing the weighting of positions which have a smaller angular spacing (and are positioned further forwards).
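The weighted selection of positions just described may be sketched as follows. The position values, initial weights, and scale factor are assumptions for illustration; the point is that scaling up the weights of the wider positions increases their rate of occurrence.

```python
import random

# Illustrative sketch of weighted position selection. Widening the trend of
# the training sequence corresponds to scaling up the weights of the larger
# angular positions (further backwards in the peripheral vision).
def choose_position(positions, weights, rng=random):
    """Select the angular position for the next stimulus pair."""
    return rng.choices(positions, weights=weights, k=1)[0]

def widen(weights, factor=1.5):
    """Scale up the weights of the wider (later-listed) half of positions."""
    mid = len(weights) // 2
    return weights[:mid] + [w * factor for w in weights[mid:]]

positions = [60, 70, 80, 90, 100]  # degrees from the centre of vision
weights = widen([4, 3, 2, 1, 1])   # -> [4, 3, 3.0, 1.5, 1.5]
```

A corresponding `narrow` step would scale up the weights of the smaller angular positions instead, producing the converse trend.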
In embodiments, a gradual increase or decrease in the spacing between left and right visual stimuli is achieved by providing one or more cycles of visual stimuli (by performing one or more cycles of operation), wherein in each cycle visual stimuli are provided at one or more positions within a defined range of one or more positions. In embodiments, the position(s) at which left and right stimuli are provided is permitted to vary (can be altered) between cycles of activation, preferably by altering either or both of: the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
Preferably, varying the position(s) at which visual stimuli are provided comprises altering the position(s) for a cycle compared to a previous (preferably immediately preceding) cycle of visual stimuli.
In this manner, a spacing between left and right visual stimuli can be increased or decreased in graduated steps by changing position(s) for visual stimuli across one or more cycles. The Applicant has found that this allows a user to soften their gaze gradually, promoting a heightened sense of relaxation and calm.
Thus, in embodiments, increasing (or conversely decreasing) the spacing between left and right visual stimuli comprises either or both of: increasing (or conversely decreasing) the closest and/or furthest spacing of left and right visual stimuli within the range of one or more positions for a cycle; or increasing (or conversely decreasing) an (angular) position of one or more of the position(s) at which visual stimuli are provided within the range of one or more positions for a cycle.
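The first of these options, widening the defined range of positions from one cycle to the next, may be sketched as follows. The step size and angular limit are assumed values, not taken from the specification.

```python
# Minimal sketch of widening a cycle's range of angular positions by
# increasing its furthest (and optionally its closest) position between
# cycles. Angles are in degrees from the centre of vision.
def next_cycle_range(closest, furthest, step=10, widen_closest=False, max_angle=110):
    """Return the (closest, furthest) angular range for the next cycle."""
    new_furthest = min(furthest + step, max_angle)
    new_closest = min(closest + step, new_furthest) if widen_closest else closest
    return (new_closest, new_furthest)

# e.g. a first cycle spanning 60-80 degrees widens to 60-90, overlapping it
assert next_cycle_range(60, 80) == (60, 90)
assert next_cycle_range(60, 80, widen_closest=True) == (70, 90)
```

Keeping the closest position fixed while extending the furthest position gives overlapping successive ranges, consistent with the smooth increase described below.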
The defined range of one or more positions for a cycle of visual stimuli preferably comprises a range of one or more angular positions (and accordingly a range of positions in the forwards and/or backwards directions).
Preferably, the range of one or more positions for a cycle comprises a range of one or more left positions for left visual stimuli, and one or more right positions for right visual stimuli. Preferably, the range of left and right position(s) are a mirror image of one another relative to the centre of the user's vision, preferably such that left and right visual stimuli provided at positions in the range can be (and are) provided simultaneously at an equal angular position relative to the centre of the user's vision.
In each cycle, the left and right visual stimuli are preferably provided by activating appropriate visual element(s), e.g. such as those visual element(s) described above. For discrete visual elements, e.g. LEDs, the range of one or more positions for a cycle preferably encompasses one or more discrete visual elements (at one or more different angular positions).
It would be possible to provide visual stimuli at each and every possible (angular) position within the range of one or more positions for a cycle (e.g. to activate each and every discrete visual element falling in the range during a cycle), and in embodiments this is done. In other words, in embodiments, during a cycle, visual stimuli may be provided at position(s) which are (all) adjacent one another.
Alternatively, during a cycle, visual elements may be provided at one or more positions within the range of positions comprising (forming) a sub-set of the possible positions in the range (e.g. such that a sub-set of visual elements falling in the range are activated during a cycle). In other words, one or more positions in the range of positions for a cycle may be skipped and no visual stimuli provided at those positions. In other words, during a cycle, one or more visual stimuli may be provided at (angular) positions which are spaced apart from one another (in the left direction or the right direction respectively for left or right visual stimuli).
As noted above, one or more positions at which visual stimuli are provided may be varied between cycles of visual stimuli by altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle. This may comprise adding or removing one or more positions at which visual elements are provided. In other words, this may comprise altering (e.g. adding or removing) one or more positions in the range of positions for a cycle which are skipped and at which no visual stimuli are provided. In other words, this may comprise increasing a spacing between the (angular) positions of one or more of the (respective left or right) visual stimuli provided in the cycle.
As noted above, one or more positions at which visual stimuli are provided may additionally or alternatively be varied between cycles of visual stimuli by altering the closest and/or furthest (angular) spacing of left and right visual stimuli in the range of one or more positions forming a cycle. In this regard, altering the closest spacing comprises altering a smallest angular position within the range of left and/or right positions (altering the furthest forward position), and conversely altering the furthest spacing comprises altering the largest angular position within the range of left and/or right positions (altering the furthest backwards position).
For example, the range of position(s) for a first cycle of operation (a first cycle of visual stimuli provided in a training session) may comprise a closest possible spacing in the monocular region (of the possible positions at which the system is able to provide left and right visual stimuli within the monocular region), e.g. corresponding to an angular position of about 60 degrees. The range of position(s) for later cycles of operation may comprise position(s) which are further apart.
The defined ranges of one or more positions for different (e.g. successive) cycles of visual stimuli could be non-overlapping, or could overlap.
Likewise, the discrete visual element(s) which fall within the range of position(s) for different (e.g. successive) cycles could include none, or one or more of the same discrete visual elements.
In embodiments, (e.g. to allow a smooth increase/decrease in the spacing of left and right visual stimuli among the cycles), there is at least some overlap between the range of position(s) for successive cycles of visual stimuli. For example, each range of one or more positions forming a cycle could have a same closest spacing between left and right visual stimuli, but could differ in the furthest spacing between left and right visual stimuli. Alternatively, each range of one or more positions could have a different closest spacing between left and right visual stimuli and a different furthest spacing between left and right visual stimuli compared to a preceding cycle. Other permutations are also possible.
As noted above, increasing (or decreasing) the spacing between left and right visual stimuli may be achieved by increasing (or decreasing) the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle. Increasing (or decreasing) the spacing between left and right visual stimuli may also (or instead) be achieved by altering the one or more positions at which visual stimuli are provided within a cycle. In embodiments, increasing (or conversely decreasing) the spacing between left and right visual stimuli is achieved by moving one or more visual stimuli to a larger (or conversely smaller) angular position within the range of positions for the cycle. In embodiments, increasing (or conversely decreasing) the spacing between left and right visual stimuli comprises increasing (or conversely decreasing) the average angular position of visual stimuli within the cycle (wherein the average angular position of visual stimuli can be calculated as the sum of the magnitudes of the angular positions at which left and right visual stimuli are provided during a cycle, divided by the number of positions at which visual stimuli are provided during a cycle).
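The average angular position defined above can be computed directly from the cycle's left and right positions (a minimal sketch, angles in degrees):

```python
# Average angular position of a cycle, as defined above: the sum of the
# magnitudes of the angular positions of the left and right stimuli
# provided during the cycle, divided by the number of positions used.
def average_angular_position(left_angles, right_angles):
    angles = [abs(a) for a in list(left_angles) + list(right_angles)]
    return sum(angles) / len(angles)

# e.g. left and right stimuli at 60 and 70 degrees on each side
assert average_angular_position([60, 70], [60, 70]) == 65.0
```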
In an embodiment, increasing (or conversely decreasing) the spacing between left and right visual stimuli between cycles comprises increasing (or conversely decreasing) the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle, and also increasing (or conversely decreasing) the spacing between the positions of one or more of the visual stimuli provided in the cycle. This may have an overall effect of widening (or conversely narrowing) the cycle. Other variations for altering the positions at which visual stimuli are provided are also possible.
Preferably in each cycle of operation (for each cycle of visual stimuli), left and right visual stimuli are provided according to a sequence of positions.
Preferably, during a cycle, left and right visual stimuli are provided at a sequence of positions of progressively increasing spacing (are provided at progressively increasing angular positions, and accordingly progressively further back), the sequence preferably progressing from a closest spacing (smallest angular position, furthest forward position) in the range of one or more positions to a furthest spacing (largest angular position, furthest backwards position) in the range of one or more positions. In this way, a 'wave' of visual stimuli of increasing spacing is provided. At the end of each 'wave', the stimuli will preferably have reached the furthest extreme of peripheral vision that the user desires or that the program dictates at that time.
The sequence of visual stimuli provided at increasing positions (the 'wave' of stimuli) may be repeated one or more times within a cycle. In this regard, the Applicant has recognised that, regardless of whether the spacing of visual stimuli is increased or decreased between successive cycles, by providing 'waves' of visual stimuli which increase in spacing within each cycle, a relaxing effect which encourages user awareness of the peripheral vision can still be achieved.
As noted above, in a cycle, visual stimuli could be provided at each and every possible position within the defined range of position(s) for the cycle, or at a selection of positions within the range. In either case, the 'wave' of stimuli may progress through the relevant positions at which visual stimuli are to be provided in the cycle.
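The 'wave' of a cycle described above may be sketched as a simple generator; the positions passed in are whichever positions (all, or a sub-set) are to be used in the cycle. This is an illustrative sketch only.

```python
# Illustrative sketch of one cycle's 'wave': the cycle's positions are
# visited in order of increasing spacing, from the closest to the furthest,
# and the wave may be repeated one or more times within the cycle.
def wave(positions, repeats=1):
    """Yield angular positions (degrees) for a cycle as one or more waves."""
    ordered = sorted(positions)
    for _ in range(repeats):
        for p in ordered:
            yield p  # left and right stimuli shown simultaneously at p

assert list(wave([80, 60, 70], repeats=2)) == [60, 70, 80, 60, 70, 80]
```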
As mentioned above, in embodiments, a user's perceptiveness to visual stimuli is determined and is used to increase (or decrease) the spacing between left and right visual stimuli. This increase (or decrease) in spacing may be done in any suitable and desired manner, e.g. such as using cycles of stimuli as described herein.
Determining a user's perceptiveness to the visual stimuli may alternatively be advantageous in its own right, without being used to increase or decrease the spacing between left and right visual stimuli (which may proceed, for example according to a predefined sequence of positions, or may for example be responsive to a different user input, e.g. indicative of a level of user relaxation).
In embodiments, a user's perceptiveness is determined (the system is configured to determine the user's perceptiveness) based on a user identifying a target characteristic of left and/or right visual stimuli provided. The target characteristic preferably comprises a target quality for a visual stimulus, a matched (identical) quality between left and right visual stimuli provided simultaneously, or a mismatched quality between left and right visual stimuli provided simultaneously.
The target, matched, or mismatched quality may be any one or more of the qualities of visual stimuli described above. For example, a target quality could be a particular colour (e.g. green) visual stimulus. A matched quality could be a matched colour (e.g. a green left visual stimulus provided simultaneously with a green right visual stimulus). A mis-matched quality could be a mis-matched colour (e.g. a green left visual stimulus provided simultaneously with a blue right visual stimulus).
In embodiments, (e.g. in advance of commencing a training session, or as part of selecting a training program), a user is permitted to choose (the system is configured to receive a user selection for) one or more target characteristics for the visual stimuli. For example, the user may be permitted to choose a target quality (e.g. a green colour) or a quality which is to be matched or mis-matched (e.g. a colour being matched or mis-matched, rather than e.g. a shape).
In embodiments, during a training session, left and right visual stimuli having the one or more target characteristics are provided (for example, being provided one or more times within a 'training sequence' of visual stimuli). Preferably, visual stimuli having the target characteristic(s) are shown less often than visual stimuli not having the target characteristic(s).
Preferably, visual stimuli having the one or more target characteristic(s) are shown intermittently, such that the time between occurrences of the target characteristic(s) is variable and preferably randomised such that occurrences of the target characteristic(s) are not predictable by a user. The Applicant has recognised that varying the time between occurrences of the target characteristic may improve user attention when performing a training session.
Preferably, e.g. in advance of commencing a training session, a user is permitted to select (the system is configured to receive a user selection for) a rate at which the one or more target characteristics appear (e.g. so as to select a rate which is comfortable and relaxing for the user). In embodiments, this is achieved by a user controlling (selecting) the amount of time between (rate of) target characteristics. Alternatively, the rate of provision of target characteristics may vary based on a training program selected by the user (e.g. being relatively less frequent for a 'relaxing' program, and relatively more frequent for an 'energising' program).
During a training session, the system is preferably configured to receive (comprises a user input means for receiving) a user input indicative of whether a user has perceived a target characteristic. It is then determined whether the user has correctly perceived the target characteristic. In embodiments, if the user has correctly perceived the target characteristic, then the position or range of positions at which left and right stimuli are provided by the head-mounted device are altered.
In this regard, it is preferably determined that the user has correctly perceived a target characteristic if the user input comprises a response (if a user response is received) indicating that the user has perceived the target characteristic within a predefined period of time after the target characteristic has started being shown. The predefined period of time in embodiments corresponds to the amount of time for which the visual stimulus is provided (such that it is determined that a user has correctly perceived a target characteristic if the user input comprises a response whilst the target characteristic is being shown). Alternatively, the predefined period of time could be longer or shorter than the period of time for which the target characteristic is shown. The predefined period of time could be less than about 10 seconds, or less than about 5 seconds, or less than about 2 seconds, or less than about 1 second from the target characteristic starting being shown.
The user response may comprise a user identifying (confirming) that a target characteristic has occurred. If there are plural target characteristics (e.g. a blue colour, and a purple colour), correctly perceiving a target characteristic could require the user to provide a response (and correspondingly receiving a user response) which correctly identifies which of the plural target characteristics were shown (e.g. which of blue or purple were shown). It may also be determined whether a user has not correctly perceived a target characteristic that has been shown. Preferably, it is determined that a user has not correctly perceived a target characteristic if a user response is received later than the predefined period of time (disclosed above) after the target characteristic being shown, and/or if a user response is received before or without a target characteristic being shown. For example, in embodiments, it is determined that a user has not correctly perceived a target characteristic if a user response is not received whilst the target characteristic is being shown. If there are plural target characteristics (e.g. a blue colour, and a purple colour), incorrectly perceiving a target characteristic could comprise the user providing a response which incorrectly identifies which of the plural target characteristics were shown (e.g. identifying blue, when in fact purple was shown).
Preferably, in response to a user correctly perceiving (when a user correctly perceives) a target characteristic, a spacing between the left and right visual stimuli provided simultaneously is increased. Conversely, in response to a user incorrectly perceiving (when a user incorrectly perceives) a target characteristic, the spacing of the left and right visual stimuli could be decreased. Increasing/decreasing the spacing of the left and right visual stimuli may be done, for example, in any of the ways described above (e.g., by changing a range of one or more positions forming a cycle of visual stimuli).
Accordingly, in embodiments, during a training session, the system is configured to: receive a user input in response to a user perceiving a target characteristic; determine whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, alter one or more positions at which left and right stimuli are provided. Preferably, altering one or more positions at which visual stimuli are provided comprises altering the range of one or more positions forming a cycle of visual stimuli provided and/or altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
In embodiments, the system is also configured to determine whether the user has incorrectly perceived the target characteristic, and to alter one or more positions at which left and right stimuli are provided correspondingly (preferably by altering the range of one or more positions forming a cycle of visual stimuli provided, and/or altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle).
The spacing of left and right visual stimuli (and preferably one or more positions at which visual stimuli are provided in a cycle) could be altered immediately in response to a user correctly (or incorrectly) perceiving the target characteristic, such that it is altered based on a single occurrence of the target characteristic.
Alternatively, the spacing of left and right visual stimuli could be altered after a predetermined (e.g. threshold) number of (e.g. successive) correctly or (e.g. successive) incorrectly perceived occurrences of a target characteristic, or responsive to the proportion of correctly or incorrectly perceived target characteristic occurrences (e.g. corresponding to a success rate of the user). This may allow a more subtle change to the spacing of the left and right visual stimuli, such that the spacing of the left and right visual stimuli is changed in a way that does not immediately follow a single correct (or incorrect) perceived target characteristic. In this way, a user is unlikely to associate their individual responses with changes to the spacing of visual stimuli, which may help to avoid a user having a stress response to correct (or incorrect) perception of visual stimuli (a stress response would potentially undermine the relaxing effect of the training).
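The staircase-style adjustment described above can be illustrated with a short sketch. The Python sketch below classifies a user response against the predefined response window and changes the stimulus spacing only after a run of successive correct (or incorrect) perceptions, so that no single response immediately changes the spacing; the class and function names, the window length, and the step and limit values are all illustrative assumptions rather than features of any particular embodiment.

```python
# Illustrative sketch only: all thresholds and names are assumptions.

RESPONSE_WINDOW_S = 2.0   # predefined period after the target starts being shown
STREAK_TO_ADJUST = 3      # successive results needed before spacing changes
SPACING_STEP_DEG = 5.0    # change in angular spacing per adjustment


def classify_response(target_shown_at, response_at):
    """True if the response arrived within the predefined window."""
    if target_shown_at is None or response_at is None:
        return False
    return 0.0 <= (response_at - target_shown_at) <= RESPONSE_WINDOW_S


class SpacingController:
    """Widens spacing after a run of correct perceptions, narrows it
    after a run of incorrect ones, clamped to a permitted range."""

    def __init__(self, spacing_deg=90.0, min_deg=60.0, max_deg=200.0):
        self.spacing_deg = spacing_deg
        self.min_deg, self.max_deg = min_deg, max_deg
        self.correct_streak = 0
        self.incorrect_streak = 0

    def update(self, correct):
        if correct:
            self.correct_streak += 1
            self.incorrect_streak = 0
            if self.correct_streak >= STREAK_TO_ADJUST:
                self.spacing_deg = min(self.max_deg,
                                       self.spacing_deg + SPACING_STEP_DEG)
                self.correct_streak = 0
        else:
            self.incorrect_streak += 1
            self.correct_streak = 0
            if self.incorrect_streak >= STREAK_TO_ADJUST:
                self.spacing_deg = max(self.min_deg,
                                       self.spacing_deg - SPACING_STEP_DEG)
                self.incorrect_streak = 0
        return self.spacing_deg
```

Because the spacing only moves after a streak, individual responses are decoupled from visible changes, which matches the aim of avoiding a stress response.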
Alternatively (as discussed herein), the system may increase the spacing of left and right visual stimuli irrespective of whether the user has correctly (or incorrectly) perceived target characteristics (such that the correct (or incorrect) perception of target characteristics is determined but not used to adjust the spacing of the left and right visual stimuli).
Determination of a user's perceptiveness to target characteristics, in and of itself, may still provide a useful output indicating a user's awareness of visual stimuli in the peripheral field of vision.
Other parameters of the system could additionally (or alternatively) be changed in response to a user correctly (or incorrectly) perceiving the target characteristic, for example such as one or more of: the particular target characteristic (e.g. the target colour), the rate of occurrence of the target characteristic, and the rate that visual stimuli are provided. For example, when a user correctly identifies a target characteristic, then the target characteristic may change to a more subtle characteristic (e.g. a more subtle colour difference, or intensity difference, or shape difference etc. compared to other visual stimuli provided), and/or the target characteristics may be provided more or less often, and/or the visual stimuli may be provided at a faster rate.
In embodiments, during a training session, the system is configured to provide positive feedback to a user when it is determined that a user has correctly perceived a target characteristic. The system could also (or instead) provide negative feedback when it is determined that a user has incorrectly perceived a target characteristic (although in embodiments no negative feedback is provided, to avoid causing a stress response from the user). The positive (or negative) feedback could be given immediately, and preferably each time, a user correctly (or incorrectly) perceives a target characteristic. Alternatively, the positive (or negative) feedback could be given based on the proportion of correct (or incorrect) user responses (e.g. based on a determined success rate of the user).
The positive (or negative) feedback could comprise any suitable and desired feedback, such as a visual, audible, or other sensory stimulus. For example, positive feedback could comprise a sequence of visual stimuli forming a 'success' sequence, e.g. a single wave of stimuli progressing from the forwards-most to the backwards-most visual stimuli of the head mounted device.
In embodiments, the system is configured to keep (and the method comprises keeping) a record of the user's perception of visual stimuli, preferably by recording one or more of: a number or proportion of correctly perceived stimuli; a number or proportion of incorrectly perceived stimuli; and an average time which the user took to respond to stimuli. Preferably, the record of the user's perception is provided to the user as a training report once a training session is complete.
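Such a record might, purely by way of illustration, be kept as in the following sketch; the class name, field names, and report structure are assumptions, not prescribed by the description.

```python
# Illustrative session record; all names are assumptions.
class SessionRecord:
    def __init__(self):
        self.correct = 0
        self.incorrect = 0
        self.response_times = []   # seconds, for correct responses only

    def log(self, correct, response_time_s=None):
        """Record one perceived (or missed) target characteristic."""
        if correct:
            self.correct += 1
            if response_time_s is not None:
                self.response_times.append(response_time_s)
        else:
            self.incorrect += 1

    def report(self):
        """Summary suitable for an end-of-session training report."""
        total = self.correct + self.incorrect
        return {
            "correct": self.correct,
            "incorrect": self.incorrect,
            "proportion_correct": self.correct / total if total else 0.0,
            "mean_response_s": (sum(self.response_times) /
                                len(self.response_times))
                               if self.response_times else None,
        }
```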
Regarding the user response indicating that a user has perceived the target characteristic(s), the user response could comprise a response provided consciously (actively) by the user (e.g. by the user interacting with a suitable input means when the user perceives, or believes they have perceived, the target characteristic). Alternatively, the user response could be provided subconsciously (passively) (e.g. by a user input means sensing a state of a user).
A user response could comprise, for example, a user pressing a button or other touch sensitive input device (e.g. touching a button on a screen of a mobile phone), making a movement (e.g. gesture), making a sound (e.g. verbal input), actively thinking a measurable thought, or performing any other action measurable by a user input means.
The system may accordingly comprise a suitable input means for receiving a user response, for example comprising any one or more of: a button or other touch sensitive input, a movement sensor (e.g. motion detector or accelerometer), a sound sensor (microphone), an electromyography (EMG) sensor (a sensor responsive to muscular motion), an electroencephalography (EEG) sensor (a sensor responsive to brain wave activity), or other desired sensor. The user input means could be provided as part of the head-mounted device, or by a handheld device (e.g. such as a controller or joystick), or by a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like).
Preferably, for the purposes of a user identifying a target characteristic, the input means (which receives the user input), is configured to be operated without the user shifting their gaze. Thus, in a preferred embodiment, the input means is a relatively large button displayed on the screen of a portable electronic device (e.g. within an app on a mobile phone or tablet), the button having an area at least 1 cm2, preferably at least 2cm2, preferably at least 3cm2, and/or occupying at least 10%, preferably at least 20%, preferably at least 30% of the area of a screen of the portable electronic device.
The system may (also) be configured to receive responses and accordingly comprise a user input means (e.g. such as those described above) for other purposes, for example for configuring one or more parameters in advance of or during a training session.
As discussed above, in embodiments, the system is (additionally or alternatively) configured to alter (and the method comprises altering) the spacing between left and right visual stimuli provided simultaneously based on a level of relaxation of the user.
Accordingly, in embodiments, a position or range of positions at which the left and right visual stimuli are provided is controlled based on a level of relaxation of a user.
The level of relaxation of the user may be an indicated level of relaxation (e.g. based on a user self-reporting a level of relaxation), or may be a detected level of relaxation (e.g. being sensed by a sensor).
Thus in embodiments, the system is configured to receive (and the method comprises receiving) a self-reported level of relaxation provided actively (consciously) by a user (e.g. via the user interacting with a suitable user input device, such as any of the input devices discussed above).
In embodiments, the system is (additionally or alternatively) configured to receive (and the method comprises receiving) a sensor output sensing a physical state of the user, the sensor output indicative of a level of relaxation of a user. The sensor may be configured to sense, and to provide an output indicative of one or more of a user's: motion, breathing, heart rate, blood pressure, brain wave activity, or other physical property.
Preferably, the system is configured to determine a level of user relaxation from the sensor output. For example, one or more of more agitated movements, shorter breaths, higher blood pressure, certain patterns of brain wave activity, or other sensor inputs, are preferably used to indicate (are preferably correlated to) a lower level of relaxation (the user being less relaxed). Conversely, preferably one or more of slower user movements, longer breaths, lower blood pressure, certain patterns of brain wave activity or other sensor inputs are preferably used to indicate (are preferably correlated to) a higher level of relaxation (the user being relatively more relaxed).
As mentioned above, in embodiments, the spacing between (angular position of) left and right visual stimuli is preferably increased when it is determined that the user has a higher level of relaxation (and conversely the spacing between left and right visual stimuli is preferably decreased when it is determined that the user has a lower level of relaxation).
Increasing and/or decreasing the spacing of the left and right visual stimuli in response to the user's level of relaxation may be done, for example, in any of the ways described above (e.g., by changing a range of one or more positions forming a cycle of visual stimuli). Accordingly, the range of one or more positions at which left and right stimuli are provided (e.g. for a cycle of visual stimuli) is preferably selected based on the level of relaxation of the user.
Alternatively, or additionally, in embodiments a quality (e.g. a colour) of one or more visual stimuli provided (e.g. for a cycle of visual stimuli) is selected based on the level of relaxation of the user. Preferably, visual stimuli having a colour which is relatively further towards the blue/violet end of the visible spectrum are provided when the user is relatively more relaxed (and visual stimuli having a colour which is relatively further towards the red end of the visible spectrum are provided when the user is relatively less relaxed).
Thus accordingly, when a user has a lower level of relaxation, then preferably left and right visual stimuli are provided relatively closer to one another (at smaller angular positions, and further forwards) and redder in colour, and preferably when a user has a higher level of relaxation then preferably left and right visual stimuli are provided relatively further apart from one another (at larger angular positions, and further backward) and bluer in colour.
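The relationship summarised above (lower relaxation: closer, redder stimuli; higher relaxation: wider, bluer stimuli) can be expressed as a simple mapping. The sketch below assumes a normalised relaxation level between 0 and 1, linear interpolation of the angular spacing within a permitted range, and a linear red-to-blue colour blend; the function name, range limits, and colour model are illustrative assumptions.

```python
# Illustrative mapping from relaxation level to spacing and colour.
def stimuli_for_relaxation(level, min_deg=60.0, max_deg=200.0):
    """Map a normalised relaxation level (0 = least relaxed,
    1 = most relaxed) to an angular spacing and an (R, G, B) colour."""
    level = max(0.0, min(1.0, level))          # clamp to [0, 1]
    spacing_deg = min_deg + level * (max_deg - min_deg)
    # Linear red -> blue blend: more relaxed users see bluer stimuli.
    red, blue = int(255 * (1 - level)), int(255 * level)
    return spacing_deg, (red, 0, blue)
```

For example, a fully relaxed user would receive the widest spacing and a pure blue stimulus, while an unrelaxed user would receive the narrowest spacing and a pure red stimulus.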
In preferred embodiments the level of user relaxation is determined based on the breathing of a user. Preferably, the level of user relaxation is determined based on the duration of the user's breaths (e.g. based on the duration of the in-breaths and/or out-breaths, or based on the time between successive in-breaths or the time between successive out-breaths).
Preferably, it is determined that the user has a relatively higher level of relaxation (is more relaxed) when the user's breaths are longer in duration (and that the user has a relatively lower level of relaxation (is less relaxed) when the user's breaths are shorter in duration). Accordingly, in embodiments, the spacing between (angular position of) left and right visual stimuli is preferably increased when the user's breaths are longer in duration (and conversely decreased when the user's breaths are shorter in duration).
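One simple way to derive a relaxation level from breath durations, consistent with the description above, is to map the mean breath duration linearly onto a normalised scale; the 'short' and 'long' breath bounds below, and the neutral default when no data is available, are illustrative assumptions.

```python
# Illustrative relaxation estimate from recent breath durations.
def relaxation_from_breaths(breath_durations_s, short_s=2.0, long_s=8.0):
    """Map the mean breath duration onto a 0..1 relaxation level:
    breaths of short_s or less -> 0.0, breaths of long_s or more -> 1.0."""
    if not breath_durations_s:
        return 0.5  # no data yet: assume a neutral level
    mean = sum(breath_durations_s) / len(breath_durations_s)
    level = (mean - short_s) / (long_s - short_s)
    return max(0.0, min(1.0, level))
```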
In embodiments, the breathing of a user is detected using a breathing (breath) sensor. The breathing sensor may be any suitable and desired type of breathing sensor. For example, the breathing sensor could be any one or more of: a temperature sensor (e.g. positionable under a nostril of the user), an air-flow sensor (e.g. positionable under a nostril of the user), a microphone (e.g. an external microphone, a microphone of a pair of over-head headphones, or an in-ear microphone, which detects the sound of movement of breath). Preferably, the breathing sensor is part of the head-mounted device which is configured to display the visual stimuli.
Alternatively or additionally, in embodiments, information on the breathing of a user is provided by a user (consciously) interacting with a user input device to identify (self-report) their breaths. For example, during a training session, the system may be configured to receive (and the method may comprise receiving) a user input identifying a, and preferably each, in-breath (and/or out-breath). For this purpose, the user may interact with any desired and suitable input means, e.g. such as those as discussed above (e.g. pressing a touch-sensitive button provided on the display of a portable electronic device).
Self-reporting of breathing by a user may help to draw a user's attention to their breath and provide a relaxing effect that is synergistic with the relaxing effect of the visual stimuli provided in the technology described herein.
Therefore, in embodiments (regardless of whether or not a level of user relaxation is determined from a user's breathing and used to control spacing of left and right visual stimuli), the system is configured to, during a training session in which visual stimuli are presented to the user, receive a user response identifying one or more in-breaths and/or out-breaths. The system may furthermore determine whether the user has correctly (or incorrectly) identified an in-breath and/or out-breath. The system may furthermore be configured to provide positive (or negative) feedback, e.g. based on the user correctly (or incorrectly) identifying one or a threshold number of in- and/or out-breaths, or based on the proportion of correctly (or incorrectly) identified in- and/or out-breaths. When a training session has finished, the system may provide a final report to the user indicating the success rate of identifying in- and/or out-breaths.
In embodiments (preferably, in addition to being used to adjust the range of one or more positions of visual stimuli provided), the breathing of a user is used to change the position of successive visual stimuli (e.g. within the range of one or more positions of a cycle). For example, in embodiments, when an in-breath of a user occurs (e.g. as detected by a breathing sensor or as reported by a user), then one or more pairs of left and right visual stimuli are provided at a closer (smaller) spacing (a smaller angular position, further forwards) compared to an immediately preceding pair of left and right visual stimuli. Conversely, in embodiments, when an out-breath of a user occurs, then one or more pairs of left and right visual stimuli are provided at a larger spacing (a larger angular position, further backwards). Accordingly, in embodiments, the position of successive visual stimuli provided changes in synchronisation with the in-and out-breaths of a user.
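The breath-synchronised position changes described above can be sketched as a small step function: each in-breath steps the spacing inwards (forwards) and each out-breath steps it outwards (backwards), clamped to the current cycle's permitted range. The function name, step size, and range values are assumptions for illustration only.

```python
# Illustrative breath-synchronised spacing update.
def next_spacing(current_deg, breath_event, step_deg, range_deg):
    """Return the next angular spacing for a pair of left/right stimuli,
    given a detected (or self-reported) breath event."""
    lo, hi = range_deg
    if breath_event == "in":     # in-breath: bring stimuli closer together
        return max(lo, current_deg - step_deg)
    if breath_event == "out":    # out-breath: move stimuli further apart
        return min(hi, current_deg + step_deg)
    return current_deg           # no breath event: no change
```

Driven by a stream of breath events, successive stimuli would then oscillate forwards and backwards in synchronisation with the user's breathing.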
In embodiments, during a training session, the visual stimuli are provided (activated) in synchronisation with a rhythmic beat of a soundtrack. In this regard, preferably, the position of successive visual stimuli provided (e.g. within a cycle) changes in synchronisation with the beat of the soundtrack. For example, the position of successive visual stimuli could change (exactly) on the beat of the soundtrack, or the rate of change of position of visual stimuli could be correlated to the speed of the beat.
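In the simplest case, synchronising position changes to the beat reduces to scheduling changes at beat times derived from the tempo. A minimal sketch, assuming a constant tempo in beats per minute:

```python
# Illustrative beat scheduler: times (in seconds) at which successive
# stimuli should change position so changes land exactly on the beat.
def beat_times(bpm, n_beats, start_s=0.0):
    """Return the first n_beats beat times of a soundtrack at a given
    constant tempo, offset by start_s."""
    period = 60.0 / bpm          # seconds per beat
    return [start_s + i * period for i in range(n_beats)]
```

A controller could compare these times against a playback clock and advance the stimulus position (or change a stimulus quality such as colour) whenever the next beat time is reached.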
Alternatively (or additionally) one or more qualities of the visual stimuli (e.g. colour) could be configured to change in synchronisation with a rhythmic beat of a soundtrack.
The Applicant has found that providing the visual stimuli in synchronisation with a rhythmic beat of a soundtrack has a synergistic effect of improving relaxation and allowing the user to become aware of visual stimuli provided wider within their peripheral vision.
The soundtrack provided could be a musical and/or verbal soundtrack.
A verbal soundtrack provided simultaneously with the visual stimuli may comprise instructions guiding a user though the training session (e.g. through a training program), e.g. comprising any of: informing a user of a target characteristic(s) to be identified, encouraging a user to breathe, providing guided meditation, or any other suitable and desired instructions.
Accordingly, in embodiments, the system is configured to play a soundtrack. In embodiments, the system is configured to play the soundtrack by controlling a speaker integrated into the head mounted device, or a speaker external to the head mounted device (e.g. the speakers of a mobile phone) via a suitable wired or wireless communication (e.g. Bluetooth). For example, when the head mounted system is incorporated into a pair of over-head headphones, the system is preferably configured to control the over-head headphones to play the soundtrack.
In embodiments, the user is permitted to select (the system is configured to receive a user selection for) the soundtrack which is to be played, e.g. from a library of plural different soundtracks, e.g. stored on the head-mounted device, or a portable electronic device coupled thereto, or a cloud-based music service.
As discussed above, during a training session, the position at which visual stimuli are provided may vary over time, or in response to a user input (e.g. a level of user relaxation or a user's perceptiveness to visual stimuli). Various triggers could be used to end a training session, e.g. such as a predetermined period of time for the training session having elapsed (e.g. a soundtrack finishing), receiving a user input indicating that a user wishes to end the training session, determining that a user has reached a particular level of relaxation, reaching a cycle of visual stimuli which is a final cycle (e.g. being a cycle with the largest spacing of visual stimuli among a predetermined set of cycles of visual stimuli), or other suitable and desired triggers for ending a training session.
As will be appreciated from the above, in the 'training' sequence, one or more (preferably a majority of, preferably all) of the visual stimuli are provided within the left and right monocular regions of the user's vision, and preferably one or more (preferably a majority of, preferably all) of the visual stimuli are provided simultaneously to the left and right of the centre of the user's vision, preferably at the same angular and/or vertical position as one another.
In embodiments (during a training session), visual stimuli forming only the 'training' sequence of visual stimuli are provided. Alternatively, during a training session, other visual stimuli which are not part of the 'training' sequence of visual stimuli could be provided (e.g. for the purposes of conveying information to a user); however, such visual stimuli which are not part of the 'training' sequence are preferably provided in a manner which does not distract from the 'training' sequence.
Outside of a training session, visual stimuli could be provided (the head mounted device could be configured to provide visual stimuli) in a different manner to that described herein for the 'training' sequence (e.g. for the purposes of conveying information to a user), and in embodiments this is done.
Thus, for example, outside of a training session and/or in addition to a training sequence, visual stimuli could be provided (the head mounted device may be configured to provide visual stimuli) which are one or more of: provided to the left and right individually (not simultaneously); provided to the binocular region of a user's vision; provided simultaneously to the left and right at different angular positions; or provided simultaneously to the left and right at different heights relative to the user's eyes.

The system described herein, including the head mounted device, may operate under the control of any suitable and desired controller or controllers, for example comprising one or more processors. The one or more processors may comprise a microprocessor, a programmable FPGA (field programmable gate array), etc.
For example, a controller may be integrated into the head mounted device, e.g. for controlling the activation of visual elements to provide visual stimuli.
In embodiments, a controller (processor) integrated into the head mounted device may operate to perform the methods of the present invention independently (such that the head mounted device is configured to operate as an isolated system, without any external control).
Alternatively, the head mounted device may be configured to communicate with one or more other (external) devices having processors thereon for the purposes of implementing the methods described herein and controlling the head mounted device. The external device (which in embodiments forms part of the present system) may comprise, e.g. a portable electronic device (e.g. mobile phone or tablet), laptop, desktop computer, cloud computing service, or other device.
In a preferred embodiment, the head mounted device is configured to communicate with (and the system comprises) a portable electronic device (e.g. mobile phone or tablet) for implementing the methods described herein.
The methods in accordance with the present disclosure may be implemented at least partially using software e.g. computer programs. It will thus be seen that the present disclosure herein may provide computer software code for performing the methods described herein when run on one or more data processors.
The computer program (computer software code) may be executed by a processor integrated within the head mounted device. Alternatively, (and preferably) one or more external devices (e.g. a mobile phone) may execute a computer program (e.g. an application, e.g. a mobile phone app) for controlling the head mounted device for implementing the methods described herein.
The present disclosure may suitably be embodied as a computer program product for use with the present system. The computer program product may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CD-ROM, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques.
As will be appreciated from the above, the present system preferably comprises one or more input means for receiving user inputs.
The user input means could be provided as part of (integrated into) the head-mounted device. Alternatively, the input means could be an external device, such as a handheld device (e.g. such as a controller or joystick), or a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like), or a sensor device, or other external device.
As discussed herein, the input means may be configured to receive user inputs provided actively (consciously) or passively (subconsciously) by the user.
For example, the system may be configured to receive a user input comprising one or more of, for example a user: pressing a button or other touch sensitive input device (e.g. a button displayed on the screen of a portable electronic device), making a movement (e.g. gesture), making a sound (e.g. verbal input), actively thinking a measurable thought, or performing any other action measurable by a user input means.
The user input means may comprise, for example, any one or more of: a button or other touch sensitive input, a movement sensor (e.g. motion detector or accelerometer), a sound sensor (microphone), an electromyography (EMG) sensor (a sensor responsive to muscular motion), an electroencephalography (EEG) sensor (a sensor responsive to brain wave activity), a breath sensor, or other desired sensor, e.g. such as those described herein.
The controller(s) (processor(s)) of the present system are preferably configured to receive input data from the one or more input means, and to use the input data to implement the methods described herein.
The present system preferably also comprises one or more output means for providing an output to a user.
The output means could be provided as part of (integrated into) the head-mounted device. Alternatively, the output means could be an external device, such as a handheld device (e.g. such as a controller or joystick), or a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like), or other external device.
For example, the output means could comprise one or more of: a visual element of the head mounted device, an external display (e.g. a display of a portable electronic device), a speaker, or any other suitable and desired output device. The output means may provide an output to a user comprising one or more of: an auditory output, a haptic output, a visual output, or other suitable and desired output.
The controller(s) (processor(s)) of the present system are preferably configured to control the one or more output means, to provide an output to a user as indicated in the methods described herein.
In embodiments, the output means is controlled so as to provide instructions to a user for using the system of the present invention. Preferably, one or more auditory instructions are provided to a user when using the head mounted device.
Where the system comprises one or more devices external to the head mounted device, the head mounted device and the one or more external devices are preferably configured to share data via a suitable wired or wireless connection, e.g. such as Bluetooth or WiFi. Preferably, the head mounted device is configured with wireless connection capability for connection to one or more external devices.
The system may comprise one or more memories for storing data for implementing the methods described herein, e.g. such as for storing computer software code, calibration data, user inputs, a record of user relaxation levels and/or user perceptiveness to visual stimuli provided during a training session, or other suitable and desired data.
The present system preferably comprises a suitable power source for powering the head mounted device. The power source may comprise a wired or wireless connection from the head mounted device to a power source, or preferably an integrated power source (e.g. battery).
BRIEF DESCRIPTION OF THE DRAWINGS
Various embodiments will now be described, by way of example only, and with reference to the accompanying drawings, in which:
Figure 1 shows a head-mounted training device in accordance with embodiments of the present invention, the training device integrated into a set of over-head headphones, and comprising left and right arms which are shown rotated downwards and extended outwards into a training position, each arm comprising light elements for providing visual stimuli in a monocular region of a user's vision;
Figure 2 shows a top view of the head-mounted training device of Figure 1;
Figure 3 illustrates the position of the arms of the device of Figure 1 with respect to a trainee's visual field during a training session;
Figure 4 shows a rear view of the head mounted device of Figure 1 in the training position;
Figure 5 shows the head mounted device of Figure 1 in a stowed configuration with arms folded up into the main body of the head mounted device;
Figure 6 shows a side view of Figure 5;
Figure 7 shows a side view of part of the main body of the head mounted device of Figure 1 with the arm detached, showing a track along which the arm moves and a magnetic contact;
Figure 8 shows an arm disconnected from the device of Figure 1, showing track runners and a magnetic contact;
Figure 9 is another view of the disconnected arm of Figure 8;
Figure 10 is a schematic diagram of a system in accordance with embodiments of the invention;
Figure 11 is a rear view of the head mounted device of Figure 1 illustrating example relative positions of light elements on the left and right visual displays during a training session;
Figure 12 illustrates some of the positions of light elements shown in Figure 11 from a top view aspect;
Figure 13 shows an example screen display of a mobile app in embodiments of the present invention, during a training session;
Figure 14 shows an alternative embodiment of the head mounted training device in accordance with the present invention, the training device being mounted on the brim of a cap;
Figure 15 shows another alternative mounting of the training device on the brim of a cap, in accordance with embodiments of the present invention;
Figure 16 shows an alternative embodiment of the head mounted training device in accordance with the present invention, the training device being mounted over the bridge of a pair of glasses;
Figure 17 shows the glasses and training device of Figure 16 from a front view;
Figure 18 shows an alternative embodiment of the head mounted training device in accordance with the present invention, the training device comprising a pair of arms mountable to the arms of a pair of glasses;
Figure 19 shows the head mounted training device of Figure 18 with a breath sensor attached;
Figure 20 is a front perspective view of the head mounted device of Figure 19;
Figure 21 is a view of the device of Figure 19 when mounted to a pair of glasses;
Figure 22 shows an alternative embodiment of the head mounted training device in accordance with the present invention in a training position, the training device being attached to a pair of headphones;
Figure 23 shows the training device of Figure 22 in a stowed position;
Figure 24 is a top view of the training device of Figure 22 in a training position;
Figure 25 shows in isolation an arm of the training device of Figure 22, showing the visual elements provided on the arm;
Figure 26 is a perspective view showing the training device of Figure 22 provided on a user's head, in the training position;
Figure 27 is a side view showing the training device of Figure 22 provided on a user's head, in the training position;
Figure 28 shows an alternative embodiment of the head mounted training device in accordance with the present invention in a training position, the training device being attached to a pair of headphones;
Figure 29 shows an alternative embodiment similar to that of Figure 28, with a detachable breath sensor;
Figure 30 is a flowchart showing an embodiment for controlling the positions of visual stimuli based on a level of relaxation of a user;
Figure 31 is a flowchart showing an embodiment for controlling the positions of visual stimuli based on a user's perceptiveness to visual stimuli; and
Figure 32 shows example Electroencephalography (EEG) data, indicating a change in alpha and beta wave activity during a training session using a device in accordance with the present invention.
DETAILED DESCRIPTION
As discussed above, the technology disclosed herein relates to methods and systems for training peripheral vision, particularly by providing visual stimuli simultaneously to the left and right monocular regions of a user's (trainee's) vision.
Figures 1 to 6 show various views of a head-mounted training device 100 for providing visual stimuli, in accordance with embodiments of the present invention.
The training device 100 shown in Figures 1 to 6 is shown integrated into a headset in the form of over-head headphones 106.
The training device could however be incorporated into or mountable to the brim of a cap, and in embodiments this is done, as shown for example in Figures 14 and 15 which show a training device 1400, 1500 having elongate members 101, 102 mounted to a brim 1401 of a cap. The training device could also or instead be incorporated into or mountable to a pair of glasses, for example as shown in Figures 16 to 21. Another configuration using headphones is also shown in Figures 22 to 27. Like features among these various embodiments are indicated with like reference numerals.
Referring to Figure 1, the training device 100 comprises a left elongate member 102 in the form of an arm which extends along the left hand side of a user's head, and a right elongate member 101 in the form of an arm which extends along the right hand side of a user's head.
For reference, the left L', right R', forwards F' and backwards B' horizontal directions (together forming the horizontal plane) are shown in Figure 1, along with the upwards U' and downwards D' vertical directions. In this regard, directions are preferably defined with respect to the orientation of the user's head, such that the left L', right R', forwards F', and backwards B' horizontal directions (and accordingly the horizontal plane), and likewise the upwards U' and downwards D' vertical directions, move as the user's head moves. In other words, the horizontal and vertical directions correspond to a world orientation when a user's head is in its usual upright position, but deviate from the world orientation if a user tilts their head.
As shown in Figure 1, for example, in embodiments, the elongate members 101, 102 are attached (or attachable) to an item of headwear, e.g. headphones, by an attachment means 110 at one end of each elongate member, with the other end of the elongate member being free so that it is cantilevered. This is similarly the case in the embodiments shown in Figures 22 to 29. This may be similarly the case for elongate members attached (or attachable) to other items of headwear, e.g. glasses, such as shown in Figures 16 to 21.
Each of the elongate members 101, 102 respectively comprises a visual display unit 103, 104 towards the distal (forwards) end of the elongate member which is operable to provide visual stimuli to the user. In the embodiment shown, the visual display unit comprises a plurality of discrete visual elements 107, 108 which can each be activated to provide a visual stimulus. In the embodiments shown, the visual elements comprise light elements in the form of an array of colour LED lights.
The colour LED lights 107, 108 are preferably configured to provide visual stimuli with differing colour. Other qualities of the visual stimuli could also be variable, e.g. such as the intensity (brightness) of visual stimuli. In embodiments where LED lights are grouped together, then different patterns or shapes of LED lights could be illuminated simultaneously to provide different qualities of visual stimuli.
Other visual elements could be used instead of LED lights, such as for example a continuous visual element on each elongate member 101, 102, e.g. an LCD or plasma screen or light projection on each elongate member. Such visual elements may similarly provide visual stimuli with variable qualities, e.g. such as colour, intensity, texture, size, shape, or localised motion.
As can be seen in Figure 9, the elongate members 101, 102 have a length L which is larger than their width W. The visual display units 103, 104 are also elongate, having a length L' which is larger than their width W'.
This allows for a relatively light-weight and compact configuration, which does not interfere with the user's vision, so that the head-mounted device can be used as part of a daily routine.
Figures 1 to 4 show a head mounted device in a training position. In embodiments, in the training position the elongate members 101, 102 (and likewise the visual display units 103, 104 and visual elements 107, 108 thereon) extend substantially horizontally and substantially at the height (vertical position) of the user's eyes. This is illustrated in Figures 1 and 2 for example, and also at least in Figures 26 and 27.
In the training position, visual display units 103, 104 (and accordingly the visual elements, LED lights 107, 108) of the elongate members 101, 102 are provided only in the right monocular region 301 and left monocular region 302 of the user's vision. In embodiments, the right and left elongate members 101, 102 (and likewise the right and left visual elements 107, 108) do not extend into the binocular region of the user's vision 303. This is shown, for example, in the top view of Figure 3.
(Alternatively, the visual elements could extend into the binocular region. In such embodiments, preferably the head-mounted training device is controlled so as to activate visual elements only in the monocular region during a training session. Visual elements falling within the monocular region are preferably identified in a calibration routine, or based on a user identifying which visual elements can be seen by a single eye only).
As illustrated in Figure 3, the right monocular region 301 is the region of a user's vision which is visible only to the user's right eye, and the left monocular region 302 is the region visible only to the user's left eye (as compared to the binocular region 303 of the user's vision which is visible to both right and left eyes).
The right monocular region 301 for a human typically includes positions at angles α from about 60 to about 110 degrees to the right of the centre of the user's vision 304. The left monocular region 302 for a human likewise typically includes positions at angles β from about 60 to about 110 degrees to the left of the centre of the user's vision 304. The centre of the user's vision in this regard can be taken to be the direction directly forwards from the bridge 305 of the user's nose, and the angles α, β can be measured from the bridge of the user's nose in the right and left directions respectively along a horizontal plane (i.e. being the angle along the horizontal meridian 306).
Accordingly, preferably, in the training position, the visual elements on the right and left elongate members 101, 102 are present within a range of angular positions from about 60 to about 110 degrees in the right and left monocular regions. More preferably, the visual elements span a segment along the horizontal meridian of at least 30 degrees (thus preferably, the visual elements span angular positions between 60 and at least 90 degrees to the left and right of the centre of a user's vision).
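The angular bounds described above can be expressed as a simple membership test. The following sketch is purely illustrative and not part of the specification; the function name and the exact bounds used are assumptions based on the typical range stated above:

```python
# Illustrative sketch only: check whether a stimulus position, given as an
# angle in degrees from the centre of vision along the horizontal meridian,
# falls within the typical monocular range of about 60 to about 110 degrees.

MONOCULAR_MIN_DEG = 60.0   # assumed inner edge of the monocular region
MONOCULAR_MAX_DEG = 110.0  # assumed outer edge of the monocular region

def in_monocular_region(angle_deg):
    """Return True if the angle (left or right of centre) is monocular."""
    return MONOCULAR_MIN_DEG <= angle_deg <= MONOCULAR_MAX_DEG
```

Such a test could, for example, support the calibration routine mentioned above for identifying which visual elements fall within the monocular region.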
In embodiments, the head mounted device is adjustable so as to position the visual display units 103, 104 of the elongate members 101, 102 (and accordingly the visual elements e.g. LED lights 107, 108) in the right and left unshared monocular regions of the user's vision only.
For example, the elongate members 101, 102, may be extendible and retractable along their length (so as to be extendible and retractable forwards and backwards in the horizontal direction when in the training position). The embodiment of Figures 1 to 4 has a telescoping mechanism 109 on each of the elongate members 101, 102 for this purpose. At full extension, the visual display units 103, 104 within the elongate members 101, 102 will be located at a more central area of monocular peripheral vision, thus training vision at this location. When least extended, the outer extremities of monocular peripheral vision can be trained. Other mechanisms could instead be used if desired. For example, the elongate members could be bendable, for example as shown in Figures 28 and 29, in which the elongate members 101, 102, have a bendable section 2801 between their attachment means 110 and visual display unit 103, 104.
The head mounted device may also be adjustable to fit a user's head, e.g. having an adjustable main body, e.g. a head band 105 with telescoping mechanism 401.
In embodiments where the head mounted device is a set of headphones, the headphones may comprise in-ear speakers 402, 403 as shown in Figures 5 and 6 for example, or over-ear speakers 2203, 2204 as shown in Figures 22 and 28 for example, or alternatively speakers that use bone conduction technology, or other suitable and desired speaker technology.
The position of the visual display units 103, 104 and visual elements 107, 108 of the head mounted device could also (or instead) be adjusted by changing a mounting position of the elongate members 101, 102. This may be particularly suitable for a head mounted device that is mountable to a pair of glasses, such as shown in Figures 18 to 21. In this case, the elongate members 101, 102 are mountable to respective right and left arms 1801, 1802 of a pair of glasses, and can be moved forwards and backwards relative to the arms of the pair of glasses.
The elongate members could be extendible and retractable, and/or bendable, and/or mountable at different positions when provided with any suitable and desired item of headwear, such as headphones, glasses, a cap, etc. Preferably, the elongate members 101, 102 are movable between a training position for performing training, and a stowed position when training is no longer desired to be performed. An example stowed position is shown in Figures 5 and 6, and also in Figure 23. Preferably, in the stowed position, the elongate members and/or visual elements are not readily visible by the user (e.g. are positioned outside of the user's field of vision).
In example embodiments, the elongate members 101, 102 are movable (e.g. rotatable) upwards into the stowed position, and downwards into the training position.
This may be achieved by means of runners 801, 802 on the inside of each elongate member, movable along a respective guide rail 701, the guide rails being provided e.g. on the left and right sides of the over-head headphones, e.g. as shown in Figures 7 to 9. In these examples, in the upwards rotated position, a base of each elongate member rests inconspicuously behind a user's ear, whilst in the downwardly rotated position, the runners move up and forward along the guide rail so that the base of the elongate members rests above the user's ears.
Alternatively, rotation could be achieved by means of a rotatable joint 2201, 2202, e.g. at a proximal (rearwards) end of each elongate member, e.g. connecting the elongate member to the headwear (e.g. headphones), as shown for example in Figures 22 to 27. In Figures 28 and 29 a male connector (e.g. jack) 2802 and female receiver (e.g. socket) 2803 form the attachment means 110, and allow rotation of the elongate members 101, 102 when attached. Other mechanisms could instead be provided.
Although Figures 7 to 9, and Figures 22 to 27 show a head mounted device in the form of over-head headphones, the elongate members could equally be movable into a stowed position when mounted on or incorporated into other items of headwear, such as glasses or a cap.
In the stowed position shown in Figures 5 and 6, the elongate members are retracted and rotated upwards to fit within the main body 105 of a set of over-head headphones, to allow for storage of the elongate members and allowing the headphones to be used as regular headphones between training sessions.
In embodiments, when in the training position, the elongate members 101, 102 are configured to electrically connect with a controller (processor(s)) for controlling activation of the visual stimuli (LEDs) and/or to a power source for providing power for activating the visual stimuli. Preferably, the elongate members are electrically disconnected when in the stowed position. In example embodiments, as shown in Figure 7, in the training position, a magnetic contact 803 on each elongate member contacts a respective magnetic contact 702 on a main body of the headwear, performing the dual function of holding the left and right elongate members in place, and allowing a controller (processor(s)) in the headwear to pass power through to, and control the output of, the visual display units in each elongate member.
Alternatively, the elongate members 101, 102 could be electrically connected with a controller (processor(s)) and/or power source regardless of their position. Alternatively, the controller (processor(s)) and/or power source could be integrated within the elongate members.
Whilst the embodiments shown have two elongate members 101, 102 with visual elements which are activatable at positions within the left and right monocular regions, a single member (e.g. a single elongate member, or e.g. a VR headset comprising a single continuous screen, or other suitable and desired display) could instead be provided with one or more visual elements activatable at (controlled so as to activate at) positions within the left and right monocular regions simultaneously.
For providing training in accordance with the technology described herein, the visual elements (LED lights) 107, 108 are activatable at a plurality of angular positions α, β, as illustrated for example in Figures 11 and 12.
In the embodiments shown, an array of discrete visual elements in the form of LED lights 107, 108 are provided at a plurality of angular positions to the left and right of the centre of the user's vision. This is achieved in embodiments by using one or more rows of LED lights, each row extending substantially in the horizontal plane close to the vertical level of the user's eyes. In the embodiment shown in Figures 1-9, 11 and 12, two rows of LED lights are provided (so as to form a ten-by-two array of coloured LEDs on each elongate member). Alternatively, a single row of LED lights could be provided as shown in Figures 18 to 27 for example.
Positions of visual elements in an example embodiment are shown in Figure 11 and labelled A to J, with a selection of the angular positions of the visual elements relative to the centre of the user's vision shown in Figure 12.
As can be seen from Figures 11 and 12 for example, visual elements (visual stimuli) which are at a larger angular position are further backwards and preferably further apart in the left L' and right R' directions.
Whilst the figures show one or more rows of visual elements aligned in the horizontal plane such that each visual element in the row has approximately the same vertical position, other patterns of discrete visual elements (e.g. LED lights) could be used. For example, at each angular position e.g. A to J, a group of visual elements could be provided. Alternatively, visual stimuli could be provided by activating a continuous visual element, e.g. an LCD screen, at different angular positions.
In alternative embodiments, the visual stimuli (e.g. discrete visual elements or activated positions of a continuous visual element) could differ in height among the angular positions.
As can be seen in at least Figures 11 and 12, the positions of visual elements (LED lights) 107, 108 are preferably mirror images of one another relative to the centre of a user's vision. This allows the system described herein to activate visual elements at a same angular position in the left and right monocular regions of a user's vision simultaneously.
Preferably, when performing a training session, the head mounted device is controlled so as to provide a sequence of visual stimuli (a 'training sequence') at plural angular positions in turn. Throughout the training sequence, left and right visual stimuli are provided simultaneously at the same angular position as one another.
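The simultaneity constraint described above can be sketched in software as follows. This is purely illustrative; `activate_pair`, `run_training_sequence` and the frame structure are hypothetical names, not an API defined in this specification:

```python
# Illustrative sketch: left and right visual stimuli are always activated
# simultaneously at the same angular position as one another throughout
# the training sequence.

def activate_pair(position, colour, frames):
    """Append one simultaneous left/right activation at a single position."""
    frames.append({"left": (position, colour), "right": (position, colour)})

def run_training_sequence(positions, colour="blue"):
    """Build the list of activation frames for a simple training sequence."""
    frames = []
    for position in positions:
        activate_pair(position, colour, frames)
    return frames
```

In every frame produced by such a scheme, the left and right sides hold the same position label, mirroring the mirror-image arrangement of the visual elements described above.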
The system of the present invention may comprise any suitable and desired arrangement for controlling the head mounted device to activate the visual elements to provide a 'training sequence'. The system may also comprise one or more input devices, e.g. based on which the sequence of visual stimuli can be controlled. Figures 18 to 21 and 29, for example, show a breathing sensor 1014 in the form of a nasal temperature and/or airflow sensor which is provided on an additional elongate member 1901 for positioning in proximity to a user's nose. The position of the nasal breathing sensor 1014 may be adjustable by a bendable portion 2901 of its elongate member 1901. The breathing sensor 1014 may be attachable to and detachable from the head mounted device, e.g. as shown in Figure 29. Other breathing sensors could instead be used, e.g. an in-ear or external microphone for detecting breathing.
Figure 10 shows schematically a system 1000 in embodiments of the present invention, in which a control system 1015 integrated within the head mounted device comprises an on-board controller 1001 (e.g. within one or both of the elongate members 101, 102) which is configured to control an output module 1002 which controls the left display unit 104 and right display unit 103 of the elongate members so as to provide visual stimuli. The on-board controller 1001 may be considered as a central control unit, and may run updateable firmware for controlling the outputs of the left and right display units 103, 104.
The output module 1002 may also control other output devices which are integrated within the head mounted device, such as vibrational motors 1009, earphone speakers 1010, and any other suitable and desired output devices. Such output devices may be used for providing useful outputs to the user, such as audio instructions, tactile feedback, an accompanying soundtrack for the training session or any other suitable and desired outputs. The input module 1013 may receive input data from one or more input devices, such as breathing sensor 1014. As discussed herein, data from the breathing sensor may provide inputs indicative of a user's level of relaxation (which is in embodiments used to determine and adjust the spacing of visual stimuli).
Other input devices could also or instead provide data indicative of a user's level of relaxation, e.g. such as a brainwave (EEG) sensor, heart rate sensor, blood pressure sensor or other sensor.
The controller 1001 may receive input data from input devices via any suitable and desired wired or wireless connection.
The control system 1015 is configured to draw power from a power supply integrated within the head mounted device. The power supply may be any suitable and desired power source, e.g., a rechargeable battery 1004 chargeable via a USB charging port 1008.
In embodiments, the controller 1001 integrated within the head mounted device controls the activation of the visual elements and controls the output devices associated with the head mounted device, based on instructions received from an external controller (controller app 1012) executing a computer program (e.g. application or "app"). In embodiments, the external controller is provided as part of an external device, e.g. a portable electronic device (mobile device 1016).
In embodiments, the controller 1012 (e.g. processor(s) running a software application) on the external device (mobile device 1016) is configured to determine the training sequence of visual stimuli which are to be provided to the user, and to transmit instructions (via a transmission/reception module 1011) to the head mounted device (e.g. to a transmission/reception module 1005 of the head mounted device) accordingly. The controller 1012 may also provide instructions for controlling the provision of an accompanying audio soundtrack and/or instructions via speakers 1010, and any other desired e.g. tactile, audio or visual feedback based on the trainee's responses.
Thus, in embodiments, a specialist software app running on a mobile device 1016 controls the training session. However, in alternative embodiments, the external controller could be any kind of remote control device.
The transmission of instructions from the external device (mobile device 1016) to the head-mounted device may be done using any suitable and desired technology, e.g. such as wireless (e.g. Bluetooth, Wi-Fi, etc.) or wired communication. The headset is preferably controlled via Bluetooth or other wireless technology that connects with the receiver module 1005.
One or more inputs used for determining the sequence of visual stimuli may be received by the controller 1012 of the external device 1016.
For example, the breathing data from the breathing sensor 1014 (or other sensor providing data indicative of a user's level of relaxation) may be transmitted from the head mounted control system 1015 to the external device 1016, with the external device 1016 then determining the appropriate sequence of visual stimuli to be provided.
Alternatively, input data, e.g. from such sensors, may be transmitted (directly) to the external device 1016 (without being first received by the head mounted control system 1015). The external device may also comprise a touch screen 1017 or other input or output device(s) for allowing the user to interact with the external device (e.g. such as a keyboard, button, gesture or movement sensor, camera, microphone or other suitable and desired input device). Figure 13 shows an example user interface for a touch screen of a mobile device, which may be provided during a training session to provide information to a user and receive user input.
Figure 10 shows a system in which the training sequence of visual stimuli to be provided to the user is determined by an external device comprising a mobile device 1016 (e.g. mobile phone or tablet). The external device could also or instead be any other suitable and desired device, e.g. a laptop, smart watch, wearable electronic device, desktop computer, cloud or internet-based computing service, or other suitable and desired external device.
Alternatively, the head mounted device itself may have an integrated controller (processor) which is configured to determine the training sequence of visual stimuli to be provided to the user, such that the head mounted device can be operated in isolation (without requiring an external controller).
Whilst Figure 10 shows the input devices, e.g. breath sensor 1014, being communicatively coupled to the control system of the head mounted device 1015, one or more input devices could instead be communicatively coupled to the external device (mobile device 1016).
As noted above, when performing a training session, the head mounted device is controlled (e.g. by way of external controller 1012 and on-board controller 1001) so as to provide a sequence of visual stimuli (a 'training sequence').
Throughout the training sequence, left and right visual stimuli are provided at various angular positions in turn, the left and right visual stimuli being provided simultaneously at the same angular position as one another. One or more qualities (e.g. colour) of the visual stimuli provided may vary (e.g. at an angular position and/or among the different angular positions).
Referring back to Figures 11 and 12 for example, the training sequence may comprise activating right and left visual elements (LED lights) simultaneously at any of the positions A to J. Preferably, during a training session, a spacing between left and right visual stimuli increases over time and/or based on a user response. In this way, visual stimuli are provided further apart (wider in the peripheral field) as a user becomes more relaxed and/or aware of their peripheral vision. As will be seen below, this progression from visual stimuli which are relatively close together to relatively further apart can be embodied in any and all examples described herein.
In this regard, the spacing between the left and right visual stimuli is preferably measured along the horizontal meridian, and so corresponds to the sum of the angular positions α, β of the visual stimuli. A larger spacing accordingly corresponds to visual stimuli provided at a larger angle α, β from the centre of the user's vision (and thus further backwards B', and further to the left L' and right R').
The spacing between left and right visual stimuli in the training sequence could increase over time in a predetermined manner (and not depend on any user input during the training session). Alternatively, the spacing between left and right visual stimuli could increase depending on a user input.
An example predetermined training sequence in which the spacing between visual stimuli increases could be, for example:
EXAMPLE 1:
A, B, C, D, E, F, G, H, I, J

(In the example sequences described herein, each of A to J indicates the left and right visual stimuli (LEDs) at that respective position being illuminated simultaneously. With reference to Figure 11, for example, this may comprise LEDs from one or both rows being illuminated at that position. Positions separated by a "," indicate visual stimuli being shown in turn, in a consecutive period of time.)
Example 1 shows a possible sequence of visual stimuli of increasing spacing forming a single cycle of visual stimuli comprising positions in the range A to J. Whilst Example 1 shows the visual stimuli being provided at each and every position in the range A to J, some positions could be skipped if desired. For example, another predetermined training sequence could be:
EXAMPLE 1A:
A, B, C, E, G, J

Preferably, left and right visual stimuli provided at a position (e.g. a position from A to J) are provided for a period of time which is long enough for the user to be able to discern the visual stimuli.
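Predetermined sequences such as those of Examples 1 and 1A could be generated in software along the following lines. This is a minimal sketch with hypothetical helper names; the position labels A to J follow Figure 11:

```python
# Illustrative sketch: generating a predetermined training sequence of
# increasing spacing, optionally skipping some positions (as in Example 1A).

POSITIONS = list("ABCDEFGHIJ")  # from closest to furthest spacing

def predetermined_sequence(selected=None):
    """Example 1 uses every position in turn; passing a subset of labels
    (as in Example 1A) skips some positions while keeping the order of
    increasing spacing."""
    if selected is None:
        return POSITIONS[:]
    return [p for p in POSITIONS if p in selected]
```

Because the list comprehension preserves the order of POSITIONS, any selected subset is still presented from closest to furthest spacing.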
At any particular angular position, and/or among angular positions, one or more qualities (e.g. colour) of the visual stimuli may vary. For example, at position A, the colour of the left and right stimuli could progress through one or more colours such as blue, green, purple etc. in turn before progressing to position B. Visual stimuli having a variety of colours could likewise be provided at other positions, such as B, C, D, etc. The left and right visual stimuli could have the same or mis-matched colours. The particular colours provided could be selected by the system on a randomised basis, such that a user cannot predict which colour(s) will be shown.
Compared to Examples 1 and 1A, an increase in spacing can be performed more gradually by performing plural cycles of providing visual stimuli, wherein in each cycle visual stimuli are provided at positions within a range of one or more positions.
A sequence in embodiments of the present invention using plural cycles of visual stimuli is for example:
EXAMPLE 2:
Cycle 1: A, A
Cycle 2: A, B, A, B
Cycle 3: A, B, C, A, B, C
Cycle 4: A, B, C, D, A, B, C, D... etc.

Thus, in the 1st cycle, visual stimuli are provided at positions within the range of positions consisting of position A. In the 2nd cycle the range of positions is A and B. In the 3rd cycle the range of positions is A and B and C. In the 4th cycle the range of positions is A and B and C and D. In later cycles the range could additionally include positions such as E or F or G etc. Similarly to the discussion above, at any particular angular position and/or between angular positions, one or more qualities (e.g. colour) of the visual stimuli may vary.
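The cycle structure of Example 2 can be sketched as follows. This is illustrative only: one 'wave' per cycle is generated here, whereas in practice each wave may be repeated within its cycle, and the helper name is hypothetical:

```python
# Illustrative sketch of Example 2: in each successive cycle the furthest
# spacing grows by one position while the closest spacing stays at A.

POSITIONS = list("ABCDEFGHIJ")

def cumulative_cycles(num_cycles):
    """Cycle n covers positions from A up to the nth position inclusive."""
    return [POSITIONS[: n + 1] for n in range(num_cycles)]
```

Repeating each returned wave within its cycle, and concatenating the cycles, reproduces the kind of progression shown in Example 2.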
Thus, in Example 2, the range of positions differs in each cycle, and particularly a furthest spacing between positions of right and left visual stimuli is increased in each cycle, whilst the closest spacing in the cycle remains the same. In this example, in the 1st cycle the furthest spacing corresponds to right and left visual stimuli being provided at position A, whereas in the 2nd cycle the furthest spacing is at position B, in the 3rd cycle the furthest spacing is at position C, in the 4th cycle the furthest spacing is at position D, and so on. The closest spacing, which is the same for each cycle, is at position A. In embodiments the closest (smallest) and/or furthest (largest) spacing between right and left visual stimuli can be altered in each cycle. For example, another training sequence in embodiments of the present invention could be:
EXAMPLE 3:
Cycle 1: A, B, C, A, B, C, A, B, C...
Cycle 2: B, C, D, B, C, D, B, C, D...
Cycle 3: C, D, E, C, D, E, C, D, E... etc.
In Example 3, in each successive cycle, both the closest and furthest spacing between right and left visual stimuli is altered. In this example, the closest spacing in the 1st cycle corresponds to position A, in the 2nd cycle is position B, and in the 3rd cycle is position C. The furthest spacing in the 1st cycle corresponds to position C, in the 2nd cycle is position D, and in the 3rd cycle is position E. The one or more positions forming the range of positions may overlap for successive cycles (e.g. as in Examples 2 and 3 above), such that one or more of the same positions appear in successive cycles. Alternatively, the one or more positions forming the range of positions could be non-overlapping for successive cycles; for example, a 1st cycle could have a range of positions being A and B, a 2nd cycle having C and D, a 3rd cycle having E and F, etc. In embodiments, the position(s) of (e.g. and spacing between) visual stimuli could be varied within the range of possible positions for a cycle (in addition to or alternatively to changing the closest and/or furthest spacing for the cycle). An example of where spacing between visual stimuli is also changed between cycles is:
EXAMPLE 3A:
Cycle 1: A, B, C, A, B, C, A, B, C...
Cycle 2: A, C, E, A, C, E, A, C, E...
Cycle 3: B, D, G, B, D, G, B, D, G...

In Example 3A, the range of positions in cycle 1 is A to C, whereas in cycle 2 the range of positions is A to E (such that the furthest spacing between left and right visual stimuli in the range has increased to position E), and in cycle 3 the range of positions is B to G (such that the closest and furthest spacing between left and right visual stimuli in the range have increased to B and G respectively).
Furthermore, in Example 3A, cycle 1 comprises stimuli at adjacent positions only. In comparison, in cycle 2 the spacing between visual stimuli is increased, such that the positions at which stimuli are provided are not adjacent within the available positions for the head mounted device, i.e. such that positions within the range for the cycle are missed out (in cycle 2 a single position, B or D, is missed out between visual stimuli). In cycle 3, the spacing between the visual stimuli is further increased (with a single position C being 'missed out' between the visual stimuli at positions B and D, and with two positions E and F being missed out between the visual stimuli at positions D and G).
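The Example 3 and 3A patterns, in which the closest spacing, the furthest spacing, and the gaps between stimuli within a cycle can all change from one cycle to the next, can be sketched as follows (the helper name and index-based input are hypothetical, for illustration only):

```python
# Illustrative sketch: each cycle selects positions by 0-based index into
# the available positions, so any per-cycle pattern of spacings can be
# expressed, including the skipped positions of Example 3A.

POSITIONS = list("ABCDEFGHIJ")

def cycle_positions(index_sets):
    """Map one list of 0-based position indices per cycle to position labels."""
    return [[POSITIONS[i] for i in idxs] for idxs in index_sets]
```

For instance, the three cycles of Example 3A correspond to the index sets [0, 1, 2], [0, 2, 4] and [1, 3, 6].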
In embodiments, the relative position(s) of (e.g. and spacing between) visual stimuli could be varied within the range of possible positions for a cycle so that the average angular position increases in one or more successive cycles. An example of this is:
EXAMPLE 3B:
Cycle 1: A, B, E, A, B, E, A, B, E...
Cycle 2: A, C, E, A, C, E, A, C, E...
Cycle 3: A, D, F, A, D, F, A, D, F...
As illustrated in the above examples, preferably in each cycle a 'wave 'of visual stimuli is provided which progresses from relatively smaller angular positions (relatively closer spacings) to relatively larger angular positions (relatively further spacings). Preferably the 'wave 'of visual stimuli progresses from a smallest angular position (smallest spacing) to a largest angular position (largest spacing) of the position(s) at which visual stimuli are provided in the range of position(s) for the cycle.
For example, the 'wave 'in Example 1 comprises the positions A through J in turn. In Example 2, the 'wave 'comprises positions A, B in turn in cycle 2, and positions A, B, C in turn in cycle 3. In Example 3A the 'wave' comprises positions A, C, E, in turn in cycle 2, etc. Consistent with the above discussion, whilst the 'wave 'could be formed of visual stimuli at each possible angular position within the range for a cycle (e.g. at each of A to J for Example 1), alternatively the wave may comprise a selection of the possible positions from the range (e.g. comprising A, B, H, I, J) such that some positions are skipped.
Within each cycle, plural 'waves' of stimuli could be provided (e.g. repeated).
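The cycle and 'wave' structure described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the function names, the representation of positions as letters, and the repetition scheme are assumptions for clarity, not features of the claimed device.

```python
def wave(positions, reverse=False):
    """Return one 'wave' of stimulus positions for a cycle.

    positions: the ordered angular positions selected for the cycle
    (e.g. ['A', 'C', 'E'] for a cycle that skips positions B and D),
    listed from smallest to largest angular position.
    """
    return list(reversed(positions)) if reverse else list(positions)


def cycle_sequence(positions, n_waves):
    """Repeat the wave n_waves times to form one cycle of stimuli."""
    seq = []
    for _ in range(n_waves):
        seq.extend(wave(positions))
    return seq


# Example 3A, cycle 2: positions A, C, E with one position skipped between stimuli
print(cycle_sequence(["A", "C", "E"], 3))
# ['A', 'C', 'E', 'A', 'C', 'E', 'A', 'C', 'E']
```

Varying which positions are passed in between cycles then reproduces the widening-spacing behaviour of Examples 3A and 3B.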
The range of positions forming a (each) cycle, and the time spent in a (each) cycle (e.g. the number of 'waves' of stimuli in each cycle) could be predetermined, such that the range of positions at which visual stimuli are provided changes over time without any user input, and in embodiments this is done.
Alternatively, the positions at which visual stimuli are provided could be selected based on a user input.
The user input could be a user selecting one or more parameters for a training sequence prior to commencing the training session, e.g. a user selecting a minimum and maximum position for the visual stimuli to be provided during the training session, and selecting a rate at which the spacing between visual stimuli is to increase during the training session. Alternatively, the user input could be a user selecting a training program (e.g. a 'relaxation' program, e.g. 'relaxation level 1' or 'relaxation level 2', or an 'energising' program), the training program having one or more pre-configured training sequences with pre-configured parameters (e.g. such as the ranges of and spacing between visual stimuli in each cycle in the sequence). Based on this (or other suitable input parameters), the system may be configured to determine the sequence of stimuli to be provided (e.g. to determine the range of one or more positions forming each cycle, and the amount of time spent in each cycle).
Alternatively the user input, based on which the positions of the visual stimuli are selected for the training sequence, could be a user input during a training session, e.g. a user input indicative of a user's level of relaxation and/or a user input indicative of a user's perceptiveness to the visual stimuli during the training session.
Thus, in embodiments, a controller of the system (e.g. on-board controller 1001 or external controller 1012) may be configured to receive input data indicative of (and to determine) a user's level of relaxation and/or a user's perceptiveness to visual stimuli, and to adjust the positions at which visual stimuli are provided accordingly.
Figure 30 is a flow chart showing steps for adjusting the positions of visual stimuli during a training session based on the user's level of relaxation.
Upon starting the training session (step 2801) the system provides visual stimuli within an initial range of one or more positions (step 2802). The system then receives an input indicative of a user's level of relaxation, e.g. comprising data from a breathing sensor (step 2803), and determines a user's level of relaxation from the breathing data (step 2804).
Based on the user's determined level of relaxation, the system then adjusts the one or more positions at which visual stimuli are provided (step 2805). Step 2804 (determining the level of relaxation) and step 2805 (adjusting one or more positions of visual stimuli) are performed throughout the training session, e.g. at predetermined time intervals.
The training session can be ended (step 2806) at any suitable and desired time. For example, the training session could end when a user indicates they wish to end the training session. Alternatively, the training session could end after a predetermined amount of training time has elapsed, or a particular level of relaxation of the user has been reached, or a particular set of one or more positions for the visual stimuli is reached (e.g. a set of one or more positions for visual stimuli which includes a furthest apart spacing of visual stimuli permitted by the training device).
As indicated in Figure 30, an input from a breath sensor could be used to determine a user's level of relaxation. Preferably, the system is configured to correlate shorter and/or faster breaths to a lower level of relaxation (the user being less relaxed), and to correlate longer and/or slower breaths to a higher level of relaxation (the user being more relaxed).
Thus, the system may be configured to determine a user's level of relaxation based on a duration of a user's breaths.
Additionally, or alternatively, the system may be configured to allow a user to self-report their in-breaths and/or out-breaths. This may encourage user mindfulness and engagement with the system. The duration of self-reported in-breaths and/or out-breaths could additionally (or alternatively) be used to determine the user's level of relaxation.
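The breath-duration approach described above might be sketched as follows. This is an illustrative sketch only: the function name, the 0 to 1 relaxation score, and the 2 to 8 second calibration bounds are assumptions chosen for the example, not values taken from the disclosure.

```python
def relaxation_level(breath_durations_s, min_s=2.0, max_s=8.0):
    """Map the mean duration of recent breaths to a 0..1 relaxation score.

    Longer/slower breaths -> higher score (more relaxed), consistent with the
    correlation described above. min_s and max_s are illustrative calibration
    bounds for the shortest and longest expected breath durations.
    """
    mean = sum(breath_durations_s) / len(breath_durations_s)
    score = (mean - min_s) / (max_s - min_s)
    return max(0.0, min(1.0, score))  # clamp to [0, 1]
```

The same function could be fed durations from a breath sensor or from self-reported in-breaths and out-breaths.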
The user's level of relaxation could alternatively be determined based on any suitable and desired input indicative of a user's level of relaxation, e.g. heart rate, blood pressure, etc.

In embodiments, selecting the range of one or more positions at which to provide visual stimuli based on the user's level of relaxation comprises increasing the separation between left and right visual stimuli when it is determined that the user has a higher level of relaxation (is more relaxed), and conversely decreasing the spacing between left and right visual stimuli when it is determined that the user has a lower level of relaxation (is less relaxed).
For example, a training sequence may comprise one or more cycles of operations, wherein in each cycle visual stimuli are provided within a range of one or more positions (e.g. such as Examples 2, 3, 3A and 3B above). In such cases, increasing the spacing between visual stimuli may comprise increasing the closest and/or furthest spacing of positions forming a cycle of visual stimuli and/or adjusting the relative positions at which visual stimuli are provided in a cycle (e.g. compared to an immediately preceding cycle of visual stimuli). Conversely, decreasing the spacing between visual stimuli may comprise decreasing the closest and/or furthest spacing of positions forming a cycle of visual stimuli and/or adjusting the relative positions at which visual stimuli are provided during a cycle (e.g. compared to an immediately preceding cycle of visual stimuli).
For example, with reference to Example 2, the training sequence may comprise a predetermined set of cycles 1, 2, 3, etc. each having one or more positions at which visual stimuli are shown within a range of one or more positions. Upon determining the user's level of relaxation, the system may select the appropriate cycle based on the user's level of relaxation.
In this regard, the system may progress to a next cycle once a threshold level of relaxation has been met. The system may revert to a previous cycle if the level of relaxation drops below a threshold.
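The threshold-based progression and reversion between cycles might be expressed as below. This is a hedged sketch under stated assumptions: the relaxation score is assumed to be normalised to 0..1, and the per-cycle thresholds, names and one-step-at-a-time policy are illustrative choices, not requirements of the system.

```python
def select_cycle(current, relaxation, advance_thresholds):
    """Select the cycle index based on a 0..1 relaxation score.

    advance_thresholds[i] is the relaxation level required to progress past
    cycle i. Progress one cycle when the current cycle's threshold is met;
    revert one cycle when relaxation drops below the previous threshold.
    """
    # Progress to the next cycle once the threshold for this cycle is met.
    if current < len(advance_thresholds) and relaxation >= advance_thresholds[current]:
        return current + 1
    # Revert to the previous cycle if relaxation falls below its threshold.
    if current > 0 and relaxation < advance_thresholds[current - 1]:
        return current - 1
    return current
```

For instance, with thresholds `[0.3, 0.5, 0.7]`, a user at cycle 0 with relaxation 0.4 progresses to cycle 1, while a user at cycle 2 whose relaxation drops to 0.4 reverts to cycle 1.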
In embodiments where the system is configured to receive breathing data for a user, e.g. from a breathing sensor, the position of visual stimuli is varied (e.g. within each cycle) in synchronisation with the user's breath.
Preferably, visual stimuli are provided at decreasing angular positions during a user's in-breath, and increasing angular positions during a user's out-breath (since the out-breath correlates to a relaxation of the user).
For example, for a cycle of operation comprising positions A and B and C, during a user's out-breath a wave of visual stimuli comprising positions A, B, C in turn may be provided, whilst during a user's in-breath a wave of visual stimuli comprising positions C, B, A in turn may be provided. Thus, the order of visual stimuli provided in synchronisation with the user's breath during that cycle of operation may be: A, B, C, B, A, B, C, B, A, etc...
Alternatively, positions could be skipped in the cycle (e.g. as described above), with the cycle for example having positions A, C, E in turn. In such a case, the order of visual stimuli provided in synchronisation with the user's breath during that cycle of operation may be: A, C, E, C, A, C, E, C, A, etc...
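The palindromic, breath-synchronised ordering above can be generated mechanically. The sketch below is illustrative only: it assumes each breath phase (in or out) advances the wave one full sweep, and the function name and list-of-letters representation are hypothetical.

```python
def breath_synced_order(positions, n_breaths):
    """Yield stimulus positions so each out-breath walks outward (e.g. A..E)
    and each in-breath walks back inward (E..A), sharing the turning points.
    Produces e.g. A, C, E, C, A, C, E, ... for positions ['A', 'C', 'E'].
    """
    seq = list(positions)
    out = []
    for breath in range(n_breaths):
        if breath == 0:
            out.extend(seq)            # first out-breath: full outward wave
        elif breath % 2 == 1:
            out.extend(seq[-2::-1])    # in-breath: walk back inward, without
                                       # repeating the shared endpoint
        else:
            out.extend(seq[1:])        # out-breath: walk outward again
    return out


print(breath_synced_order(["A", "C", "E"], 4))
# ['A', 'C', 'E', 'C', 'A', 'C', 'E', 'C', 'A']
```

This matches the A, C, E, C, A, C, E, C, A ordering given in the text for a cycle with skipped positions.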
An example training session using a user's level of relaxation to control the position of visual stimuli could progress, for example as below:
Example 4
* Start
* Cycle 1: A, A, A, A... etc.
* Detect user level of relaxation being greater than threshold relaxation associated with Cycle 1, and progress to Cycle 2
* Cycle 2: A, B, A, B, A, B, A, B... etc.
* Detect user level of relaxation being greater than threshold relaxation associated with Cycle 2, and progress to Cycle 3
* Cycle 3: A, B, C, B, A, B, C, B, A...
* Detect user level of relaxation falling within the relaxation range associated with Cycle 2, and revert to Cycle 2
* Cycle 2: A, B, A, B, ...
* Etc...
Preferably, the system is configured to control a colour of the visual stimuli provided based on the user's level of relaxation (e.g. breathing), with colours towards the redder end of the visual spectrum being provided when a user is less relaxed (breathing faster), and colours towards the blue/violet end of the visual spectrum being provided when a user is more relaxed (breathing slower).
Thus, in Example 4 above, visual stimuli in cycle 1 may be a red colour, cycle 2 may be an orange colour, cycle 3 a yellow colour, etc. (with cycles of positions yet further apart being green, blue, violet etc.).
Alternatively, the colour of the visual stimuli could be controlled based on (i.e. synchronised with) the user's in-breaths and out-breaths. For example, on the out-breath redder stimuli could be provided, and on the in-breath bluer stimuli provided.
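The red-towards-violet colour progression described above could be sketched as a simple relaxation-to-hue mapping. This is a hypothetical sketch: the 0..1 relaxation score, the function name, and the 0.8 upper hue bound (violet on the HSV wheel) are illustrative assumptions.

```python
import colorsys


def stimulus_hue(relaxation):
    """Map a 0..1 relaxation score to an HSV hue from red (least relaxed)
    through orange/yellow/green/blue towards violet (most relaxed).
    The 0.8 upper bound is an illustrative choice for violet.
    """
    return 0.8 * max(0.0, min(1.0, relaxation))


# A fully tense user (relaxation 0.0) maps to hue 0.0, i.e. pure red in RGB.
r, g, b = colorsys.hsv_to_rgb(stimulus_hue(0.0), 1.0, 1.0)
```

The resulting RGB triple could then drive the light elements of the head mounted device.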
Other qualities of the visual stimuli could also or alternatively change based on the user's level of relaxation (e.g. based on the user's breathing), such as a colour, intensity, texture, size, shape, or localised motion of the visual stimuli.
As noted above, the position of visual stimuli provided could alternatively be controlled based on a user's perceptiveness to visual stimuli provided during a training session. In embodiments, a user's perceptiveness is determined based on the user's accuracy in identifying target characteristics of the visual stimuli.
Figure 31 is a flow chart showing steps for adjusting the positions of visual stimuli during a training session based on a user's perceptiveness to visual stimuli.
Upon starting the training session (step 2901) the system provides visual stimuli within an initial range of one or more positions, with a target characteristic intermittently shown (step 2902).
The target characteristic could be any suitable and desired quality of the visual stimuli provided. In embodiments it is a target quality (e.g. a target colour, e.g. green) provided to the left, right or both monocular regions of a user's vision. Alternatively, the target characteristic could be a matched or mis-matched quality (e.g. colour) between visual stimuli provided to the left and right monocular regions of the user's vision.
The target characteristic is shown intermittently, so that the target quality (or matched, or mis-matched quality) occurs less often than other qualities (or mis-matched, or matched qualities). The target characteristic is preferably shown at times which are randomised so that a user cannot predict when it will occur.
For example, the quality (e.g. colour) of visual stimuli could be changed at regular intervals in time, but with the quality (e.g. colour) varied in a randomised manner (e.g. by selecting a weighted randomised colour). The system may be configured to change the quality (e.g. colour) in unison and/or differently for the left and right sides. For a percentage of the time, the quality (e.g. colour) on both right and left may match, and for a percentage of the time the quality (e.g. colour) may differ on the right and left.
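The weighted randomised selection of left/right qualities described above might look like the following. All names, the palette, and the probability values are illustrative assumptions; the disclosure only requires that the target quality occurs less often than other qualities and that left/right may match or differ.

```python
import random


def next_colours(palette, target="green", p_target=0.1, p_mismatch=0.2, rng=random):
    """Pick (left, right) colours for the next stimulus.

    Most of the time the left and right colours match; for a fraction of
    the time they differ; and the rarely-occurring target colour appears
    on one side or both. Probabilities are illustrative weights.
    """
    roll = rng.random()
    if roll < p_target:
        side = rng.choice(["left", "right", "both"])
        if side == "both":
            return target, target
        other = rng.choice(palette)
        return (target, other) if side == "left" else (other, target)
    colour = rng.choice(palette)
    if roll < p_target + p_mismatch:
        return colour, rng.choice(palette)   # possibly mismatched pair
    return colour, colour                    # matched left/right pair
```

Because the user cannot predict when the target colour will occur, responses to it can be used as the perceptiveness measure described below.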
For example, for a target characteristic which is the presence of a green colour, for a cycle of operation comprising waves through positions A and B and C, a sequence of visual stimuli provided to the left and right could comprise in turn: A (blue, blue), B (purple, purple), C (purple, green), D (blue, blue), A (blue, blue), B (yellow, yellow), C (green, green), D (blue, blue), A (blue, blue), B (purple, purple), C (blue, blue), D (blue, blue).
Here, A (green, green) indicates green stimuli being shown simultaneously to the left and right at position A, whereas for example A (purple, green) indicates purple to the left and green to the right at position A.

At step 2903, the system determines whether the user has correctly perceived the target characteristic. Preferably, the system is configured to receive a user response indicating that the user has perceived the target characteristic.
Preferably, the user response comprises a user pressing a button on a screen of a mobile device when the user believes they have seen the target characteristic (e.g. such as the button 1301 shown on the screen 1300 of Figure 13). The button should be large enough that the user can press it without having to direct their gaze away from the forwards direction.
Alternatively, any other suitable and desired user response could be used, e.g. a user actively (consciously) or passively (subconsciously) interacting with any suitable and desired input means of the system, e.g. a button or microphone or gesture detector or other input means.
Referring back to Figure 31, determining whether the user has correctly perceived the target characteristic (step 2903) may comprise determining whether the user has provided a response whilst the target characteristic is being shown (or within a particular time window after the characteristic has started being shown). Conversely, it may be determined that the user has not correctly perceived the target characteristic (step 2903) if the user provides a response whilst the target characteristic is not being shown (e.g. before the target characteristic is shown, without a target characteristic being shown, or after the target characteristic has stopped being shown), or outside the above-mentioned time window. One or more positions at which visual stimuli are provided are then adjusted based on whether the user has correctly perceived the target characteristic (step 2904).
In this regard, the one or more positions could be adjusted immediately in response to a correct (or incorrect) identification of a single occurrence of a target characteristic. Alternatively, the range of positions could be adjusted after a threshold number of correct (or incorrect) identifications, or based on the proportion of correctly (or incorrectly) identified target characteristics.
Preferably, adjusting one or more positions at which to provide visual stimuli based on the user's perceptiveness to visual stimuli comprises increasing the separation between visual stimuli when the user correctly identifies one or more occurrences of the target characteristic (and may conversely comprise decreasing the spacing between left and right visual stimuli when the user incorrectly identifies one or more occurrences of the target characteristic).
Similarly to the discussion above, for a training sequence which comprises one or more cycles of operations, increasing the spacing between visual stimuli may comprise increasing the closest and/or furthest spacing between left and right positions within a cycle of visual stimuli and/or adjusting the relative positions of visual stimuli provided during a cycle (e.g. compared to an immediately preceding cycle). Conversely, decreasing the spacing between visual stimuli may comprise decreasing the closest and/or furthest spacing between left and right positions within a cycle of visual stimuli and/or adjusting the relative positions of visual stimuli provided during a cycle (e.g. compared to an immediately preceding cycle).

An example training sequence controlled based on a user's perceptiveness to visual stimuli, with a target characteristic being a green colour, could progress for example as below:
Example 5
* Start
* Cycle 1: A (blue, blue), A (purple, purple), A (purple, blue), A (blue, green)
* Determine that user has correctly perceived green target characteristic, and progress to Cycle 2
* Cycle 2: A (purple, purple), B (blue, blue), A (blue, blue), B (blue, blue), A (blue, blue), B (green, blue)
* Determine that user has correctly perceived target characteristic, and progress to Cycle 3
* Cycle 3: A (blue, blue), B (blue, blue), D (blue, blue), A (blue, blue), B (blue, blue), D (green, blue)
* Determine that user has correctly perceived target characteristic, and progress to Cycle 4
* Cycle 4: A (blue, blue), C (purple, purple), D (purple, purple), A (blue, blue), C (blue, blue), D (blue, blue), A (purple, purple), C (purple, purple), D (blue, green)
* Determine that user has not perceived target characteristic, and revert to Cycle 3
* Etc...
Thus, if correct identifications continue to be made by the trainee, the range of positions for the cycles provided encompasses progressively wider positions such as B or C or D etc. up to position J, and/or the average angular position of visual stimuli increases. Conversely, after a configurable number (or proportion) of incorrect matches, the changes to positions of visual stimuli may pause and reverse so that stimuli move closer back together. Alternatively, the positions of visual stimuli could be changed irrespective of whether correct (or incorrect) identifications are received (such that step 2904 is omitted). In this regard, in embodiments, correct (or incorrect) identifications by the user may be recorded (e.g. and displayed on a screen of a mobile device during the training session, or communicated to the user after the session is complete as a training report), without being used to control the spacing of visual stimuli during the training session.
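The response-window check and the progression/reversion logic just described might be combined as follows. This is a hedged sketch: the 2-second response window, the streak counts, and all names are illustrative assumptions rather than parameters fixed by the disclosure.

```python
def correct_identification(response_t, target_shown_t, window_s=2.0):
    """True if the user responded while, or shortly after, the target
    characteristic was shown. window_s is an illustrative response window.
    target_shown_t is None when no target was on display."""
    return target_shown_t is not None and 0.0 <= response_t - target_shown_t <= window_s


def adjust_cycle(current, correct_streak, incorrect_streak, n_cycles,
                 advance_after=1, revert_after=3):
    """Widen the stimulus spacing (move to the next cycle) after
    `advance_after` correct identifications, and narrow it (revert to the
    previous cycle) after `revert_after` incorrect ones."""
    if correct_streak >= advance_after and current < n_cycles - 1:
        return current + 1
    if incorrect_streak >= revert_after and current > 0:
        return current - 1
    return current
```

Under this scheme, a single correct identification advances the trainee one cycle, while three misses in a row move the stimuli closer back together.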
The training session can be ended (step 2905) at any suitable and desired time. For example, the training session could end when a user indicates they wish to end the training session. Alternatively, the training session could end after a predetermined amount of training time has elapsed, or a particular number of correct identifications of the target characteristic have been made, or a particular set of one or more positions for the visual stimuli is reached (e.g. a set of one or more positions for visual stimuli which includes a furthest apart spacing of visual stimuli permitted by the training device).
Thus, as can be seen, in embodiments the progression of visual stimuli towards positions which are further apart is controlled based on the success of the trainee in correctly identifying specified characteristics of the visual stimuli.
Other features of the training sequence of visual stimuli could change in response to correct (or incorrect) identifications by the user, e.g. could change when the range or one or more positions changes. For example, the rate of change of the quality (e.g. colour) of visual stimuli, or the rate of provision of visual stimuli, the rate of occurrence of the target quality, could also increase in response to correct (or incorrect) identifications by the user.
Qualities (e.g. other than colour, e.g. such as pattern, texture, localised movement) could also change in response to correct (or incorrect) identifications by the user. For example, a degree of contrast between stimuli on the left and right could be changed, for example subtler shades of colour may be introduced in response to correct identifications.
The system may allow a user to select one or more parameters for the training session. For example, the user may select which quality (or qualities) are to be the target characteristics during a training session (e.g. allowing a user to select one or more target colours). The system may also be configured to receive a user selection as to the rate at which the target characteristics occur, and/or the rate at which the positions of the visual stimuli change. Alternatively, the user may be able to control various parameters (e.g. which quality (or qualities) are to be the target characteristics and/or a rate at which the target characteristics occur, and/or the rate at which the positions of the visual stimuli change) by selecting a training program from a plurality of pre-configured training programs (e.g. an 'energising' program or a 'relaxing' program).
During the training session, the system may keep a record of the user's perception of visual stimuli, which may be provided as a training report once a training session is complete. For example, as shown in Figure 13, a mobile device of the system may display an indication of the proportion of target characteristics correctly identified 1302, and the average time it took the user to identify each target characteristic 1303. Thus, the speed and accuracy of identification of target characteristics can be measured and recorded by a mobile app 1012. If the training session ends after a predetermined amount of training time, then a final (most outwards) position of the visual stimuli at the end of the training session may provide a metric to indicate the trainee's success rate.
Although changes in colour (or other quality) of visual stimuli are described above in the context of providing target characteristics for a user to identify to determine user perceptiveness, the system could change the quality of visual stimuli regardless of whether user perceptiveness is being monitored. The Applicant has recognised that, generally, changing the quality of visual stimuli throughout a training session may improve user attention and stop a user from losing interest. This may be particularly the case when changes in visual quality (e.g. colour) are randomised, so that a user cannot predict the quality (e.g. colour) that will next appear.
Whilst the visual quality (e.g. colour) can differ between the left and right visual stimuli provided simultaneously, the Applicant has found that left and right visual stimuli having the same qualities are more relaxing. Therefore, in embodiments left and right visual stimuli provided simultaneously preferably have identical qualities for a majority of the training session.
Regarding the position of the visual stimuli, in embodiments, the position at which the left and right visual stimuli are provided is changed in synchronisation with a rhythmic beat of a soundtrack, the soundtrack being provided e.g. by means of a suitable speaker, e.g. integrated into the head-mounted device. For example for a sequence of positions which is A, B, A, B, A, B, etc. each position may be provided on the beat of the soundtrack. Such synchronisation with a soundtrack may enhance the relaxing effect of the training sequence, and therefore facilitate relaxing of a user's gaze away from a central focus to a wider field of peripheral vision.
The system may permit the user to select a soundtrack for a training session, e.g. from a music library stored on a mobile device or a music streaming service. The soundtrack could be, for example, binaural beats, nature sounds, music or other soundtrack. The system may be configured to provide a sequence of visual stimuli based on the selected soundtrack, e.g. with slower tempo soundtracks being used for slower paced sequences (where positions and/or qualities of visual stimuli change less often) compared to faster tempo soundtracks which are used for faster paced sequences (where positions and/or qualities of visual stimuli change more often). For example, the soundtrack may form an integral role in the selection of the 'training programme' on an interface (e.g. of a mobile app), with the user being able to select a soundtrack, e.g. 'Relaxing Rainforest' or 'Upbeat Dance'.
As will be apparent from the above discussion, the technology described herein comprises systems and methods for relaxing a user's gaze away from a central focus to a wider field of peripheral vision. In addition to relaxing gaze and training peripheral vision, the use of such a system may provide a generally relaxing effect on the user. This is shown, for example, in Figure 30, which shows example brainwave data measured by an EEG device whilst a user is performing a training session in accordance with the present disclosure (in this case, the EEG device is a Muse™ 2 headband, and the data is graphed using "Mindmonitor" software). The graph in Figure 30 shows the relative strength of brain waves on the y (vertical) axis, normalised such that the total strength at any point is 1, and shows time in minutes on the horizontal (x) axis. Generally, Figure 30 shows that after starting a training session, alpha wave activity (associated with a more relaxed state of the user) increases, whilst beta wave activity (associated with a less relaxed state of the user) decreases.
Although the present disclosure has been described with reference to various embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the scope of the invention as set forth in the accompanying claims.

Claims (25)

  1. CLAIMS1. A system for relaxing gaze and/or training attention to peripheral vision comprising: a head mounted device configured to provide visual stimuli simultaneously to the left and right monocular regions of a user's peripheral vision.
  2. 2. The system of claim 1, wherein the head mounted device is configured to provide visual stimuli to the left and right monocular regions simultaneously at an equal angular position to the left and right from the centre of a user's vision.
  3. 3. The system of claim 2, wherein the head mounted device is configured to provide visual stimuli at a plurality of angular positions to the left and right from the centre of a user's vision.
  4. 4. The system of claim 1 or claim 2, wherein the head mounted device comprises one or more light elements, and is configured to activate the one or more light elements to provide the visual stimuli.
  5. 5. The system of claim 4, comprising a pair of elongate members, wherein the one or more light elements are provided on each elongate member of the pair.
  6. 6. The system of claim 5, wherein the pair of elongate members are formed integrally with or mountable to one or more of: a pair of over-head headphones, a headband, a hat, or a pair of glasses.
  7. 7. The system of any preceding claim, wherein when performing a training session, a spacing between the left and right visual stimuli provided simultaneously increases with increasing time, and/or based on a user response.
  8. 8. The system of claim 7, wherein when performing a training session, the head mounted device is configured to perform one or more cycles of providing visual stimuli, wherein in each cycle the visual stimuli are provided at one or more positions within a defined range of one or more positions, wherein the one or more positions at which visual stimuli are provided is permitted to vary between cycles by altering either or both of: a closest and/or a furthest spacing between left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  9. 9. The system of any preceding claim, wherein during a training session, the head-mounted device is configured to vary one or more qualities of the visual stimuli provided, the one or more qualities of the visual stimuli comprising one or more of: colour, intensity, texture, size, shape, or localised motion.
  10. 10. The system of claim 9, wherein the system is configured to set one or more target characteristics of the visual stimuli, the one or more target characteristics comprising a target quality for a visual stimulus or a mismatched quality between visual stimuli provided simultaneously; and wherein during a training session, the system is configured to: provide left and right stimuli having a target characteristic; receive a user input responsive to a user perceiving the target characteristic; determine whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, alter one or more positions at which left and right stimuli are provided by the head-mounted device.
  11. 11. The system of any preceding claim, wherein the system is configured to control one or more positions at which the left and right visual stimuli are provided by the head-mounted device based on an indicated or detected level of relaxation of a user.
  12. 12. The system of any preceding claim, comprising a breath sensor for detecting the breathing of a user; and wherein the system is configured to control one or more positions at which the left and right visual stimuli are provided by the head-mounted device based on the breathing of the user.
  13. 13. The system of any preceding claim, wherein the head-mounted device is configured to change a position at which the left and right visual stimuli are provided in synchronisation with a rhythmic beat of a soundtrack.
  14. 14. A method for relaxing gaze and/or training attention to peripheral vision comprising: providing visual stimuli simultaneously to the left and right monocular regions of a subject's peripheral vision.
  15. 15. The method of claim 14, comprising providing the visual stimuli by activating one or more visual elements of a training device, preferably wherein the one or more visual elements are one or more light elements of a head mounted device.
  16. 16. The method of claim 15, comprising positioning or configuring the one or more visual elements so as to present the visual stimuli to the left and right monocular regions of a subject's peripheral vision.
  17. 17. The method of claim 15 or 16, comprising providing the head mounted device as part of or mounting the head mounted device to any of: a pair of glasses, a hat, a headband, or a pair of over-head headphones.
  18. 18. The method of any of claims 14 to 17, comprising providing left and right visual stimuli simultaneously at an equal angular position to the left and right from the centre of the subject's vision, at a sequence of positions.
  19. 19. The method of any of claims 14 to 18, comprising during a training session, increasing a spacing between the left and right visual stimuli provided simultaneously with increasing time, and/or based on a user response.
  20. The method of any of claims 14 to 19, comprising during a training session, performing one or more cycles of providing visual stimuli, wherein in each cycle the visual stimuli are activated at one or more positions within a defined range of one or more positions, and comprising varying the one or more positions at which visual stimuli are provided between cycles by altering either or both of: a closest and/or furthest spacing between left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  21. The method of any of claims 14 to 20, comprising varying one or more qualities of the visual stimuli provided, the one or more qualities of the visual stimuli comprising one or more of: colour, intensity, texture, size, shape, or localised motion.
  22. The method of claim 21, comprising setting one or more target characteristics of the visual stimuli, the one or more target characteristics comprising a target quality for a visual stimulus or a mismatched quality between visual stimuli provided simultaneously; providing left and right visual stimuli having the target characteristic; receiving a user input responsive to a user perceiving the target characteristic; determining whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, altering one or more positions at which left and right stimuli are provided.
  23. The method of any of claims 14 to 22 comprising controlling one or more positions at which the left and right visual stimuli are provided based on an indicated or detected level of relaxation of a user, preferably based on a breathing of a user.
  24. The method of any of claims 14 to 23 comprising changing a position at which the left and right visual stimuli are provided in synchronisation with a rhythmic beat of a soundtrack.
  25. A computer program comprising computer software code for performing the method of any one of claims 14 to 16, and 18 to 24, when the program is run on one or more data processors.
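The adaptive sequence recited in claim 22 — present stimuli with a target characteristic, check whether the user perceived it, and widen the stimulus positions only on a correct response — can be illustrated with a minimal sketch. This is not code from the application; all names, the step size, and the spacing limits are hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch of the adaptive loop in claim 22.
# All identifiers and parameter values are hypothetical, not from the patent.

def next_spacing(spacing, correct, step=2.0, max_spacing=90.0):
    """Widen the left/right stimulus spacing (in degrees) after a correct
    perception of the target characteristic; hold it otherwise, as claim 22
    alters positions only when the user perceived correctly."""
    if correct:
        return min(spacing + step, max_spacing)
    return spacing

def run_session(responses, start=30.0):
    """responses: sequence of booleans, True when the user correctly
    perceived the target characteristic in that cycle. Returns the
    spacing used after each cycle."""
    spacing = start
    history = []
    for correct in responses:
        spacing = next_spacing(spacing, correct)
        history.append(spacing)
    return history
```

For example, with responses `[True, False, True]` and the default start of 30.0, the spacing progresses 32.0 → 32.0 → 34.0: it advances into the periphery only on correct trials, one simple way of realising the position alteration the claim describes.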
GB2300972.3A 2022-08-31 2023-01-23 Head mounted device and methods for training peripheral vision Pending GB2622119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2023/052230 WO2024047338A1 (en) 2022-08-31 2023-08-30 Head mounted device and methods for training peripheral vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB2212691.6A GB202212691D0 (en) 2022-08-31 2022-08-31 Head worn device and methods for training wide peripheral vision

Publications (3)

Publication Number Publication Date
GB202300972D0 GB202300972D0 (en) 2023-03-08
GB2622119A true GB2622119A (en) 2024-03-06
GB2622119A8 GB2622119A8 (en) 2024-03-27

Family

ID=83931845

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB2212691.6A Ceased GB202212691D0 (en) 2022-08-31 2022-08-31 Head worn device and methods for training wide peripheral vision
GB2300972.3A Pending GB2622119A (en) 2022-08-31 2023-01-23 Head mounted device and methods for training peripheral vision

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB2212691.6A Ceased GB202212691D0 (en) 2022-08-31 2022-08-31 Head worn device and methods for training wide peripheral vision

Country Status (1)

Country Link
GB (2) GB202212691D0 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4315502A (en) * 1979-10-11 1982-02-16 Gorges Denis E Learning-relaxation device
DE29820468U1 (en) * 1998-11-16 2000-04-06 Gelsen Karl Heinz Device for influencing the mental state
EP2075035A1 (en) * 2007-12-24 2009-07-01 Peter Carr Photic stimulation for eyes
CN201453585U (en) * 2009-06-01 2010-05-12 黄维克 Eye movement spectacle frame
WO2018183399A1 (en) * 2017-03-28 2018-10-04 Nextvr Inc. Methods and apparatus which use or include brain activity sensors


Also Published As

Publication number Publication date
GB202300972D0 (en) 2023-03-08
GB202212691D0 (en) 2022-10-12
GB2622119A8 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
US20230240599A1 (en) Sensory stimulation or monitoring apparatus for the back of neck
AU2018226818B2 (en) Methods and systems for modulating stimuli to the brain with biosensors
CN205597906U (en) Wearable physiological detector
US9872968B2 (en) Biofeedback virtual reality sleep assistant
JP3217017U (en) Wearable physiological examination equipment
US20200201434A1 (en) Bioresponsive virtual reality system and method of operating the same
US11000669B2 (en) Method of virtual reality system and implementing such method
US8511820B2 (en) Device to measure functions of the eye directly
TWM553987U (en) Glasses structure and glasses combination having physiological signal capture function
WO2016119665A1 (en) Wearable physiological detection device
TWI669102B (en) Wearable physiological detection device
WO2017125082A1 (en) Wearable physiological activity sensing device and system
CN204839505U (en) Wearing formula physiology detection device
WO2017125081A1 (en) Glasses-type physiological sensing device, glasses structure having physiological signal acquisition function, and glasses combination
US20230296895A1 (en) Methods, apparatus, and articles to enhance brain function via presentation of visual effects in far and/or ultra-far peripheral field
TWI631933B (en) Physiological resonance stimulation method and wearable system using the same
WO2024047338A1 (en) Head mounted device and methods for training peripheral vision
GB2622119A (en) Head mounted device and methods for training peripheral vision
CN204839483U (en) Wearing formula physiology detection device
TWI650105B (en) Wearable physiological detection device
TWI701016B (en) Multi-purpose physiological detection device
TW201626950A (en) Wearable electrocardiogram detector
TWM582375U (en) Multi-purpose physiological examination system
JP7039681B1 (en) Face wearer
TWI766826B (en) Eye mask for sleeping comfortably and control method thereof