WO2024047338A1 - Head mounted device and methods for training peripheral vision - Google Patents


Info

Publication number
WO2024047338A1
Authority
WO
WIPO (PCT)
Prior art keywords
visual stimuli
user
positions
visual
stimuli
Prior art date
Application number
PCT/GB2023/052230
Other languages
French (fr)
Inventor
Geoffrey FALK
Original Assignee
Falk Geoffrey
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB2212691.6A (GB202212691D0)
Application filed by Falk Geoffrey
Publication of WO2024047338A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00 Exercisers for the eyes
    • A61H2201/00 Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/16 Physical interface with patient
    • A61H2201/1602 Physical interface with patient, kind of interface, e.g. head rest, knee support or lumbar support
    • A61H2201/1604 Head
    • A61H2201/165 Wearable interfaces
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/0005 ... by the use of a particular sense, or stimulus
    • A61M2021/0022 ... by the tactile sense, e.g. vibrations
    • A61M2021/0027 ... by the hearing sense
    • A61M2021/0044 ... by the sight sense
    • A61M2209/00 Ancillary equipment
    • A61M2209/08 Supports for equipment
    • A61M2209/088 Supports for equipment on the body

Definitions

  • the present invention relates to devices for training relaxation of gaze by directing attention to peripheral vision.
  • the present invention provides a system for relaxing gaze and/or training attention to peripheral vision comprising:
  • a head mounted device configured to provide visual stimuli simultaneously to the left and right monocular regions of a user’s peripheral vision, wherein the device provides visual stimuli at an equal angular distance from the centre of a user’s vision on left and right sides, at a plurality of angular positions as measured on a horizontal plane.
  • the present invention provides a method for relaxing gaze and/or training attention to peripheral vision comprising:
  • the left monocular region of a user’s peripheral vision is the region which can be viewed by the left eye only (and not the right eye).
  • the right monocular region of a user’s peripheral vision is the region which can be viewed by the right eye only (and not the left eye). This is in contrast to the binocular region which can be viewed by both eyes.
  • Having a relaxed gaze and open awareness of events in the peripheral visual field is important for many activities where there is a need for heightened awareness of one's general surroundings as opposed to singular focus on a tight central point, or foveal vision.
  • sport activities may require an awareness of movement, shape and colour in the extremities of vision, for instance the movement of other players, whilst keeping gaze anchored on the main focus of attention, for instance the nearest opponent, net or ball.
  • a tracker in a combat or hunting situation may wish to keep an open visual awareness to detect small changes in colour or movement in the widest possible visual field.
  • NLP (Neuro-Linguistic Programming)
  • the Applicant has recognised that providing visual stimuli simultaneously to the left and right monocular regions at equal angular distance from the user’s central point of focus can assist with training the user’s peripheral vision whilst encouraging the user to keep a central focus.
  • since the left and right visual stimuli can only be seen by the left and right eye respectively, the user is unlikely to improve their perception of both stimuli simultaneously by shifting their gaze away from a central focus.
  • a central focus corresponds to a user’s focus being directed generally towards their centre of vision (i.e. in the forwards direction).
  • it is therefore not necessary to measure a user’s compliance with maintaining a central focus (e.g. by eye tracking or by a user self-reporting compliance), and accordingly in embodiments a user’s compliance with maintaining a central focus is not measured.
  • providing visual stimuli simultaneously to the left and right monocular regions comprises providing a visual stimulus to the left monocular region of a user’s vision at the same time as providing a visual stimulus to the right monocular region of the user’s vision, e.g. such that the respective time intervals at which the left and right visual stimuli are provided at least partially overlap, and in some embodiments fully overlap (e.g. such that the right and left stimuli are provided for exactly the same period of time).
  • the visual stimuli provided simultaneously to the left and right monocular regions of a user’s vision may also be referred to herein as a “pair of left and right visual stimuli” or a “pair of visual stimuli”.
  • the monocular region of vision comprises angular positions from about 60 to about 110 degrees from the centre of vision (in the left and right directions, for the left and right monocular regions respectively).
  • providing visual stimuli in the left and right monocular regions respectively of the user’s peripheral vision comprises providing visual stimuli at one or more angular positions to the left and right respectively from the centre of a user’s vision, the one or more angular positions preferably being from about 60 degrees and about 110 degrees from the centre of vision of the user in the left and right directions. This allows the widest possible area of peripheral vision to be trained.
  • the centre of vision can be taken as the direction pointing forwards from the bridge of the user’s nose.
  • the angular position of a visual stimulus in the left or right direction relative to the centre of a user’s vision is measured as the angle between the forwards direction from the bridge of the user’s nose, and the visual stimulus, as measured along a horizontal plane (i.e. being the angle along the horizontal meridian). Accordingly, the direction straight ahead (forwards) of the user corresponds to an angular position of zero degrees, and positions to the left and right have angular positions greater than zero degrees.
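The angular-position convention above can be sketched in code; the function names, the 60 to 110 degree band constants, and the example geometry below are illustrative assumptions, not taken from the application:

```python
import math

# Angular position measured on the horizontal plane from the forwards
# direction at the bridge of the nose (0 degrees = straight ahead).
MONOCULAR_MIN_DEG = 60.0   # assumed lower bound of the monocular band
MONOCULAR_MAX_DEG = 110.0  # assumed upper bound of the monocular band

def angular_position_deg(forward_mm: float, lateral_mm: float) -> float:
    """Angle (degrees) between the forwards axis and the stimulus.

    forward_mm: distance ahead of the nose bridge (negative = behind),
    lateral_mm: unsigned distance to the left or right of the nose bridge.
    """
    return math.degrees(math.atan2(abs(lateral_mm), forward_mm))

def in_monocular_region(angle_deg: float) -> bool:
    """True if the angular position falls within the monocular band."""
    return MONOCULAR_MIN_DEG <= angle_deg <= MONOCULAR_MAX_DEG

# A stimulus level with the eyes, 50 mm to the side and 20 mm ahead of
# the nose bridge, lies at about 68 degrees: inside the monocular band.
angle = angular_position_deg(forward_mm=20.0, lateral_mm=50.0)
```

Note that a stimulus directly level with the nose bridge (zero forwards distance) sits at exactly 90 degrees under this convention.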
  • a vertical position of the visual stimuli provided is close to the vertical position (height) of the user’s eyes, preferably being within 5 cm (above or below) of the vertical position of the user’s eyes, preferably within 2 cm, preferably within 1 cm (preferably as measured in the vertical direction from the bridge of the user’s nose, which generally aligns with the middle of a user’s eye).
  • the angular position of a visual stimulus provided in the left or right direction can be measured as above, by measuring the angle between the forwards direction from the bridge of the user’s nose, and the visual stimulus, along a horizontal plane (so as to measure the angular position along that horizontal plane which the visual stimulus lies directly above or below).
  • visual stimuli provided to the left and right monocular regions simultaneously are provided at an equal angular position to the left and right from the centre of a user’s vision (such that the left and right visual stimuli are provided at the same angular position as one another).
  • the left and right visual stimuli provided simultaneously are preferably provided at a same distance to the left and right of the bridge of the user’s nose, and at a same distance forwards or backwards of the bridge of the user’s nose as one another.
  • the Applicant has recognised in this regard, that providing visual stimuli simultaneously at an equal angular position to the left and right from the centre of the user’s vision can help the user to retain a relaxed centred gaze.
  • Providing visual stimuli at equal angular positions may be generally more relaxing than providing visual stimuli which are mis-matched in angular position.
  • visual stimuli provided to the left and right monocular regions simultaneously are provided at a same vertical position as one another.
  • Visual stimuli at a same vertical position may be generally more relaxing than visual stimuli which are mis-matched in vertical position.
  • the angular position of left and right visual stimuli provided simultaneously may vary.
  • the vertical position (height) of the left and right visual stimuli provided simultaneously may (also) vary.
  • each pair of left and right visual stimuli provided simultaneously could be provided at a same vertical position (height).
  • aspects of the present invention comprise a head mounted training device for providing visual stimuli.
  • the method of the invention may be performed using a training device such as a head mounted device.
  • the visual stimuli provided may be any suitable and desired stimuli which are visually discernible by a user.
  • Each visual stimulus preferably comprises provision of one or more of: a colour, intensity, texture, size, shape, localised movement or other visual quality by a visual element.
  • Different visual stimuli may be provided by changing one or more such qualities (e.g. colour) of the visual element.
  • the visual elements could be mechanical elements. However, preferably, the visual elements are light elements.
  • the training device (head mounted device) comprises one or more visual elements, more preferably comprising one or more light elements.
  • providing visual stimuli comprises activating one or more visual elements, preferably activating one or more light elements (of the head mounted device), e.g. at a desired angular position as to provide visual stimuli at that angular position.
  • Activating one or more light elements preferably comprises illuminating the one or more light elements (e.g. with a coloured light).
  • light elements may be particularly effective for providing visual stimuli in the left and right monocular regions of a user’s vision, as these can be readily discernible despite the user having low visual acuity in these regions.
  • the one or more light elements could be any suitable and desired light elements.
  • the one or more light elements could comprise a continuous light element (which spans a range of angular positions), for example such as a screen, e.g. an LCD or plasma screen or projection onto a screen.
  • providing a visual stimulus preferably comprises illuminating a portion of the continuous light element, e.g. in a particular colour, shape or pattern.
  • the one or more light elements could comprise discrete light elements (which are provided at discrete angular positions), for example such as individual or groups of lights, e.g. light emitting diode (LED) lights.
  • providing a visual stimulus preferably comprises illuminating one or more of the discrete light elements (by illuminating individual or groups of the discrete light elements).
  • the one or more light elements comprises one or more (variable colour) LED lights.
  • the one or more light elements are preferably activated (illuminated) to provide visual stimuli within the left and right monocular regions of a user’s vision, as discussed above.
  • the one or more light elements preferably span a range of (e.g. are provided at plural) angular positions within the left and right monocular regions of a user’s vision, preferably within about 60 to about 110 degrees from the centre of vision in the left and right directions. Accordingly, the one or more light elements preferably span a range of (e.g. are provided at plural) positions forwards and/or backwards relative to the bridge of the user’s nose.
  • the one or more light elements are present only within the left and right monocular regions of a user’s vision (and preferably are positionable so as to be present only within the left and right monocular regions of a user’s vision).
  • light elements could also be present outside of the left and right monocular regions (e.g. in the binocular region) of a user’s vision, but preferably light elements are not activated (visual stimuli are not provided) at angular positions outside the left and right monocular regions during a training session (in a training sequence of visual stimuli).
  • the system may be configurable to determine (the method may include determining) which angular positions (e.g. which discrete light elements) fall within the left and right monocular regions of a user’s vision, and during a training session (in a training sequence) activate light elements at those angular positions only.
  • the system may be configured to perform a calibration routine or receive a user input in order to identify the angular positions falling within the user’s left and right monocular region of vision, and accordingly determine which angular positions light elements should be activated at during a training session and/or for a training sequence.
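Such a calibration step might, as a sketch, map each discrete light element's angular position to an "active" set usable during a training session; the function name, band default, and the example element spacing below are assumptions for illustration:

```python
# Hypothetical calibration sketch: given the angular position of each
# discrete light element (e.g. from the device geometry) and the user's
# monocular band (from a calibration routine or user input), select which
# element indices may be activated in a training sequence.
def active_elements(element_angles_deg, band=(60.0, 110.0)):
    """Indices of elements whose angular position lies within the band."""
    lo, hi = band
    return [i for i, a in enumerate(element_angles_deg) if lo <= a <= hi]

# Ten LEDs spaced 10 degrees apart, from 30 to 120 degrees:
angles = [30 + 10 * i for i in range(10)]
usable = active_elements(angles)  # only elements inside the monocular band
```

Elements outside the band (here, those below 60 or above 110 degrees) would simply never be activated during the training sequence, matching the preference stated above.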
  • the discrete light elements preferably comprise an array of light elements, the light elements being provided at a plurality of left and right angular positions (when the training device is in a training position).
  • the array of discrete light elements could form a single row of light elements, having a same vertical position (height) such that each row extends horizontally, e.g. at a vertical position close to that of the user’s eyes.
  • the discrete light elements could form plural (e.g. two, three, or more, e.g. up to five) horizontally extending rows of light elements, each row at a different vertical position (height) close to the height of the user’s eyes.
  • other grouping or patterns of discrete light elements could be provided within the array of light elements.
  • the Applicant has found that visual elements, e.g. such as LED light elements can suitably be incorporated into a head mounted device.
  • a head mounted device can provide a compact and portable form for providing visual stimuli, which is accessible to every-day users. In this way, the head mounted device can be used in any environment throughout the day as desired to provide training of peripheral vision and relaxation of gaze.
  • the present device does not necessarily require large static components, such as PCs or cameras, or complex hardware which needs to be finely tuned in a laboratory setting.
  • the head mounted device can be of equal use to professional athletes who require short training sessions interspersed throughout the day, as well as to office workers who require a screen break and rest for their eyes after an intensive period of attention on screens, and would benefit from a short session where focus is softened and attention moved to the periphery.
  • the head mounted device may be any suitable and desired device which is configured to be mounted to a user’s head.
  • the head mounted device is mountable (indirectly) to a user’s head by (removably) mounting on an item of headwear, such as for example a pair of over-head headphones, a headband, a hat, or a pair of glasses or the like.
  • the visual elements (e.g. LED lights) are accordingly provided in proximity to a user’s head (and eyes), preferably within a distance of about 150 mm from a user’s left and right eyes respectively, preferably within a distance of about 100 mm, preferably within about 80 mm, preferably within about 70 mm. In embodiments the distance is from about 40 mm to about 60 mm.
  • one or more (or all) of the visual elements are provided (are configurable to be provided) at a distance of at least about 5 mm from the user’s left and right eyes respectively, preferably at a distance of at least about 10 mm, preferable at least about 20 mm, preferably at least about 30 mm.
  • the head mounted device comprises a pair of elongate members, wherein one or more visual elements (light elements) are provided on each elongate member of the pair.
  • the head mounted device comprises a left elongate member comprising one or more left visual elements, and a right elongate member comprising one or more right visual elements.
  • the one or more visual elements on an elongate member together form a visual display unit.
  • the pair of elongate members are preferably formed integrally with or mountable (attachable) to an item of headwear, such as one or more of: a pair of over-head headphones, a headband, a hat, a pair of glasses or the like.
  • the pair of elongate members may be mountable to a rim of and/or the arms of a pair of glasses.
  • the pair of elongate members are mountable or integrated within a brim of a cap.
  • the pair of elongate members are formed integrally with or are attachable to a pair of over-head headphones.
  • the term ‘elongate’ typically indicates that each elongate member is longer than it is wide (has a length which is greater than its width).
  • the visual display unit on each elongate member is preferably also elongate.
  • each elongate member (and visual display unit) is at least 2 times as long as it is wide (has a length which is at least twice its width), preferably at least 3 times as long as it is wide (has a length which is at least three times its width).
  • the length corresponds to the average length of the elongate member (or visual display unit) measured along the member (or visual display unit) from one end to the other.
  • the width corresponds to the average width of the elongate member (or visual display unit) measured across the member (or visual display unit) from one side to the other.
  • the length of the visual display unit is at least about 50 mm, preferably at least about 60 mm, preferably at least about 70 mm.
  • the length of the visual display unit may be less than about 150 mm, preferably less than about 100 mm.
  • the width of the visual display unit is at least about 1 mm, and in embodiments less than about 50 mm, preferably less than about 20 mm. For example, for a visual display unit having (only) a single row of LED lights, the width of the visual display unit may be about 6 mm, whereas for two rows of LED lights the width may be about 11 mm.
  • the length of each of the elongate members is, in embodiments, greater than or equal to the length of the visual display unit. In embodiments the length of each elongate member is up to about 150 mm, preferably up to about 130 mm. For example, for elongate members attached or mountable to a set of headphones, the length of the elongate members may be about 130 mm (however, other lengths could be used if desired).
  • the length and width are measured along (so as to include) the curve.
  • each elongate member comprises an attachment means by which the elongate member is attached to or mountable to an item of headwear.
  • the attachment means provides the point of attachment of the elongate member to the item of headwear.
  • the elongate members could be attached or mountable to an item of headwear in any other suitable and desired manner.
  • each elongate member may comprise a continuous visual element which extends along at least part of the elongate member.
  • each elongate member may comprise an array of discrete light elements (e.g. LEDs), the array of discrete light elements (e.g. LEDs) extending along at least part of the elongate member.
  • the one or more visual elements on the left and right elongate member are provided at the same relative positions along the elongate members (and thus can be activated to provide left and right visual stimuli at a same angular position as one another).
  • the one or more visual elements on the left and right elongate members are mirror images of one another.
  • when mounted on a user’s head in a training orientation, for performing a training session, the elongate members are preferably oriented so as to extend substantially horizontally (in a substantially horizontal plane), and are preferably positioned substantially at the height of the user’s eyes.
  • the one or more visual elements on each elongate member are positioned to allow provision of visual stimuli at a plurality of angular positions to the left and right of the centre of the user’s field of vision (and accordingly at a plurality of positions forwards and/or backwards of the bridge of the user’s nose).
  • the rows of light elements extend substantially horizontally.
  • each elongate member is curved (along its length), such that in the training configuration each elongate member is curved in the horizontal plane, so as to at least partly wrap around the user’s head. This may facilitate positioning one or more (or all) visual elements on the left and right elongate members at approximately a same distance from the user’s left and right eyes respectively (and in embodiments the elongate members are configured to do so), if desired for viewing comfort.
  • the pair of elongate members are positionable (moveable) so as to position the one or more visual elements within (and preferably only within) the left and right monocular regions of the user’s vision.
  • the elongate members are extendible and retractable (along their length) so as to alter the position of the one or more visual elements, e.g. via a telescopic mechanism or other suitable mechanism.
  • the pair of elongate members may be mountable at (and movable to) different positions (e.g. forwards and backwards) on a head worn device (e.g. different positions along the brim of a cap) so as to alter the position of the one or more visual elements.
  • the pair of elongate members may be rotatable (e.g. about or near their attachment means) and/or distortable (bendable) along at least part of their length (e.g. between the attachment means and the visual display unit). This adjustability allows the head mounted device to be adapted for providing visual stimuli in the monocular region, for example for users with different vision ranges and nose shapes.
  • the elongate members do not extend into the binocular region of the user’s vision.
  • there is a gap between the elongate members, preferably an angular gap (measured along the horizontal meridian from the bridge of the user’s nose) of at least 20 degrees, preferably at least 45 degrees, preferably at least 60 degrees, preferably at least 90 degrees (and preferably up to 120 degrees).
  • the pair of elongate members are movable relative to the head-worn device, so as to move the elongate members away from a training orientation (and preferably into a stowed orientation, preferably where the elongate members substantially cannot be seen by a user).
  • the elongate members may be rotatable upwards away from a training orientation when training is not being performed, and rotatable downwards into a training orientation when a training session is desired to be performed.
  • the elongate members may be configured to be rotated upwards to align with (e.g. to be stowed within) the head-band portion of the headphones.
  • the elongate members may additionally (or alternatively) be foldable, or otherwise collapsible into a smaller form when a training session is not being performed.
  • a pair of elongate members each comprising one or more visual elements (e.g. light elements) having one or more of the features discussed above, may provide a lightweight and adjustable means for providing visual elements to the left and right monocular regions of a user’s vision.
  • the elongate members may be less intrusive than, e.g., a conventional virtual reality or augmented reality display headset which is designed to fill the user’s entire field of vision. Accordingly, a user may be able to continue wearing the head-mounted device having the pair of elongate members for a desired period of time for the training session, and also between training sessions, without the pair of elongate members causing distraction or discomfort.
  • the methods and systems disclosed herein allow left and right visual stimuli to be provided simultaneously, preferably at a same (angular) position as one another.
  • the visual stimuli can preferably be provided at a range of one or more (angular) positions (e.g. by activating left and right visual elements at a desired (angular) position).
  • a sequence of left and right visual stimuli are provided.
  • the sequence may be referred to herein as a ‘training sequence’, since it is provided for the purpose of training the user’s peripheral vision.
  • left and right visual stimuli provided simultaneously are provided at a plurality of (angular) positions in turn (and accordingly preferably at a plurality of positions in the forwards and/or backwards direction, relative to the position of the bridge of the user’s nose).
  • the plurality of positions at which visual stimuli are provided form a sequence of positions.
  • One or more (different) visual stimuli may be provided at each (angular) position, in the training sequence, for example by changing one or more qualities of visual stimuli provided at a position and/or among the positions.
  • one or more qualities of visual stimuli provided are permitted to vary (and preferably do vary).
  • a user may (the system may be configured to allow a user to) control various parameters for the training sequence (e.g. the positions at which visual stimuli are provided and/or the qualities of the visual stimuli to be provided). This may allow fine, granular, user control of the training sequence.
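As an illustration of such granular control, the user-adjustable parameters might be gathered into a single configuration object; the field names and default values below are hypothetical, not taken from the application:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of user-controllable training-sequence parameters:
# the angular positions at which paired stimuli are provided, the stimulus
# qualities (here, colours) permitted to vary, and a per-stimulus duration.
@dataclass
class TrainingParameters:
    positions_deg: list = field(default_factory=lambda: [60.0, 75.0, 90.0, 105.0])
    colours: list = field(default_factory=lambda: ["blue", "green", "purple"])
    stimulus_seconds: float = 1.0

# A user narrowing the colour palette while keeping the default positions:
params = TrainingParameters(colours=["green", "blue"])
```

A more abstracted interaction (selecting a named training program, as described below) could simply map each program to a preset of this kind.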
  • a user may interact with the system at a (more) abstracted level.
  • a user may (the system is configured to allow a user to) select a training program from a plurality of training programs.
  • Each training program may comprise (differ in) one or more ‘training’ sequences of visual stimuli that it provides (e.g. with respect to the order of positions and/or the qualities of stimuli provided, e.g. as will be described in more detail below).
  • a training program could comprise a plurality of different ‘training sequences’, each forming an ‘exercise’ for training the user.
  • the training sequence(s) of a training program could be (and in embodiments are) provided with a (particular) soundtrack and/or with a sequence of (e.g. audio) instructions (e.g. directing the user to interact with the system in a particular way during the training sequence).
  • the training program could be an energising program choreographed to upbeat dance music, or a relaxing programme choreographed to forest sounds, or a session choreographed to a recorded (e.g. meditative) instruction soundtrack.
  • the one or more qualities of the visual stimuli which are varied may comprise one or more of: a colour, intensity, texture, size, shape, localised motion of the visual stimulus.
  • a ‘texture’ of a visual stimulus may correspond to a texture or pattern of light formed by the light element.
  • Intensity may be a colour intensity (saturation) and/or a brightness of a light element when activated.
  • Localised motion may be motion about (e.g. centred) on a particular position.
  • a quality (and in embodiments the only quality) of the visual stimuli which is permitted to vary is a colour.
  • one or more qualities may be permitted to differ (be mis-matched) between left and right visual stimuli provided simultaneously.
  • the system may be configured to control the one or more qualities of the left and right visual stimuli independently.
  • a left light element may be activated to be a particular colour (e.g. green), whilst a right light element may be activated to be a different colour (e.g. blue).
  • the left and right visual stimuli provided simultaneously are preferably provided at a same angular position as one another.
  • the Applicant has found that providing left and right visual stimuli having a same quality (and preferably having identical qualities) as one another (e.g. having the same colour) is generally more relaxing than having mis-matched qualities (e.g. having different colours).
  • the left and right visual stimuli are provided with one or more (or preferably all) qualities being the same (e.g. having a same colour).
  • left and right visual stimuli with the same quality (or qualities) are provided more often than left and right visual stimuli with a differing quality (or qualities).
  • a user is permitted to choose (the system is configured to receive a user selection for) the one or more qualities which are to be varied during a training session.
  • the user may be permitted to choose that colour is to be varied, and to choose which colours are to be provided.
  • a user could select, e.g., blue, green, and purple visual stimuli to be provided (and not red and orange visual stimuli).
  • one or more qualities of the visual stimuli may depend on a training program selected by the user (e.g. an ‘energising’ program or a ‘relaxing program’).
  • one or more of the qualities of (e.g. the colour of) visual stimuli provided vary randomly (e.g. being selected according to a weighted random selection). In this way, the quality (e.g. colour) which is to be provided is not predictable by the user, which may improve user attention when using the device.
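The weighted random selection of a stimulus quality mentioned above can be sketched as follows. This is a minimal illustration only; the palette, weights, and function name are assumptions made for the example, not values defined by this disclosure:

```python
import random

def choose_colour(colours, weights, rng=random):
    """Select the next stimulus colour by weighted random choice,
    so that the colour provided is not predictable by the user."""
    return rng.choices(colours, weights=weights, k=1)[0]

# Illustrative palette and weights (assumed, not specified herein):
palette = ["green", "blue", "purple"]
weights = [5, 3, 2]
next_colour = choose_colour(palette, weights)
```

A user-selected palette (e.g. excluding red and orange, as in the example above) simply restricts the list of colours passed in.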
  • each visual stimulus is provided for a discernible period of time, to allow the user to perceive the visual stimulus.
  • each visual stimulus is provided for a period of at least about 0.1 seconds, preferably at least about 0.5 seconds, preferably at least about 1 second.
  • each visual stimulus (of the training sequence) is provided for at most 60 seconds, preferably at most about 20 seconds, preferably at most about 10 seconds, preferably at most about 5 seconds (such that the user does not lose attention to the visual stimuli).
  • each visual stimulus is provided for a time from about 0.5 seconds to about 20 seconds.
  • the amount of time that a visual stimulus is provided for is the same for each visual stimulus (in the ‘training’ sequence).
  • the quality (or qualities) of visual stimuli preferably change at regular intervals in time. This can provide a relaxing effect.
  • the amount of time that a visual stimulus is provided for may be permitted to vary (varies), e.g. varying randomly (however, left and right visual stimuli presented simultaneously will preferably be provided for the same amount of time as each other).
  • the amount of time that a visual stimulus is provided for could be selected according to a weighted randomised amount of time. In that case, the amount of time for which a visual stimulus is presented will not be predictable by the user, which may help to improve user attention when using the training device.
  • the user is able to control (select) a duration of the visual stimuli. In embodiments, this is achieved by a user controlling (selecting) the amount of time between (rate of) visual stimuli. This duration (or rate) may be selected as desired by the user, for comfortable use of the system.
  • the user may be able to control a duration of the visual stimuli by selecting a training program (e.g. an ‘energising’ program or a ‘relaxing program’) and/or by selecting a rhythmic (e.g. musical) soundtrack to be provided with the training sequence (wherein the system may be configured to provide the visual stimuli in synchronisation with the beat of the soundtrack).
  • the system is preferably configured to receive a user input for controlling the duration (or rate) of visual stimuli.
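Where stimuli are synchronised with the beat of a rhythmic soundtrack, the onset times could be derived from the tempo. A minimal sketch, assuming one stimulus per beat (the function name and the one-per-beat mapping are illustrative assumptions):

```python
def stimulus_onsets(bpm, n_stimuli):
    """Onset times (in seconds) for visual stimuli synchronised to a
    soundtrack at `bpm` beats per minute, one stimulus per beat."""
    beat = 60.0 / bpm  # seconds per beat
    return [i * beat for i in range(n_stimuli)]
```

For a 120 bpm soundtrack this yields an onset every 0.5 seconds, so the rate (and hence duration) of stimuli follows from the user's choice of soundtrack or training program.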
  • a spacing between the left and right visual stimuli provided simultaneously increases with increasing time, and/or based on a user response.
  • the spacing between the left and right visual stimuli provided simultaneously preferably corresponds to the angular spacing (angular distance) between the left and right visual stimuli as measured to the left and right from the centre of the user’s vision (from the bridge of the user’s nose, along the horizontal meridian).
  • the spacing between the left and right visual stimuli thus corresponds to the sum of the angular positions of the left and right visual stimuli.
  • increasing the spacing between left and right visual stimuli comprises providing left and right stimuli which are further apart from one another along the horizontal meridian.
  • increasing the spacing between the left and right visual stimuli provided comprises increasing the angular position of the left and right stimuli (and correspondingly providing left and right stimuli which are further backwards).
  • decreasing the spacing between the left and right visual stimuli provided preferably comprises decreasing the angular position of the left and right stimuli (and correspondingly providing left and right stimuli which are further forwards).
  • the Applicant has recognised that, during the course of a training session, the user may become more relaxed and may become receptive to left and right visual stimuli which are deeper within their peripheral vision (and accordingly at wider angles, and further backwards within the peripheral vision). By increasing the spacing between the left and right visual stimuli, an increasingly wide visual field of the user can be trained.
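The spacing defined above (the sum of the angular positions of the left and right stimuli, measured from the centre of the user's vision along the horizontal meridian) can be expressed directly; the function name is an assumption for illustration:

```python
def stimulus_spacing(left_angle, right_angle):
    """Angular spacing between simultaneously provided left and right
    stimuli: the sum of their angular positions, each measured outward
    from the centre of the user's vision along the horizontal meridian."""
    return abs(left_angle) + abs(right_angle)
```

For example, left and right stimuli each at 60 degrees (the closest monocular positions discussed below) give a spacing of 120 degrees.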
  • increasing the spacing between (the angular position of) the left and right visual stimuli provided is performed in a defined, preferably predetermined manner (automatically, without receiving user input during the training session).
  • the spacing between the left and right visual stimuli provided may be increased according to a defined (e.g. predetermined) sequence of positions.
  • the predetermined sequence of positions may progress from a predetermined initial (minimum) angular position of the left and right visual stimuli, to a final (maximum) angular position of the left and right visual stimuli, e.g. according to a predetermined pattern of positions.
  • a user may be able to select in advance of a training session (the system is configured to receive a user input for) one or more of: a minimum angular position, a maximum angular position, and a pattern of positions (for example by the user selecting these parameters directly, or by selecting a desired training program).
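A predetermined sequence progressing from a minimum to a maximum angular position, as described, might be generated as follows (the fixed-step progression is one assumed example of a "pattern of positions"):

```python
def position_sequence(min_angle, max_angle, step):
    """Predetermined sequence of angular positions progressing from an
    initial (minimum) to a final (maximum) angular position in fixed
    steps, used to widen the stimulus spacing over a session."""
    positions = []
    angle = min_angle
    while angle <= max_angle:
        positions.append(angle)
        angle += step
    return positions
```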
  • the spacing between (angular position of) the left and right visual stimuli could be increased based on a user input during the training session.
  • the spacing between (angular position of) the left and right visual stimuli could also be decreased based on a user input during the training session.
  • the user input which is used to increase and/or decrease spacing between left and right visual stimuli may comprise an active (conscious) user input, comprising a user actively interacting with the system, e.g. to select appropriate parameters.
  • the user input may comprise a passive (subconscious) user input, for example an input detected by a suitable sensor.
  • the user input may comprise a user selecting (e.g. adjusting) one or more positions at which the user desires visual stimuli to be provided, and the system may accordingly provide visual stimuli at positions among those one or more positions.
  • the user input could be a sensed or user-reported level of relaxation of the user, and/or an input indicative of the user’s perceptiveness to the visual stimuli.
  • the spacing between (angular position of) left and right visual stimuli is preferably increased when it is determined that the user has a higher level of relaxation and/or better perceptiveness to the visual stimuli (and conversely the spacing between left and right visual stimuli is preferably decreased when it is determined that the user has a lower level of relaxation and/or worse perceptiveness to the visual stimuli).
  • Other user input(s) could also or instead be used, if desired.
  • increasing (or conversely decreasing) the spacing between the left and right visual stimuli comprises providing a pair of left and right stimuli which have a larger (or conversely smaller) spacing compared to one or more previous (preferably immediately preceding) pairs of left and right stimuli.
  • increasing (or decreasing) the spacing between left and right visual stimuli is done gradually such that there is an overall trend of increasing (or decreasing) the spacing between left and right visual stimuli.
  • the positions at which left and right visual stimuli are provided could be determined (selected) on a weighted basis, and increasing the spacing between left and right visual stimuli could comprise increasing the weighting (and therefore the rate of occurrence) of positions which have a larger angular spacing (and are positioned further backwards). Conversely, decreasing the spacing between left and right visual stimuli could comprise increasing the weighting of positions which have a smaller angular spacing (and are positioned further forwards).
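Determining positions on a weighted basis, with the weighting of wider positions increased over time, could be sketched like this. The linear weighting scheme and the `bias` parameter are assumptions for the example:

```python
import random

def weighted_position(angles, bias, rng=random):
    """Select an angular position on a weighted basis. With bias = 0
    all positions are equally likely; increasing `bias` raises the
    weighting (and so the rate of occurrence) of wider, further-back
    positions, gradually increasing the left/right spacing."""
    smallest = min(angles)
    weights = [1.0 + bias * (a - smallest) for a in angles]
    return rng.choices(angles, weights=weights, k=1)[0]
```

Decreasing the spacing would conversely weight the smaller angles more heavily.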
  • a gradual increase or decrease in the spacing between left and right visual stimuli is achieved by providing one or more cycles of visual stimuli (by performing one or more cycles of operation), wherein in each cycle visual stimuli are provided at one or more positions within a defined range of one or more positions.
  • the position(s) at which left and right stimuli are provided is permitted to vary (can be altered) between cycles of activation, preferably by altering either or both of: the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  • varying the position(s) at which visual stimuli are provided comprises altering the position(s) for a cycle compared to a previous (preferably immediately preceding) cycle of visual stimuli.
  • a spacing between left and right visual stimuli can be increased or decreased in graduated steps by changing position(s) for visual stimuli across one or more cycles. The Applicant has found that this allows a user to soften their gaze gradually, promoting a heightened sense of relaxation and calm.
  • increasing (or conversely decreasing) the spacing between left and right visual stimuli comprises either or both of: increasing (or conversely decreasing) the closest and/or furthest spacing of left and right visual stimuli within the range of one or more positions for a cycle; or increasing (or conversely decreasing) an (angular) position of one or more of the position(s) at which visual stimuli are provided within the range of one or more positions for a cycle.
  • the defined range of one or more positions for a cycle of visual stimuli preferably comprises a range of one or more angular positions (and accordingly a range of positions in the forwards and/or backwards directions).
  • the range of one or more positions for a cycle comprises a range of one or more left positions for left visual stimuli, and one or more right positions for right visual stimuli.
  • the range of left and right position(s) are a mirror image of one another relative to the centre of the user’s vision, preferably such that left and right visual stimuli provided at positions in the range can be (and are) provided simultaneously at an equal angular position relative to the centre of the user’s vision.
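The mirrored left and right ranges for a cycle can be represented with signed angles (negative for left of centre, positive for right — a representational convention assumed for this sketch):

```python
def mirrored_range(angles):
    """Left and right position ranges for a cycle, mirror images of one
    another about the centre of the user's vision, so that left and
    right stimuli can be provided simultaneously at equal angular
    positions relative to the centre."""
    left = [-a for a in angles]   # left of centre
    right = list(angles)          # right of centre
    return left, right
```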
  • the left and right visual stimuli are preferably provided by activating appropriate visual element(s), e.g. such as those visual element(s) described above.
  • the range of one or more positions for a cycle preferably encompasses one or more discrete visual elements (at one or more different angular positions).
  • the system may be configured to provide visual stimuli at each and every possible (angular) position within the range of one or more positions for a cycle (e.g. to activate each and every discrete visual element falling in the range during a cycle), and in embodiments this is done.
  • visual stimuli may be provided at position(s) which are (all) adjacent one another.
  • visual elements may be provided at one or more positions within the range of positions comprising (forming) a sub-set of the possible positions in the range (e.g. such that a sub-set of visual elements falling in the range are activated during a cycle).
  • one or more positions in the range of positions for a cycle may be skipped and no visual stimuli provided at those positions.
  • one or more visual stimuli may be provided at (angular) positions which are spaced apart from one another (in the left direction or the right direction respectively for left or right visual stimuli).
  • one or more positions at which visual stimuli are provided may be varied between cycles of visual stimuli by altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  • This may comprise adding or removing one or more positions at which visual elements are provided.
  • this may comprise altering (e.g. adding or removing) one or more positions in the range of positions for a cycle which are skipped and at which no visual stimuli are provided.
  • this may comprise increasing a spacing between the (angular) positions of one or more of the (respective left or right) visual stimuli provided in the cycle.
  • one or more positions at which visual stimuli are provided may additionally or alternatively be varied between cycles of visual stimuli by altering the closest and/or furthest (angular) spacing of left and right visual stimuli in the range of one or more positions forming a cycle.
  • altering the closest spacing comprises altering a smallest angular position within the range of left and/or right positions (altering the furthest forward position)
  • conversely altering the furthest spacing comprises altering the largest angular position within the range of left and/or right positions (altering the furthest backwards position).
  • the range of position(s) for a first cycle of operation may comprise a closest possible spacing in the monocular region (of the possible positions at which the system is able to provide left and right visual stimuli within the monocular region), e.g. corresponding to an angular position of about 60 degrees.
  • the range of position(s) for later cycles of operation may comprise position(s) which are further apart.
  • the defined ranges of one or more positions for different (e.g. successive) cycles of visual stimuli could be non-overlapping, or could overlap.
  • the discrete visual element(s) which fall within the range of position(s) for different (e.g. successive) cycles could include none, or one or more of the same discrete visual elements.
  • each range of one or more positions forming a cycle could have a same closest spacing between left and right visual stimuli, but could differ in the furthest spacing between left and right visual stimuli.
  • each range of one or more positions could have a different closest spacing between left and right visual stimuli and a different furthest spacing between left and right visual stimuli compared to a preceding cycle.
  • Other permutations are also possible.
  • increasing (or decreasing) the spacing between left and right visual stimuli may be achieved by increasing (or decreasing) the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle.
  • Increasing (or decreasing) the spacing between left and right visual stimuli may also (or instead) be achieved by altering the one or more positions at which visual stimuli are provided within a cycle.
  • increasing (or conversely decreasing) the spacing between left and right visual stimuli is achieved by moving one or more visual stimuli to a larger (or conversely smaller) angular position within the range of positions for the cycle.
  • increasing (or conversely decreasing) the spacing between left and right visual stimuli comprises increasing (or conversely decreasing) the average angular position of visual stimuli within the cycle (wherein the average angular position of visual stimuli can be calculated as the sum of the magnitude of the angular positions at which left and right visual stimuli are provided during a cycle, divided by the number of positions at which visual stimuli are provided during a cycle).
  • increasing (or conversely decreasing) the spacing between left and right visual stimuli between cycles comprises increasing (or conversely decreasing) the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle, and also increasing (or conversely decreasing) the spacing between the positions of one or more of the visual stimuli provided in the cycle. This may have an overall effect of widening (or conversely narrowing) the cycle. Other variations for altering the positions at which visual stimuli are provided are also possible.
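The average angular position defined above can be computed directly from a cycle's signed stimulus positions:

```python
def average_angular_position(positions):
    """Average angular position of stimuli in a cycle: the sum of the
    magnitudes of the angular positions at which left and right visual
    stimuli are provided, divided by the number of positions."""
    return sum(abs(p) for p in positions) / len(positions)
```

A cycle with stimuli at ±60 and ±80 degrees has an average angular position of 70 degrees; moving stimuli to wider positions raises this average, i.e. increases the spacing.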
  • left and right visual stimuli are provided according to a sequence of positions.
  • left and right visual stimuli are provided at a sequence of positions of progressively increasing spacing (are provided at progressively increasing angular positions, and accordingly progressively further back), the sequence preferably progressing from a closest spacing (smallest angular position, furthest forward position) in the range of one or more positions to a furthest spacing (largest angular position, furthest backwards position) in the range of one or more positions.
  • a ‘wave’ of visual stimuli of increasing spacing is provided.
  • the stimuli will have preferably reached the furthest extreme of peripheral vision that the user desires or that the program dictates at that time.
  • the sequence of visual stimuli provided at increasing positions may be repeated one or more times within a cycle.
  • the Applicant has recognised that, regardless of whether the spacing of visual stimuli is increased or decreased between successive cycles, by providing ‘waves’ of visual stimuli which increase in spacing within each cycle, a relaxing effect which encourages user awareness to the peripheral vision can still be achieved.
  • visual stimuli could be provided at each and every possible position within the defined range of position(s) for the cycle, or at a selection of positions within the range. In either case, the ‘wave’ of stimuli may progress through the relevant positions at which visual stimuli are to be provided in the cycle.
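A "wave" of stimuli of increasing spacing within a cycle, as described, could be ordered like this (pairing mirrored left/right positions is an assumption carried over from the mirrored-range embodiments above):

```python
def wave(cycle_positions, repeats=1):
    """A 'wave' of simultaneous left/right stimulus pairs at
    progressively increasing spacing: from the furthest-forward
    (smallest angle) to the furthest-back (largest angle) position in
    the cycle's range, optionally repeated within the cycle."""
    ordered = sorted(cycle_positions)
    return [(-a, a) for a in ordered] * repeats
```

Whether the cycle includes every possible position or only a selection, the wave simply progresses through whichever positions are to be provided.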
  • a user perceptiveness to visual stimuli is determined and is used to increase (or decrease) the spacing between left and right visual stimuli.
  • This increase (or decrease) in spacing may be done in any suitable and desired manner, e.g. such as using cycles of stimuli as described herein.
  • Determining a user’s perceptiveness to the visual stimuli may alternatively be advantageous in its own right, without being used to increase or decrease the spacing between left and right visual stimuli (which may proceed, for example according to a predefined sequence of positions, or may for example be responsive to a different user input, e.g. indicative of a level of user relaxation).
  • a user’s perceptiveness is determined (the system is configured to determine the user’s perceptiveness) based on a user identifying a target characteristic of left and/or right visual stimuli provided.
  • the target characteristic preferably comprises a target quality for a visual stimulus, a matched (identical) quality between left and right visual stimuli provided simultaneously, or a mismatched quality between left and right visual stimuli provided simultaneously.
  • the target, matched, or mismatched quality may be any one or more of the qualities of visual stimuli described above.
  • a target quality could be a particular colour (e.g. green) visual stimulus.
  • a matched quality could be a matched colour (e.g. a green left visual stimulus provided simultaneously with a green right visual stimulus).
  • a mis-matched quality could be a mis-matched colour (e.g. a green left visual stimulus provided simultaneously with a blue right visual stimulus).
  • a user is permitted to choose (the system is configured to receive a user selection for) one or more target characteristics for the visual stimuli.
  • the user may be permitted to choose a target quality (e.g. a green colour) or a quality which is to be matched or mis-matched (e.g. a colour being matched or mis-matched, rather than e.g. a shape).
  • left and right visual stimuli having the one or more target characteristics are provided (for example, being provided one or more times within a ‘training sequence’ of visual stimuli).
  • visual stimuli having the target characteristic(s) are shown less often than visual stimuli not having the target characteristic(s).
  • visual stimuli having the one or more target characteristic(s) are shown intermittently, such that the time between occurrences of the target characteristic(s) is variable and preferably randomised such that occurrences of the target characteristic(s) are not predictable by a user.
  • the Applicant has recognised that varying the time between occurrences of the target characteristic may improve user attention when using the training system.
  • a user is permitted to select (the system is configured to receive a user selection for) a rate at which the one or more target characteristics appear (e.g. so as to select a rate which is comfortable and relaxing for the user). In embodiments, this is achieved by a user controlling (selecting) the amount of time between (rate of) target characteristics.
  • the rate of provision of target characteristics may vary based on a training program selected by the user (e.g. being relatively less frequent for a ‘relaxing’ program, and relatively more frequent for an ‘energising’ program).
  • the system is preferably configured to receive (comprises a user input means for receiving) a user input indicative of whether a user has perceived a target characteristic. It is then determined whether the user has correctly perceived the target characteristic. In embodiments, if the user has correctly perceived the target characteristic, then the position or range of positions at which left and right stimuli are provided by the head-mounted device are altered.
  • the user input comprises a response (if a user response is received) indicating that the user has perceived the target characteristic within a predefined period of time after the target characteristic has started being shown.
  • the predefined period of time in embodiments corresponds to the amount of time for which the visual stimulus is provided (such that it is determined that a user has correctly perceived a target characteristic if the user input comprises a response whilst the target characteristic is being shown).
  • the predefined period of time could be longer or shorter than the period of time for which the target characteristic is shown.
  • the predefined period of time could be less than about 10 seconds, or less than about 5 seconds, or less than about 2 seconds, or less than about 1 second from the target characteristic starting being shown.
  • the user response may comprise a user identifying (confirming) that a target characteristic has occurred. If there are plural target characteristics (e.g. a blue colour, and a purple colour), correctly perceiving a target characteristic could require the user to provide a response (and correspondingly receiving a user response) which correctly identifies which of the plural target characteristics were shown (e.g. which of blue or purple were shown).
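Determining whether a response fell within the predefined period after the target characteristic started being shown reduces to a simple timing check (timestamps in seconds; the function name is illustrative):

```python
def correctly_perceived(shown_at, response_at, window):
    """True if the user's response arrived within the predefined
    period of time (`window` seconds) after the target characteristic
    started being shown."""
    return 0.0 <= (response_at - shown_at) <= window
```

Setting `window` equal to the stimulus duration implements the embodiment in which a response counts only while the target characteristic is being shown.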
  • a spacing between the left and right visual stimuli provided simultaneously is increased.
  • the spacing of the left and right visual stimuli could be decreased. Increasing/decreasing the spacing of the left and right visual stimuli may be done, for example, in any of the ways described above (e.g., by changing a range of one or more positions forming a cycle of visual stimuli).
  • the system is configured to: receive a user input in response to a user perceiving a target characteristic; determine whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, alter one or more positions at which left and right stimuli are provided.
  • altering one or more positions at which visual stimuli are provided comprises altering the range of one or more positions forming a cycle of visual stimuli provided and/or altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  • the system is also configured to determine whether the user has incorrectly perceived the target characteristic, and to alter one or more positions at which left and right stimuli are provided correspondingly (preferably by altering the range of one or more positions forming a cycle of visual stimuli provided, and/or altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle).
  • left and right visual stimuli could be altered immediately in response to a user correctly (or incorrectly) perceiving the target characteristic, such that it is altered based on a single occurrence of the target characteristic.
  • the spacing of left and right visual stimuli could be altered after a predetermined (e.g. threshold) number of (e.g. successive) correctly or (e.g. successive) incorrectly perceived occurrences of a target characteristic, or responsive to the proportion of correctly or incorrectly perceived target characteristic occurrences (e.g. corresponding to a success rate of the user).
  • This may allow a more subtle change to the spacing of the left and right visual stimuli, such that the spacing of the left and right visual stimuli is changed in a way that does not immediately follow a single correct (or incorrect) perceived target characteristic.
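Altering the spacing only after a threshold number of successive correct (or incorrect) perceptions, rather than after every single response, could be sketched as follows (the threshold and step values are illustrative assumptions):

```python
def adjust_spacing(spacing, results, threshold=3, step=5):
    """Widen the left/right spacing after `threshold` successive
    correct perceptions, narrow it after `threshold` successive
    incorrect ones, and otherwise leave it unchanged, so that no
    single response visibly changes the stimuli.
    `results` is the history of booleans, most recent last."""
    recent = results[-threshold:]
    if len(recent) == threshold and all(recent):
        return spacing + step          # widen after a run of successes
    if len(recent) == threshold and not any(recent):
        return max(0, spacing - step)  # narrow after a run of failures
    return spacing
```

A proportion-based variant would instead compare the user's overall success rate against upper and lower thresholds.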
  • a user is unlikely to associate their individual responses with changes to the spacing of visual stimuli, which may help to avoid the user having a stress response to their correct (or incorrect) responses.
  • the system may increase the spacing of left and right visual stimuli irrespective of whether the user has correctly (or incorrectly) perceived target characteristics (such that the correct (or incorrect) perception of target characteristics is determined but not used to adjust the spacing of the left and right visual stimuli). Determination of a user’s perceptiveness to target characteristics, in and of itself, may still provide a useful output indicating a user’s awareness of visual stimuli in the peripheral field of vision.
  • Other parameters of the system could additionally (or alternatively) be changed in response to a user correctly (or incorrectly) perceiving the target characteristic, for example such as one or more of: the particular target characteristic (e.g. the target colour), the rate of occurrence of the target characteristic, and the rate that visual stimuli are provided.
  • the target characteristic may change to a more subtle characteristic (e.g. a more subtle colour difference, or intensity difference, or shape difference etc. compared to other visual stimuli provided), and/or the target characteristics may be provided more or less often, and/or the visual stimuli may be provided at a faster rate.
  • the system is configured to provide positive feedback to a user when it is determined that a user has correctly perceived a target characteristic.
  • the system could also (or instead) provide negative feedback when it is determined that a user has incorrectly perceived a target characteristic (although in embodiments no negative feedback is provided to avoid causing a stress response from the user).
  • the positive (or negative) feedback could be given immediately, and preferably each time, a user correctly (or incorrectly) perceives a target characteristic.
  • the positive (or negative) feedback could be given based on the proportion of correct (or incorrect) user responses (e.g. based on a determined success rate of the user).
  • the positive (or negative) feedback could comprise any suitable and desired feedback, such as a visual, audible, or other sensory stimulus.
  • positive feedback could comprise a sequence of visual stimuli forming a ‘success’ sequence, e.g. a single wave of stimuli progressing from the forwards-most to the backwards-most visual stimuli of the head mounted device.
  • the system is configured to (and the method comprises) keeping a record of the user’s perception of visual stimuli, preferably by recording one or more of: a number or proportion of correctly perceived stimuli; a number or proportion of incorrectly perceived stimuli; and an average time which the user took to respond to stimuli.
  • the record of the user’s perception is provided to the user as a training report, once a training session is complete.
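The record of the user's perception could be summarised into a post-session training report along these lines (the field names are assumptions for the sketch):

```python
def training_report(responses):
    """Summarise the record kept during a session: numbers and
    proportion of correctly/incorrectly perceived stimuli, and the
    average time the user took to respond. `responses` is a list of
    (correct: bool, response_time_seconds: float) tuples."""
    total = len(responses)
    correct = sum(1 for ok, _ in responses if ok)
    avg_time = sum(t for _, t in responses) / total if total else 0.0
    return {
        "correct": correct,
        "incorrect": total - correct,
        "proportion_correct": correct / total if total else 0.0,
        "average_response_time": avg_time,
    }
```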
  • the user response could comprise a response provided consciously (actively) by the user (e.g. by the user interacting with a suitable input means when the user perceives, or believes they have perceived, the target characteristic).
  • the user response could be provided subconsciously (passively) (e.g. by a user input means sensing a state of a user).
  • a user response could comprise, for example, a user pressing a button or other touch sensitive input device (e.g. touching a button on a screen of a mobile phone), making a movement (e.g. gesture), making a sound (e.g. verbal input), actively thinking a measurable thought, or performing any other action measurable by a user input means.
  • the system may accordingly comprise a suitable input means for receiving a user response, for example comprising any one or more of: a button or other touch sensitive input, a movement sensor (e.g. motion detector or accelerometer), a sound sensor (microphone), an electromyography (EMG) sensor (a sensor responsive to muscular motion), an electroencephalography (EEG) sensor (a sensor responsive to brain wave activity), or other desired sensor.
  • the user input means could be provided as part of the head-mounted device, or by a handheld device (e.g. such as a controller or joystick), or by a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like).
  • the input means (which receives the user input) is configured to be operated without the user shifting their gaze.
  • the input means is a relatively large button displayed on the screen of a portable electronic device (e.g. within an app on a mobile phone or tablet), the button having an area of at least 1 cm², preferably at least 2 cm², preferably at least 3 cm², and/or occupying at least 10%, preferably at least 20%, preferably at least 30% of the area of a screen of the portable electronic device.
  • the system may (also) be configured to receive responses and accordingly comprise a user input means (e.g. such as those described above) for other purposes, for example for configuring one or more parameters in advance of or during a training session.
  • the system is (additionally or alternatively) configured to (and the method comprises) alter the spacing between left and right visual stimuli provided simultaneously based on a level of relaxation of the user.
  • a position or range of positions at which the left and right visual stimuli are provided is controlled based on a level of relaxation of a user.
  • the level of relaxation of the user may be an indicated level of relaxation (e.g. based on a user self-reporting a level of relaxation), or may be a detected level of relaxation (e.g. being sensed by a sensor).
  • the system is configured to receive (and the method comprises receiving) a self-reported level of relaxation provided actively (consciously) by a user (e.g. via the user interacting with a suitable user input device, such as any of the input devices discussed above).
  • the system is (additionally or alternatively) configured to receive (and the method comprises receiving) a sensor output sensing a physical state of the user, the sensor output indicative of a level of relaxation of a user.
  • the sensor may be configured to sense, and to provide an output indicative of one or more of a user’s: motion, breathing, heart rate, blood pressure, brain wave activity, or other physical property.
• the system is configured to determine a level of user relaxation from the sensor output. For example, one or more of more agitated movements, shorter breaths, higher blood pressure, certain patterns of brain wave activity, or other sensor inputs, are preferably used to indicate (are preferably correlated to) a lower level of relaxation (the user being less relaxed). Conversely, one or more of slower user movements, longer breaths, lower blood pressure, certain patterns of brain wave activity, or other sensor inputs are preferably used to indicate (are preferably correlated to) a higher level of relaxation (the user being relatively more relaxed).
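The sensor-to-relaxation mapping described in the bullet above could be sketched roughly as follows. This is an illustrative sketch only, not part of the disclosure: the function names, sensor ranges, and weights are all assumptions.

```python
def relaxation_level(breath_rate_bpm, heart_rate_bpm, movement_index):
    """Combine sensor readings into a single 0.0-1.0 relaxation score.

    Longer breaths (lower breath rate), a lower heart rate, and less
    movement all push the score towards 1.0 (more relaxed), mirroring
    the correlations described above. Ranges and weights are illustrative.
    """
    def score(value, relaxed_at, agitated_at):
        # Linearly map a reading onto 0..1 (1 = fully relaxed), clamped.
        span = agitated_at - relaxed_at
        return min(1.0, max(0.0, (agitated_at - value) / span))

    breath = score(breath_rate_bpm, relaxed_at=6.0, agitated_at=20.0)
    heart = score(heart_rate_bpm, relaxed_at=55.0, agitated_at=100.0)
    stillness = score(movement_index, relaxed_at=0.0, agitated_at=1.0)
    return 0.4 * breath + 0.4 * heart + 0.2 * stillness
```

A real implementation would calibrate the ranges per user, and could also fold in EEG-derived features as further inputs.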
  • the spacing between (angular position of) left and right visual stimuli is preferably increased when it is determined that the user has a higher level of relaxation (and conversely the spacing between left and right visual stimuli is preferably decreased when it is determined that the user has a lower level of relaxation).
• Increasing and/or decreasing the spacing of the left and right visual stimuli in response to the user’s level of relaxation may be done, for example, in any of the ways described above (e.g. by changing a range of one or more positions forming a cycle of visual stimuli). Accordingly, the range of one or more positions at which right and left stimuli are provided (e.g. for a cycle of visual stimuli) is preferably selected based on the level of relaxation of the user.
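The relaxation-driven selection of an angular range for a cycle of stimuli could be sketched as below. The 60–110 degree monocular limits follow the angular ranges given elsewhere in this document; the 75 degree starting point and the linear mapping are assumptions, and the names are hypothetical.

```python
MONOCULAR_MIN_DEG = 60.0   # inner edge of the monocular region (per this document)
MONOCULAR_MAX_DEG = 110.0  # outer edge of the monocular region (per this document)

def cycle_angle_range(relaxation):
    """Select the range of angular positions for the next cycle of visual
    stimuli: a more relaxed user gets stimuli pushed wider (larger angles).

    `relaxation` is a 0.0-1.0 score; the linear mapping is illustrative.
    """
    relaxation = min(1.0, max(0.0, relaxation))
    # Outer limit grows from 75 deg (tense) to 110 deg (fully relaxed).
    outer = 75.0 + relaxation * (MONOCULAR_MAX_DEG - 75.0)
    return (MONOCULAR_MIN_DEG, outer)
```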
  • the visual stimuli are provided (activated) in synchronisation with a rhythmic beat of a soundtrack.
• the position of successive visual stimuli provided changes in synchronisation with the beat of the soundtrack.
• the position of successive visual stimuli could change exactly on the beat of the soundtrack, or the rate of change of position of visual stimuli could be correlated to the speed of the beat.
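A beat-synchronised schedule of stimulus positions, as described above, might be computed along these lines (an illustrative sketch; the function and parameter names are assumptions):

```python
def beat_schedule(bpm, session_seconds, positions):
    """Return (time_s, position) pairs for a training session in which the
    stimulus advances one position per beat, cycling through `positions`."""
    beat_period = 60.0 / bpm
    schedule = []
    t, i = 0.0, 0
    while t < session_seconds:
        schedule.append((t, positions[i % len(positions)]))
        t += beat_period
        i += 1
    return schedule
```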
  • one or more qualities of the visual stimuli could be configured to change in synchronisation with a rhythmic beat of a soundtrack.
  • the Applicant has found that providing the visual stimuli in synchronisation with a rhythmic beat of a soundtrack has a synergistic effect of improving relaxation and allowing the user to become aware of visual stimuli provided wider within their peripheral vision.
  • the soundtrack provided could be a musical and/or verbal soundtrack.
• a verbal soundtrack provided simultaneously with the visual stimuli may comprise instructions guiding a user through the training session (e.g. through a training program), e.g. comprising any of: informing a user of a target characteristic(s) to be identified, encouraging a user to breathe, providing guided meditation, or any other suitable and desired instructions.
  • the system is configured to play a soundtrack.
  • the system is configured to play the soundtrack by controlling a speaker integrated into the head mounted device or a speaker external to the head mounted device (e.g. the speakers of a mobile phone) via a suitable wired or wireless communication (e.g. Bluetooth).
  • the system is preferably configured to control the over-head headphones to play the soundtrack.
  • the user is permitted to select (the system is configured to receive a user selection for) the soundtrack which is to be played, e.g. from a library of plural different soundtracks, e.g. stored on the head-mounted device, or a portable electronic device coupled thereto, or a cloud-based music service.
  • the position at which visual stimuli are provided may vary over time, or in response to a user input (e.g. a level of user relaxation or a user’s perceptiveness to visual stimuli).
• Various triggers could be used to end a training session, e.g. a predetermined training session duration having elapsed (e.g. a soundtrack finishing), receiving a user input indicating that a user wishes to end the training session, determining that a user has reached a particular level of relaxation, reaching a cycle of visual stimuli which is a final cycle (e.g. being the cycle with the largest spacing of visual stimuli among a predetermined set of cycles of visual stimuli), or other suitable and desired triggers for ending a training session.
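The end-of-session triggers listed above amount to a simple disjunction, which could be checked once per update loop, roughly as follows (illustrative only; the parameter names are assumptions):

```python
def session_should_end(elapsed_s, session_length_s, user_requested_stop,
                       relaxation, target_relaxation,
                       cycle_index, final_cycle_index):
    """True once any end-of-session trigger has fired: session time elapsed,
    user stop request, target relaxation reached, or final cycle completed."""
    return (elapsed_s >= session_length_s
            or user_requested_stop
            or relaxation >= target_relaxation
            or cycle_index >= final_cycle_index)
```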
  • one or more (preferably a majority of, preferably all) of the visual stimuli are provided within the left and right monocular regions of the user’s vision, and preferably one or more (preferably a majority of, preferably all) of the visual stimuli are provided simultaneously to the left and right of the centre of the user’s vision preferably at a same angular and/or vertical position as one another.
• visual stimuli forming only the ‘training’ sequence of visual stimuli are provided.
  • other visual stimuli which are not part of the ‘training’ sequence of visual stimuli could be provided (e.g. for the purposes of conveying information to a user), however such visual stimuli which are not part of the ‘training’ sequence are preferably provided in a manner which does not distract from the ‘training’ sequence.
  • visual stimuli could be provided (the head mounted device could be configured to provide visual stimuli) in a different manner to that described herein for the ‘training sequence’ (e.g. for the purposes of conveying information to a user), and in embodiments this is done.
  • visual stimuli could be provided (the head mounted device may be configured to provide visual stimuli) which are one or more of: provided to the left and right individually (not simultaneously); provided to the binocular region of a user’s vision; provided simultaneously to the left and right at different angular positions; provided simultaneously to the left and right at different heights relative to the user’s eyes, etc.
  • the system described herein, including the head mounted device may operate under the control of any suitable and desired controller or controllers, for example comprising one or more processors.
  • the one or more processors may comprise a microprocessor, a programmable FPGA (field programmable gate array), etc..
  • a controller may be integrated into the head mounted device, e.g. for controlling the activation of visual elements to provide visual stimuli.
  • a controller (processor) integrated into the head mounted device may operate to perform the methods of the present invention independently (such that the head mounted device is configured to operate as an isolated system, without any external control).
  • the head mounted device may be configured to communicate with one or more other (external) devices having processors thereon for the purposes of implementing the methods described herein and controlling the head mounted device.
  • the external device (which in embodiments forms part of the present system) may comprise, e.g. a portable electronic device (e.g. mobile phone or tablet), laptop, desktop computer, cloud computing service, or other device.
  • the head mounted device is configured to communicate with (and the system comprises) a portable electronic device (e.g. mobile phone or tablet) for implementing the methods described herein.
  • the methods in accordance with the present disclosure may be implemented at least partially using software e.g. computer programs. It will thus be seen that the present disclosure herein may provide computer software code for performing the methods described herein when run on one or more data processors.
  • the computer program may be executed by a processor integrated within the head mounted device.
• alternatively, the computer program may be executed by one or more external devices (e.g. a mobile phone), e.g. as a computer program such as an application (e.g. a mobile phone app).
  • the present disclosure may suitably be embodied as a computer program product for use with the present system.
• the computer program product may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example a diskette, CD-ROM, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques.
  • the present system preferably comprises one or more input means for receiving user inputs.
  • the user input means could be provided as part of (integrated into) the head-mounted device.
  • the input means could be an external device, such as a handheld device (e.g. such as a controller or joystick), or a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like), or a sensor device, or other external device.
  • the input means may be configured to receive user inputs provided actively (consciously) or passively (subconsciously) by the user.
  • the system may be configured to receive a user input comprising one or more of, for example a user: pressing a button or other touch sensitive input device (e.g. a button displayed on the screen of a portable electronic device), making a movement (e.g. gesture), making a sound (e.g. verbal input), actively thinking a measurable thought, or performing any other action measurable by a user input means.
• the user input means may comprise, for example, any one or more of: a button or other touch sensitive input, a movement sensor (e.g. motion detector or accelerometer), a sound sensor (microphone), an electromyography (EMG) sensor (a sensor responsive to muscular motion), an electroencephalography (EEG) sensor (a sensor responsive to brain wave activity), a breath sensor, or other desired sensor e.g. such as those described herein.
  • the controller(s) (processor(s)) of the present system are preferably configured to receive input data from the one or more input means, and to use the input data to implement the methods described herein.
  • the present system preferably also comprises one or more output means for providing an output to a user.
  • the output means could be provided as part of (integrated into) the head-mounted device.
• the output means could be an external device, such as a handheld device (e.g. a controller or joystick), or a portable electronic device (e.g. a mobile phone, tablet, laptop or the like), or other external device.
  • the output means could comprise one or more of: a visual element of the head mounted device, an external display (e.g. a display of a portable electronic device), a speaker, or any other suitable and desired output device.
  • the output means may provide an output to a user comprising one or more of: an auditory output, a haptic output, a visual output, or other suitable and desired output.
  • the controller(s) (processor(s)) of the present system are preferably configured to control the one or more output means, to provide an output to a user as indicated in the methods described herein.
  • the output means is controlled so as to provide instructions to a user for using the system of the present invention.
  • one or more auditory instructions are provided to a user when using the head mounted device.
  • the head mounted devices and the one or more external devices are preferably configured to share data via a suitable wired or wireless connection, e.g. such as Bluetooth, or WiFi.
  • the head mounted device is configured with wireless connection capability for connection to one or more external devices.
  • the system may comprise one or more memories for storing data for implementing the methods described herein, e.g. such as for storing computer software code, calibration data, user inputs, a record of user relaxation levels and/or user perceptiveness to visual stimuli provided during a training session, or other suitable and desired data.
  • the present system preferably comprises a suitable power source for powering the head mounted device.
• the power source may comprise a wired or wireless connection from the head mounted device to an external power supply, or preferably an integrated power source (e.g. battery).
  • FIG. 1 shows a head-mounted training device in accordance with embodiments of the present invention, the training device integrated into a set of over-head headphones, and comprising left and right arms which are shown rotated downwards in a training position, each arm comprising light elements for providing visual stimuli in a monocular region of a user’s vision.
• FIG. 2 shows a top view of the head-mounted training device of FIG. 1.
• FIG. 3 is a rear view of the head mounted device of FIG. 1, illustrating example relative positions of light elements on left and right visual displays during a training session.
• FIG. 6 shows an example screen display of a mobile app in embodiments of the present invention, during a training session.
• FIG. 7 is a schematic diagram of a system in accordance with embodiments of the invention.
• FIG. 8 shows an alternative embodiment of the head mounted training device in accordance with the present invention, the training device being mounted on the brim of a cap.
• FIG. 9 shows an alternative embodiment of the head mounted training device in accordance with the present invention, the training device comprising a pair of arms mountable to the arms of a pair of glasses.
• FIG. 10 shows an alternative embodiment of the head mounted training device in accordance with the present invention in a training position, the training device being attached to a pair of headphones.
• FIG. 11 is a flowchart showing an embodiment for controlling the positions of visual stimuli based on a user’s perceptiveness to visual stimuli.
  • the technology disclosed herein relates to methods and systems for training peripheral vision, particularly by providing visual stimuli simultaneously to the left and right monocular regions of a user’s (trainee’s) vision.
  • Figures 1 to 3 show various views of a head-mounted training device 100 for providing visual stimuli, in accordance with embodiments of the present invention.
  • the training device 100 shown in Figures 1 to 3 is shown integrated into a headset in the form of over-head headphones 105.
• the training device could however be incorporated into or mountable to the brim of a cap, and in embodiments this is done, as shown for example in FIG. 8, which shows a training device 800 having elongate members 101, 102 mounted to a brim 801 of a cap.
  • the training device could also or instead be incorporated into or mountable to a pair of glasses for example as shown in .
  • Another configuration using headphones is also shown in .
  • Like features among these various embodiments are indicated with like reference numerals.
  • the training device 100 comprises a left elongate member 102 in the form of an arm which extends along the left-hand side of a user’s head, and a right elongate member 101 in the form of an arm which extends along the right-hand side of a user’s head.
  • the left L’, right R’, forwards F’ and backwards B’ horizontal directions are shown in , along with the upwards U’ and downwards D’ vertical directions.
  • directions are preferably defined with respect to the orientation of the user’s head, such that the left L’, right R’, forwards F’, and backwards B’ horizontal directions (and accordingly the horizontal plane), and likewise the upwards U’ and downwards D’ vertical directions, move as the user’s head moves.
  • the horizontal and vertical directions correspond to a world orientation when a user’s head is in its usual upright position, but deviate from the world orientation if a user tilts their head.
  • the elongate members 101, 102 are attached (or attachable) to an item of headwear, e.g. headphones, by an attachment means 110 at one end of each elongate member, with the other end of the elongate member being free so that it is cantilevered.
  • an attachment means 110 at one end of each elongate member, with the other end of the elongate member being free so that it is cantilevered.
  • This is similarly the case in the embodiment shown in .
  • This may be similarly the case for elongate members attached (or attachable to) other items of headwear, e.g. a cap, such as shown in .
  • Each of the elongate members 101, 102 respectively comprises a visual display unit 103, 104 towards the distal (forwards) end of the elongate member which is operable to provide visual stimuli to the user.
  • the visual display unit comprises a plurality of discrete visual elements 106, 107 which can each be activated to provide a visual stimulus.
  • the visual elements comprise light elements in the form of an array of colour LED lights.
• the colour LED lights 106, 107 are preferably configured to provide visual stimuli with differing colour. Other qualities of the visual stimuli could also be variable, e.g. the intensity (brightness) of visual stimuli. In embodiments where LED lights are grouped together, different patterns or shapes of LED lights could be illuminated simultaneously to provide different qualities of visual stimuli.
• Other visual elements could be used instead of LED lights, such as, for example, a continuous visual element on each elongate member 101, 102, e.g. an LCD or plasma screen or a light projection on each elongate member.
  • Such visual elements may similarly provide visual stimuli with variable qualities, e.g. such as colour, intensity, texture, size, shape, or localised motion.
  • the elongate members 101, 102 have a length L which is larger than their width W.
  • the visual display units 103, 104 are also elongate, having a length L' which is larger than their width W’.
  • Figures 1 to 4 show a head mounted device in a training position.
  • the elongate members 101, 102 (and likewise the visual display units 103, 104 and visual elements 106, 107 thereon) extend substantially horizontally and substantially at the height (vertical position) of the user’s eyes. This is illustrated in Figures 1 and 2 for example, and also at least in .
  • visual display units 103, 104 (and accordingly the visual elements, LED lights 106, 107) of the elongate members 101, 102 are provided only in the right monocular region 301 and left monocular region 302 of the user’s vision.
  • the right and left elongate members 101, 102 (and likewise the right and left visual elements 106, 107) do not extend into the binocular region of the user’s vision 303. This is shown, for example, in the top view of .
  • the visual elements could extend into the binocular region.
  • the head-mounted training device is controlled so as to activate visual elements only in the monocular region during a training session.
• Visual elements falling within the monocular region are preferably identified in a calibration routine, or based on a user identifying which visual elements can be seen by a single eye only.
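One way such a calibration routine could classify visual elements is from two per-eye reports. This is an illustrative sketch only, not part of the disclosure; the set-based API is an assumption:

```python
def monocular_elements(seen_by_left, seen_by_right):
    """Identify which visual elements fall in the monocular regions.

    `seen_by_left` / `seen_by_right` are sets of element IDs the user
    reported seeing with only that eye open during calibration.
    Elements seen by exactly one eye are monocular; the rest are
    binocular (or not visible) and are excluded from training.
    """
    left_only = seen_by_left - seen_by_right
    right_only = seen_by_right - seen_by_left
    return left_only, right_only
```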
  • the right monocular region 301 is the region of a user’s vision which is visible only to the user’s right eye
  • the left monocular region 302 is the region visible only to the user’s left eye (as compared to the binocular region 303 of the user’s vision which is visible to both right and left eyes).
• the right monocular region 301 for a human typically includes positions at angles from about 60 to about 110 degrees to the right of the centre of the user’s vision 304
• the left monocular region 302 for a human likewise typically includes positions at angles from about 60 to about 110 degrees to the left of the centre of the user’s vision 304.
• the centre of the user’s vision in this regard can be taken to be the direction directly forwards from the bridge 305 of the user’s nose, and these angles can be measured from the bridge of the user’s nose in the right and left directions respectively along a horizontal plane (i.e. being the angle along the horizontal meridian 306).
  • the visual elements on the right and left elongate members 101, 102 are present within a range of angular positions from about 60 to about 110 degrees in the right and left monocular regions. More preferably, the visual elements span a segment along the horizontal meridian of at least 30 degrees (thus preferably, the visual elements span angular positions between 60 and at least 90 degrees to the left and right of the centre of a user’s vision).
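For a row of evenly spaced LEDs spanning the monocular segment described above, the angular position of each element could be derived as follows (illustrative; even spacing and the function name are assumptions):

```python
def led_angles(n_leds, inner_deg=60.0, outer_deg=110.0):
    """Angular position of each LED in one row, measured along the
    horizontal meridian from the centre of the user's vision."""
    return [inner_deg + (outer_deg - inner_deg) * i / (n_leds - 1)
            for i in range(n_leds)]
```

With ten LEDs per member (cf. positions A to J), the row spans the full 60 to 110 degree segment, comfortably exceeding the preferred minimum span of 30 degrees.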
  • the head mounted device is adjustable so as to position the visual display units 103, 104 of the elongate members 101, 102 (and accordingly the visual elements e.g. LED lights 106, 107) in the right and left unshared monocular regions of the user’s vision only.
  • the elongate members 101, 102 may be extendible and retractable along their length (so as to be extendible and retractable forwards and backwards in the horizontal direction when in the training position).
  • Other mechanisms could instead be used if desired.
  • the elongate members could be bendable, for example as shown in , in which the elongate members 101, 102, have a bendable section 1001 between their attachment means 110 and visual display unit 103, 104.
  • the head mounted device may also be adjustable to fit a user’s head, e.g. having an adjustable main body, e.g. a head band 105 with telescoping mechanism 1002.
• the headphones may comprise on-ear speakers 201, 202 as shown in for example, or over-ear speakers 1004, 1005 as shown in for example, or alternatively speakers that use bone conduction technology, or other suitable and desired speaker technology.
  • the position of the visual display units 103, 104 and visual elements 106, 107 of the head mounted device could also (or instead) be adjusted by changing a mounting position of the elongate members 101, 102.
  • This may be particularly suitable for a head mounted device that is mountable to a pair of glasses such as shown in .
  • the elongate members 101, 102 are mountable to respective right and left arms 901, 902 of a pair of glasses, and can be moved forwards and backwards relative to the arms of the pair of glasses.
  • the elongate members could be extendible and retractable, and/or bendable, and/or mountable at different positions when provided with any suitable and desired item of headwear, such as headphones, a cap, etc.
  • the elongate members 101, 102 are movable between a training position for performing training, and a stowed position when training is no longer desired to be performed.
• in the stowed position, the elongate members and/or visual elements are not readily visible to the user (e.g. are positioned outside of the user’s field of vision).
  • the elongate members 101, 102 are movable (e.g. rotatable) upwards into the stowed position, and downwards into the training position.
• this movement may be provided by a rotatable joint 110, e.g. at a proximal (rearwards) end of each elongate member, e.g. connecting the elongate member to the headwear (e.g. headphones), as shown for example in Figures 1 to 3.
• a male connector (e.g. jack) 1002 and female receiver (e.g. socket) 1003 form the attachment means 110, and allow rotation of the elongate members 101, 102 when attached.
  • Other mechanisms could instead be provided.
  • Figures 1 to 3 show a head mounted device in the form of over-head headphones
  • the elongate members could equally be movable into a stowed position when mounted on or incorporated into other items of headwear, such as a cap.
  • the elongate members 101, 102 when in the training position, are configured to electrically connect with a controller (processor(s)) for controlling activation of the visual stimuli (LEDs) and/or to a power source for providing power for activating the visual stimuli.
  • the elongate members are electrically disconnected when in the stowed position.
  • the elongate members 101, 102 could be electrically connected with a controller (processor(s)) and/or power source regardless of their position.
  • the controller (processor(s)) and/or power source could be integrated within the elongate members.
• a single member (e.g. a single elongate member, or e.g. a VR headset comprising a single continuous screen, or other suitable and desired display) could instead be provided with one or more visual elements activatable (controlled so as to activate) at positions within the left and right monocular regions simultaneously.
• the visual elements (LED lights) 106, 107 are activatable at a plurality of angular positions, as illustrated for example in Figures 4 and 5.
• an array of discrete visual elements in the form of LED lights 106, 107 is provided at a plurality of angular positions to the left and right of the centre of the user’s vision. This is achieved in embodiments by using one or more rows of LED lights, each row extending substantially in the horizontal plane close to the vertical level of the user’s eyes. In the embodiment shown in Figures 1-5, and 10, one row of LED lights is provided. Alternatively, two rows of LED lights could be provided (so as to form a ten-by-two array of coloured LEDs on each elongate member), as shown in .
  • Positions of visual elements in an example embodiment are shown in and labelled A to J, with a selection of the angular positions of the visual elements relative to the centre of the user’s vision shown in .
  • visual elements which are at a larger angular position are further backwards and preferably further apart in the left L’ and right R’ directions.
  • each visual element in the row has approximately the same vertical position
• other patterns of discrete visual elements (e.g. LED lights) could be provided if desired; for example, a group of visual elements could be provided at each angular position (e.g. A to J).
• visual stimuli could be provided by activating a continuous visual element, e.g. an LCD screen, at different angular positions.
• the visual stimuli (e.g. discrete visual elements or activated positions of a continuous visual element) could differ in height among the angular positions.
  • the positions of visual elements (LED lights) 106, 107 are preferably mirror images of one another relative to the centre of a user’s vision. This allows the system described herein to activate visual elements at a same angular position in the left and right monocular regions of a user’s vision simultaneously.
  • the head mounted device when performing a training session, is controlled so as to provide a sequence of visual stimuli (a ‘training sequence’) at plural angular positions in turn.
  • left and right visual stimuli are provided simultaneously at the same angular position as one another.
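Because the left and right element positions mirror one another, activating the same index on both displays yields simultaneous stimuli at one shared angular position. A minimal sketch follows; the `Display` class is a hypothetical stand-in for one elongate member's LED row, not an API from the disclosure:

```python
class Display:
    """Toy stand-in for one elongate member's row of LEDs."""
    def __init__(self, n_leds):
        self.leds = [None] * n_leds   # None = off, else a colour string

    def set_led(self, index, colour):
        self.leds[index] = colour

    def clear(self):
        self.leds = [None] * len(self.leds)

def activate_pair(left, right, index, colour):
    """Light the same mirrored index on both displays at once, so the left
    and right stimuli share one angular position relative to the centre
    of the user's vision."""
    left.clear()
    right.clear()
    left.set_led(index, colour)
    right.set_led(index, colour)
```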
  • the system of the present invention may comprise any suitable and desired arrangement for controlling the head mounted device to activate the visual elements to provide a ‘training sequence’.
  • the system may also comprise one or more input devices, e.g. based on which the sequence of visual stimuli can be controlled.
• a control system 715 integrated within the head mounted device comprises an on-board controller 701 (e.g. housed within one or both of the elongate members 101, 102), which is configured to control an output module 702, which in turn controls the left display unit 104 and right display unit 103 of the elongate members so as to provide visual stimuli.
  • the on-board controller 701 may be considered as a central control unit, and may run updateable firmware for controlling the outputs of the left and right display units 103, 104.
  • the output module 702 may also control other output devices which are integrated within the head mounted device such as vibrational motors 709, and earphone speakers 710, and any other suitable and desired output devices. Such output devices may be used for providing useful outputs to the user, such as audio instructions, tactile feedback, an accompanying soundtrack for the training session or any other suitable and desired outputs.
  • the input module 713 may receive input data from one or more input devices indicative of a user’s level of relaxation, e.g. such as a brainwave (EEG) sensor, heart rate sensor, blood pressure sensor or other sensor.
  • the controller 701 may receive input data from input devices via any suitable and desired wired or wireless connection.
  • the control system 715 is configured to draw power from a power supply integrated within the head mounted device.
  • the power supply may be any suitable and desired power source, e.g., a rechargeable battery 704 chargeable via a USB charging port 708.
  • the controller 701 integrated within the head mounted device controls the activation of the visual elements and controls the output devices associated with the head mounted device, based on instructions received from an external controller (controller app 712) executing a computer program (e.g. application or “app”).
  • the external controller is provided as part of an external device, e.g. a portable electronic device (mobile device 716)
• the controller 712 (e.g. processor(s) running a software application on the external device, mobile device 716) is configured to determine the training sequence of visual stimuli which are to be provided to the user, and to transmit instructions (via a transmission/reception module 711) to the head mounted device (e.g. to a transmission/reception module 705 of the head mounted device) accordingly.
  • the controller 712 may also provide instructions for controlling the provision of an accompanying audio soundtrack and/or instructions via speakers 710, and any other desired e.g. tactile, audio or visual feedback based on the trainee’s responses.
  • a specialist software app running on a mobile device 716 controls the training session.
  • this could include any kind of remote control device.
  • the transmission of instructions from the external device (mobile device 716) to the head-mounted device may be done using any suitable and desired technology, e.g. such as wireless (e.g. Bluetooth, Wifi, etc) or wired communication.
  • the headset is preferably controlled via Bluetooth or other wireless technology that connects with the receiver module 705.
  • One or more inputs used for determining the sequence of visual stimuli may be received by the controller 712 of the external device 716.
  • Input data may be transmitted directly to the external device 716 (without being first received by the head mounted control system 715).
  • the external device may also comprise a touch screen 717 or other input or output device(s) for allowing the user to interact with the external device (e.g. such as a keyboard, button, gesture or movement sensor, camera, microphone or other suitable and desired input device).
  • an external device comprising a mobile device 716 (e.g. mobile phone or tablet).
  • the external device could also or instead be any other suitable and desired device, e.g. a laptop, smart watch, wearable electronic device, desktop computer, cloud or internet-based computing service, or other suitable and desired external device.
  • the head mounted device itself may have an integrated controller (processor) which is configured to determine the training sequence of visual stimuli to be provided to the user, such that the head mounted device can be operated in isolation (without requiring an external controller).
  • the head mounted device is controlled (e.g. by way of external controller 712 and on-board controller 701) so as to provide a sequence of visual stimuli (a ‘training sequence’).
  • left and right visual stimuli are provided at various angular positions in turn, the left and right visual stimuli being provided simultaneously at the same angular position as one another.
  • One or more qualities (e.g. colour) of the visual stimuli provided may vary (e.g. at an angular position and/or among the different angular positions).
  • the training sequence may comprise activating right and left visual elements (LED lights) simultaneously at any of the positions A to J.
  • a spacing between left and right visual stimuli increases over time and/or based on a user response.
  • visual stimuli are provided further apart (wider in the peripheral field) as a user becomes more relaxed and/or aware of their peripheral vision.
  • this progression from visual stimuli which are relatively close together to relatively further apart can be embodied in any and all examples described herein.
  • the spacing between the left and right visual stimuli is preferably measured along the horizontal meridian, and so corresponds to the sum of the angular positions of the left and right visual stimuli.
  • a larger spacing accordingly corresponds to visual stimuli provided at a larger angle from the centre of the user’s vision (and thus further backwards B’, and further to the left L’ and right R’).
  • the spacing between left and right visual stimuli in the training sequence could increase over time in a predetermined manner (and not depend on any user input during the training session). Alternatively, the spacing between left and right visual stimuli could increase depending on a user input.
  • An example predetermined training sequence in which the spacing between visual stimuli increases could be, for example: A, B, C, D, E, F, G, H, I, J (Example 1)
  • each of A to J indicates the left and right visual stimuli (LEDs) at that respective position being illuminated simultaneously; positions separated by a “,” indicate visual stimuli being shown in turn, in consecutive periods of time.
  • Example 1 shows a possible sequence of visual stimuli of increasing spacing forming a single cycle of visual stimuli comprising positions in the range A to J.
  • Example 1 shows the visual stimuli being provided at each and every position in the range A to J, some positions could be skipped if desired.
  • another predetermined training sequence could be: A, B, C, E, G, J (Example 1A)
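The predetermined sequences of Examples 1 and 1A can be sketched as follows. This is an illustrative sketch, not taken from the patent itself; the position labels A to J and the `skip` parameter are assumptions:

```python
# Illustrative sketch: predetermined training sequences of increasing spacing.
# Position labels 'A' (smallest spacing) to 'J' (largest spacing) are assumed.
POSITIONS = [chr(c) for c in range(ord("A"), ord("J") + 1)]

def predetermined_sequence(skip=()):
    """Return positions in order of increasing spacing, optionally skipping some."""
    skip = set(skip)
    return [p for p in POSITIONS if p not in skip]

# Example 1: left and right stimuli shown at each position A to J in turn.
example_1 = predetermined_sequence()
# Example 1A: some positions skipped.
example_1a = predetermined_sequence(skip={"D", "F", "H", "I"})
```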
  • left and right visual stimuli provided at a position are provided for a period of time which is long enough for the user to be able to discern the visual stimuli.
  • one or more qualities (e.g. colour) of the visual stimuli may vary.
  • for example, at position A the colour of the left and right stimuli could progress through one or more colours such as blue, green, purple, etc. in turn before progressing to position B.
  • Visual stimuli having a variety of colours could likewise be provided at other positions, such as B, C, D, etc.
  • the left and right visual stimuli could have the same or mis-matched colours.
  • the particular colours provided could be selected by the system on a randomised basis, such that a user cannot predict which colour(s) will be shown.
  • an increase in spacing can be performed more gradually by performing plural cycles of providing visual stimuli, wherein in each cycle visual stimuli are provided at positions within a range of one or more positions.
  • a sequence in embodiments of the present invention using plural cycles of visual stimuli is for example: Cycle 1 (A, A); Cycle 2 (A, B, A, B); Cycle 3 (A, B, C, A, B, C); Cycle 4 (A, B, C, D, A, B, C, D) (Example 2)
  • visual stimuli are provided at positions within the range of positions consisting of position A.
  • the range of positions is A and B.
  • the range of positions is A and B and C.
  • the range of positions is A and B and C and D.
  • the range could additionally include positions such as E or F or G etc.
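A minimal sketch of how the expanding cycles of Example 2 might be generated. This is illustrative only; the number of repeats per cycle is an assumption, not a value given in the disclosure:

```python
# Illustrative sketch of Example 2: each cycle repeats positions from the
# closest spacing up to a furthest spacing that increases cycle by cycle.
def expanding_cycles(positions, n_cycles, repeats=2):
    """Cycle k covers positions[0..k], repeated `repeats` times (assumed)."""
    return [positions[: k + 1] * repeats for k in range(n_cycles)]

cycles = expanding_cycles(list("ABCDEFGHIJ"), n_cycles=4)
# Cycle 1: A, A   Cycle 2: A, B, A, B   ...   Cycle 4: A, B, C, D, A, B, C, D
```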
  • in Example 2, the range of positions differs in each cycle; in particular, a furthest spacing between positions of right and left visual stimuli is increased in each cycle, whilst the closest spacing in the cycle remains the same.
  • the furthest spacing corresponds to right and left visual stimuli being provided at position A in cycle 1, at position B in cycle 2, at position C in cycle 3, and at position D in cycle 4.
  • the closest spacing, which is the same for each cycle, is at position A.
  • the closest (smallest) and/or furthest (largest) spacing between right and left visual stimuli can be altered in each cycle.
  • another training sequence in embodiments of the present invention could be: A, B, C, A, B, C, A, B, C... (Cycle 1); B, C, D, B, C, D, B, C, D... (Cycle 2); C, D, E, C, D, E, C, D, E... (Cycle 3) etc. (Example 3)
  • in Example 3, in each successive cycle both the closest and furthest spacings between right and left visual stimuli are altered.
  • the closest spacing corresponds to position A in the 1st cycle, position B in the 2nd cycle, and position C in the 3rd cycle.
  • the furthest spacing corresponds to position C in the 1st cycle, position D in the 2nd cycle, and position E in the 3rd cycle.
  • the one or more positions forming the range of positions may overlap for successive cycles (e.g. as in Examples 2 and 3 above), such that one or more of the same positions appear in successive cycles.
  • the one or more positions forming the range of positions could be non-overlapping for successive cycles, for example a 1 st cycle could have a range of positions being A and B, a 2 nd cycle having C and D, a 3 rd cycle having E and F, etc.
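Both the overlapping ranges of Example 3 and the non-overlapping ranges just described can be expressed as a sliding window over the available positions. The sketch below is illustrative; the `window` and `stride` parameters are assumptions:

```python
# Illustrative sketch: ranges of positions per cycle as a sliding window.
def sliding_cycles(positions, window, stride, n_cycles):
    """Cycle i covers `window` positions starting at index i * stride.
    stride < window gives overlapping ranges (as in Example 3);
    stride == window gives non-overlapping ranges."""
    return [positions[i * stride : i * stride + window] for i in range(n_cycles)]

overlapping = sliding_cycles(list("ABCDEFGHIJ"), window=3, stride=1, n_cycles=3)
non_overlapping = sliding_cycles(list("ABCDEFGHIJ"), window=2, stride=2, n_cycles=3)
```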
  • the position(s) of (e.g. and spacing between) visual stimuli could be varied within the range of possible positions for a cycle (in addition to or as an alternative to changing the closest and/or furthest spacing for the cycle).
  • An example of where spacing between visual stimuli is also changed between cycles is: Cycle 1: A, B, C, A, B, C, A, B, C... ; Cycle 2: A, C, E, A, C, E, A, C, E... ; Cycle 3: B, D, G, B, D, G, B, D, G, B, D, G (Example 3A)
  • in Example 3A, the range of positions in cycle 1 is A to C, whereas in cycle 2 the range of positions is A to E (such that the furthest spacing between left and right visual stimuli in the range has increased to position E), and in cycle 3 the range of positions is B to G (such that the closest and furthest spacings between left and right visual stimuli in the range have increased to B and G respectively).
  • cycle 1 comprises stimuli at adjacent positions only.
  • in cycle 2, the spacing between visual stimuli is increased, such that the positions at which stimuli are provided are not adjacent within the available positions for the head mounted device, i.e. such that positions within the range for the cycle are missed out (in cycle 2, a single position, B or D, is missed out between visual stimuli).
  • in cycle 3, the spacing between the visual stimuli is further increased (with a single position C being ‘missed out’ between the visual stimuli at positions B and D, and with two positions, E and F, being missed out between the visual stimuli at positions D and G).
  • the relative position(s) of (e.g. and spacing between) visual stimuli could be varied within the range of possible positions for a cycle so that the average angular position increases in one or more successive cycles.
  • An example of this is: Cycle 1: A, B, E, A, B, E, A, B, E... ; Cycle 2: A, C, E, A, C, E, A, C, E... ; Cycle 3: A, D, F, A, D, F, A, D, F... (Example 3B)
  • a ‘wave’ of visual stimuli is provided which progresses from relatively smaller angular positions (relatively closer spacings) to relatively larger angular positions (relatively further spacings).
  • the ‘wave’ of visual stimuli progresses from a smallest angular position (smallest spacing) to a largest angular position (largest spacing) of the position(s) at which visual stimuli are provided in the range of position(s) for the cycle.
  • the ‘wave’ in Example 1 comprises the positions A through J in turn.
  • the ‘wave’ comprises positions A, B in turn in cycle 2, and positions A, B, C in turn in cycle 3.
  • the ‘wave’ comprises positions A, C, E, in turn in cycle 2, etc.
  • the ‘wave’ could be formed of visual stimuli at each possible angular position within the range for a cycle (e.g. at each of A to J for Example 1)
  • the wave may comprise a selection of the possible positions from the range (e.g. comprising A, B, H, I, J) such that some positions are skipped.
  • the range of positions forming a (each) cycle, and the time spent in a (each) cycle could be predetermined, such that the range of positions at which visual stimuli are provided changes over time without any user input, and in embodiments this is done.
  • the positions at which visual stimuli are provided could be selected based on a user input.
  • the user input could be a user selecting one or more parameters for a training sequence prior to commencing the training session, e.g. a user selecting a minimum and maximum position for the visual stimuli to be provided during the training session, and selecting a rate at which the spacing between visual stimuli is to increase during the training session.
  • the user input could be a user selecting a training program (e.g. a ‘relaxation’ program, e.g. ‘relaxation level 1’ or ‘relaxation level 2’, or an ‘energising’ program), the training program having one or more pre-configured training sequences with pre-configured parameters (e.g. such as the ranges of and spacing between visual stimuli in each cycle in the sequence).
  • the system may be configured to determine the sequence of stimuli to be provided (e.g. to determine the range of one or more positions forming each cycle, and the amount of time spent in each cycle).
  • the user input based on which the positions of the visual stimuli are selected for the training sequence, could be a user input during a training session, e.g. a user input indicative of a user’s level of relaxation and/or a user input indicative of a user’s perceptiveness to the visual stimuli during the training session.
  • a controller of the system may be configured to receive input data indicative of (and to determine) a user’s level of relaxation and/or a user’s perceptiveness to visual stimuli, and to adjust the positions at which visual stimuli are provided accordingly.
  • the position of visual stimuli provided can be controlled based on a user’s perceptiveness to visual stimuli provided during a training session.
  • a user’s perceptiveness is determined based on the user’s accuracy in identifying target characteristics of the visual stimuli.
  • FIG. 1 is a flow chart showing steps for adjusting the positions of visual stimuli during a training session based on a user’s perceptiveness to visual stimuli.
  • upon starting the training session, the system provides visual stimuli within an initial range of one or more positions, with a target characteristic intermittently shown.
  • the target characteristic could be any suitable and desired quality of the visual stimuli provided. In embodiments it is a target quality (e.g. a target colour, e.g. green) provided to the left, right or both monocular regions of a user’s vision. Alternatively, the target characteristic could be a matched or mis-matched quality (e.g. colour) between visual stimuli provided to the left and right monocular regions of the user’s vision.
  • the target characteristic is shown intermittently, so that the target quality (or matched, or mis-matched quality) occurs less often than other qualities (or mis-matched, or matched qualities).
  • the target characteristic is preferably shown at times which are randomised so that a user cannot predict when it will occur.
  • the quality (e.g. colour) of visual stimuli could be changed at regular intervals in time, but with the quality (e.g. colour) varied in a randomised manner (e.g. by selecting a weighted randomised colour).
  • the system may be configured to change the quality (e.g. colour) in unison and/or differently for the left and right sides. For a percentage of the time, the quality (e.g. colour) on both right and left may match, and for a percentage of the time the quality (e.g. colour) may differ on the right and left.
  • a sequence of visual stimuli provided to the left and right monocular regions could comprise, in turn:
  • A(green, green) indicates green stimuli being shown simultaneously to the left and right at position A, whereas for example A(purple, green) indicates purple to the left and green to the right at position A.
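The weighted randomised colour selection described above, where the left and right colours match for a percentage of the time and differ otherwise, could be sketched as follows. This is illustrative; the `p_match` probability is an assumed parameter, not a value from the disclosure:

```python
import random

# Illustrative sketch: pick colours for the simultaneous left and right stimuli,
# matched for a percentage of the time and deliberately mis-matched otherwise.
def next_pair(colours, p_match, rng=random):
    """Return (left, right) colours; matched with probability p_match (assumed)."""
    left = rng.choice(colours)
    if rng.random() < p_match:
        return left, left  # e.g. A(green, green)
    right = rng.choice([c for c in colours if c != left])
    return left, right     # e.g. A(purple, green)
```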
  • the system determines whether the user has correctly perceived the target characteristic.
  • the system is configured to receive a user response indicating that the user has perceived the target characteristic.
  • the user response comprises a user pressing a button on a screen of a mobile device when the user believes they have seen the target characteristic (e.g. such as the button 601 shown on the screen 600).
  • the button should be large enough that the user can press it without having to direct their gaze away from the forwards direction.
  • any other suitable and desired user response could be used, e.g. a user actively (consciously) or passively (subconsciously) interacting with any suitable and desired input means of the system, e.g. a button or microphone or gesture detector or other input means.
  • Determining whether the user has correctly perceived the target characteristic may comprise determining whether the user has provided a response whilst the target characteristic is being shown (or within a particular time window after the characteristic has started being shown).
  • it may be determined that the user has not correctly perceived the target characteristic if the user provides a response whilst the target characteristic is not being shown (e.g. before the target characteristic is shown, without a target characteristic being shown, or after the target characteristic has stopped being shown), or outside the above mentioned time window.
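The response-window check just described might be sketched as below. This is illustrative only; the length of the grace window after the target stops being shown is an assumption:

```python
# Illustrative sketch: a response counts as a correct identification only if it
# arrives while the target characteristic is shown, or within a short grace
# window after it stops being shown (grace length is an assumed parameter).
def response_correct(response_time, target_onset, target_offset, grace=1.0):
    if target_onset is None:
        return False  # response given with no target characteristic shown
    return target_onset <= response_time <= target_offset + grace
```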
  • One or more positions at which visual stimuli are provided may then be adjusted based on whether the user has correctly perceived the target characteristic.
  • the one or more positions could be adjusted immediately in response to a correct (or incorrect) identification of a single occurrence of a target characteristic.
  • the range of positions could be adjusted after a threshold number of correct (or incorrect) identifications, or based on the proportion of correctly (or incorrectly) identified target characteristics.
  • adjusting one or more positions at which to provide visual stimuli based on the user’s perceptiveness to visual stimuli comprises increasing the separation between visual stimuli when the user correctly identifies one or more occurrences of the target characteristic (and may conversely comprise decreasing the spacing between left and right visual stimuli when the user incorrectly identifies one or more occurrences of the target characteristic).
  • increasing the spacing between visual stimuli may comprise increasing the closest and/or furthest spacing between left and right positions within a cycle of visual stimuli and/or adjusting the relative positions of visual stimuli provided during a cycle (e.g. compared to an immediately preceding cycle).
  • decreasing the spacing between visual stimuli may comprise decreasing the closest and/or furthest spacing between left and right positions within a cycle of visual stimuli and/or adjusting the relative positions of visual stimuli provided during a cycle (e.g. compared to an immediately preceding cycle).
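The adjustment of spacing based on correct and incorrect identifications resembles a simple staircase procedure. The sketch below is illustrative; the `up_after` threshold (correct responses needed before widening) is an assumption rather than a value from the disclosure:

```python
# Illustrative staircase sketch: widen the spacing after consecutive correct
# identifications, narrow it after an incorrect one (thresholds assumed).
class Staircase:
    def __init__(self, n_positions, up_after=2):
        self.index = 0            # 0 = closest spacing (position A)
        self.correct = 0
        self.up_after = up_after
        self.max_index = n_positions - 1

    def record(self, was_correct):
        """Record one identification; return the (possibly adjusted) position index."""
        if was_correct:
            self.correct += 1
            if self.correct >= self.up_after:
                self.index = min(self.index + 1, self.max_index)
                self.correct = 0
        else:
            self.correct = 0
            self.index = max(self.index - 1, 0)
        return self.index
```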
  • correct (or incorrect) identifications by the user may be recorded (e.g. and displayed on a screen of a mobile device during the training session, or communicated to the user after the session is complete as a training report), without being used to control the spacing of visual stimuli during the training session.
  • the training session can be ended at any suitable and desired time.
  • the training session could end when a user indicates they wish to end the training session.
  • the training session could end after a predetermined amount of training time has elapsed, or a particular number of correct identifications of the target characteristic have been made, or a particular set of one or more positions for the visual stimuli is reached (e.g. a set of one or more positions for visual stimuli which includes a furthest apart spacing of visual stimuli permitted by the training device).
  • the progression of visual stimuli towards positions which are further apart is controlled based on the success of the trainee in correctly identifying specified characteristics of the visual stimuli.
  • Other features of the training sequence of visual stimuli could change in response to correct (or incorrect) identifications by the user, e.g. could change when the range of one or more positions changes.
  • the rate of change of the quality (e.g. colour) of visual stimuli, the rate of provision of visual stimuli, or the rate of occurrence of the target quality could also increase in response to correct (or incorrect) identifications by the user.
  • Qualities other than colour (e.g. pattern, texture, localised movement) could also change in response to correct (or incorrect) identifications by the user. For example, a degree of contrast between stimuli on the left and right could be changed; for instance, subtler shades of colour may be introduced in response to correct identifications.
  • the system may allow a user to select one or more parameters for the training session. For example the user may select which quality (or qualities) are to be the target characteristics during a training session (e.g. allowing a user to select one or more target colours).
  • the system may also be configured to receive a user selection as to the rate at which the target characteristics occur, and/or the rate at which the positions of the visual stimuli change.
  • the user may be able to control various parameters (e.g. which quality (or qualities) are to be the target characteristics, and/or a rate at which the target characteristics occur, and/or the rate at which the positions of the visual stimuli change) by selecting a training program from a plurality of pre-configured training programs (e.g. an ‘energising’ program or a ‘relaxing’ program).
  • the system may keep a record of the user’s perception of visual stimuli, which may be provided as a training report once a training session is complete.
  • a mobile device of the system may display an indication of the proportion of target characteristics correctly identified 602, and the average time it took the user to identify each target characteristic 603.
  • the speed and accuracy of identification of target characteristics can be measured and recorded by a mobile app 712.
  • a final (most outwards) position of the visual stimuli at the end of the training session may provide a metric to indicate the trainee’s success rate.
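The recorded metrics (proportion of target characteristics correctly identified, and the average time taken to identify each) could be computed along these lines. This is an illustrative sketch; the event tuple format is an assumption:

```python
# Illustrative sketch: computing the training-report metrics described above.
def training_report(events):
    """events: one (identified_correctly, response_seconds or None) per target."""
    n = len(events)
    hits = [t for ok, t in events if ok]
    times = [t for t in hits if t is not None]
    return {
        "proportion_correct": len(hits) / n if n else 0.0,
        "mean_response_s": sum(times) / len(times) if times else None,
    }
```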
  • whilst the visual quality (e.g. colour) can differ between the left and right visual stimuli provided simultaneously, the Applicant has found that left and right visual stimuli having the same qualities are more relaxing. Therefore, in embodiments, left and right visual stimuli provided simultaneously preferably have identical qualities for a majority of the training session.
  • the position at which the left and right visual stimuli are provided is changed in synchronisation with a rhythmic beat of a soundtrack, the soundtrack being provided e.g. by means of a suitable speaker, e.g. integrated into the head-mounted device.
  • each position may be provided on the beat of the soundtrack.
  • Such synchronisation with a soundtrack may enhance the relaxing effect of the training sequence, and therefore facilitate relaxing of a user’s gaze away from a central focus to a wider field of peripheral vision.
  • the system may permit the user to select a soundtrack for a training session, e.g. from a music library stored on a mobile device or a music streaming service.
  • the soundtrack could be, for example, binaural beats, nature sounds, music or other soundtrack.
  • the system may be configured to provide a sequence of visual stimuli based on the selected soundtrack, e.g. with slower tempo soundtracks being used for slower paced sequences (where positions and/or qualities of visual stimuli change less often) compared to faster tempo soundtracks which are used for faster paced sequences (where positions and/or qualities of visual stimuli change more often).
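Synchronising position changes to the rhythmic beat of a soundtrack amounts to scheduling one change per beat interval; a minimal sketch, assuming the tempo is known in beats per minute:

```python
# Illustrative sketch: one stimulus position change per beat of the soundtrack.
def beat_times(bpm, n_beats):
    """Times in seconds at which to change the stimulus position."""
    return [i * 60.0 / bpm for i in range(n_beats)]
```

A slower-tempo soundtrack then naturally yields a slower-paced sequence, since position changes are spaced further apart in time.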
  • the soundtrack may form an integral role in the selection of the training ‘programme’ on an interface (e.g. of a mobile app), with the user being able to select a soundtrack e.g. ‘Relaxing Rainforest’ or ‘Upbeat Dance’.
  • the technology described herein comprises systems and methods for relaxing a user’s gaze away from a central focus to a wider field of peripheral vision.
  • the use of such a system may provide a generally relaxing effect on the user.
  • this is shown, for example, by example brainwave data measured by an EEG device whilst a user is performing a training session in accordance with the present disclosure (in this case, the EEG device is a Muse TM 2 headband, and the data is graphed using “Mindmonitor” software).
  • the graph shows the relative strength of brain waves on the y (vertical) axis, normalised such that the total strength at any point is 1, and shows time in minutes on the horizontal (x) axis.
  • alpha wave activity associated with a more relaxed state of the user
  • beta wave activity associated with a less relaxed state of the user

Landscapes

  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Rehabilitation Tools (AREA)

Abstract

A method for relaxing gaze and/or training attention to peripheral vision, comprising providing visual stimuli simultaneously to the left and right monocular regions of a subject's peripheral vision, and a system comprising a head mounted device configured to provide such visual stimuli.

Description

HEAD MOUNTED DEVICE AND METHODS FOR TRAINING PERIPHERAL VISION
The present invention relates to devices for training relaxation of gaze by directing attention to peripheral vision.
In one aspect, the present invention provides a system for relaxing gaze and/or training attention to peripheral vision comprising:
a head mounted device configured to provide visual stimuli simultaneously to the left and right monocular regions of a user’s peripheral vision, wherein the device provides visual stimuli at an equal angular distance from the centre of a user’s vision on left and right sides, at a plurality of angular positions as measured on a horizontal plane.
In another aspect, the present invention provides a method for relaxing gaze and/or training attention to peripheral vision comprising:
providing visual stimuli simultaneously to the left and right monocular regions of a subject’s peripheral vision, at equal angular distance from the centre of a user’s vision on left and right sides, at a plurality of angular positions as measured on a horizontal plane.
In this regard, the left monocular region of a user’s peripheral vision is the region which can be viewed by the left eye only (and not the right eye). Likewise, the right monocular region of a user’s peripheral vision is the region which can be viewed by the right eye only (and not the left eye). This is in contrast to the binocular region which can be viewed by both eyes.
Having a relaxed gaze and open awareness of events in the peripheral visual field is important for many activities where there is a need for heightened awareness of one's general surroundings as opposed to singular focus on a tight central point, or foveal vision. For example, sport activities may require an awareness of movement, shape and colour in the extremities of vision, for instance the movement of other players, whilst keeping gaze anchored on the main focus of attention, for instance the nearest opponent, net or ball. Similarly, a tracker in a combat or hunting situation may wish to keep an open visual awareness to detect small changes in colour or movement in the widest possible visual field.
Techniques to soften focus and move attention away from a singular focus to the wider peripheral field are also used by therapists and Neuro Linguistic Programming (NLP) counsellors as a method to promote relaxation, reduce negative thought patterns and increase feelings of presence and calm in patients.
Finally, as an increasing proportion of people’s lives are spent with attention tightly focused on small screens, there is a need to actively help people disengage from this behaviour, which has been linked to tension and stress, at regular times throughout the day.
Previous technology in the field of peripheral vision training has presented various drawbacks which have prevented it from becoming accessible for wider use, for example outside a laboratory or other controlled setting.
The Applicant accordingly believes that there remains scope for improvements to technologies for training relaxation of gaze by directing attention to peripheral vision.
The Applicant has recognised that providing visual stimuli simultaneously to the left and right monocular regions at equal angular distance from the user’s central point of focus can assist with training the user’s peripheral vision whilst encouraging the user to keep a central focus. In such configurations, because the left and right visual stimuli can only be seen by a respective right and left eye, the user is unlikely to improve their perception of both these stimuli simultaneously by shifting their gaze away from a central focus. Thus, to succeed the user will naturally anchor their gaze centrally, without requiring any additional instructions, in order to keep in view both left and right stimuli simultaneously. (In this regard, a central focus corresponds to a user’s focus being directed generally towards their centre of vision, i.e. in the forwards direction).
Advantageously, it is therefore not necessary to measure a user’s compliance with maintaining a central focus (e.g. by eye tracking or by a user self-reporting compliance), and accordingly in embodiments a user’s compliance with maintaining a central focus is not measured.
In this regard, providing visual stimuli simultaneously to the left and right monocular regions comprises providing a visual stimulus to the left monocular region of a user’s vision at the same time as providing a visual stimulus to the right monocular region of the user’s vision, e.g. such that the respective time intervals at which the respective left and right visual stimuli are provided at least partially overlap, and in some embodiments fully overlap (e.g. such that the right and left stimuli are provided for exactly the same period of time).
The visual stimuli provided simultaneously to the left and right monocular regions of a user’s vision may also be referred to herein as a “pair of left and right visual stimuli” or a “pair of visual stimuli”.
For most humans, the monocular region of vision comprises angular positions from about 60 to about 110 degrees from the centre of vision (in the left and right directions, for the left and right monocular regions respectively). Thus, in embodiments, providing visual stimuli in the left and right monocular regions respectively of the user’s peripheral vision comprises providing visual stimuli at one or more angular positions to the left and right respectively from the centre of a user’s vision, the one or more angular positions preferably being from about 60 degrees and about 110 degrees from the centre of vision of the user in the left and right directions. This allows the widest possible area of peripheral vision to be trained.
The centre of vision can be taken as the direction pointing forwards from the bridge of the user’s nose. The angular position of a visual stimulus in the left or right direction relative to the centre of a user’s vision, is measured as the angle between the forwards direction from the bridge of the user’s nose, and the visual stimulus, as measured along a horizontal plane (i.e. being the angle along the horizontal meridian). Accordingly, the direction straight ahead (forwards) of the user corresponds to an angular position of zero degrees, and positions to the left and right have angular positions greater than zero degrees.
Preferably (during a training session) a vertical position of the visual stimuli provided is close to the vertical position (height) of the user’s eyes, preferably being within 5 cm (above or below) of the vertical position of the user’s eyes, preferably within 2 cm, preferably within 1 cm (preferably as measured in the vertical direction from the bridge of the user’s nose, which generally aligns with the middle of a user’s eye). Whether above, below, or exactly in line with the bridge of the user’s nose, the angular position of a visual stimulus provided in the left or right direction can be measured as above, by measuring the angle between the forwards direction from the bridge of the user’s nose and the visual stimulus, along a horizontal plane (so as to measure the angular position along the horizontal plane which the visual stimulus lies directly above or below).
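The angular position along the horizontal meridian can be computed from a stimulus's lateral and forward offsets from the bridge of the nose; a small illustrative sketch (the function name and centimetre units are assumptions):

```python
import math

# Illustrative sketch: angular position along the horizontal meridian of a
# stimulus at a given lateral and forward offset (in cm) from the bridge of
# the nose. 0 degrees is straight ahead; positions behind the eye exceed 90.
def angular_position_deg(lateral_cm, forward_cm):
    return math.degrees(math.atan2(lateral_cm, forward_cm))
```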
In embodiments, (during a training session) visual stimuli provided to the left and right monocular regions simultaneously are provided at an equal angular position to the left and right from the centre of a user’s vision (such that the left and right visual stimuli are provided at the same angular position as one another).
In other words, the left and right visual stimuli provided simultaneously are preferably provided at a same distance to the left and right of the bridge of the user’s nose, and at a same distance forwards or backwards of the bridge of the user’s nose as one another.
The Applicant has recognised in this regard, that providing visual stimuli simultaneously at an equal angular position to the left and right from the centre of the user’s vision can help the user to retain a relaxed centred gaze. Providing visual stimuli at equal angular positions may be generally more relaxing than providing visual stimuli which are mis-matched in angular position.
In embodiments, (during a training session) visual stimuli provided to the left and right monocular regions simultaneously are provided at a same vertical position as one another. Visual stimuli at a same vertical position may be generally more relaxing than visual stimuli which are mis-matched in vertical position.
As will be discussed in more detail below, during a training session, the angular position of left and right visual stimuli provided simultaneously may vary.
During a training session, the vertical position (height) of the left and right visual stimuli provided simultaneously may (also) vary. Alternatively (and preferably), each pair of left and right visual stimuli provided simultaneously could be provided at a same vertical position (height).
Aspects of the present invention comprise a head mounted training device for providing visual stimuli. Likewise, the method of the invention may be performed using a training device such as a head mounted device.
The visual stimuli provided may be any suitable and desired stimuli which are visually discernible by a user. Each visual stimulus preferably comprises provision of one or more of: a colour, intensity, texture, size, shape, localised movement or other visual quality by a visual element. Different visual stimuli may be provided by changing one or more such qualities (e.g. colour) of the visual element.
The visual elements could be mechanical elements. However, preferably, the visual elements are light elements.
Thus, preferably the training device (head mounted device) comprises one or more visual elements, more preferably comprising one or more light elements.
Preferably, providing visual stimuli comprises activating one or more visual elements, preferably activating one or more light elements (of the head mounted device), e.g. at a desired angular position so as to provide visual stimuli at that angular position. Activating one or more light elements preferably comprises illuminating the one or more light elements (e.g. with a coloured light).
The Applicant has found that light elements may be particularly effective for providing visual stimuli in the left and right monocular regions of a user’s vision, as these can be readily discernible despite the user having low visual acuity in these regions.
The one or more light elements could be any suitable and desired light elements. The one or more light elements could comprise a continuous light element (which spans a range of angular positions), for example such as a screen, e.g. an LCD or plasma screen or projection onto a screen. In this case, providing a visual stimulus preferably comprises illuminating a portion of the continuous light element, e.g. in a particular colour, shape or pattern.
Alternatively, the one or more light elements could comprise discrete light elements (which are provided at discrete angular positions), for example such as individual or groups of lights, e.g. light emitting diode (LED) lights. In this case, providing a visual stimulus preferably comprises illuminating one or more of the discrete light elements (by illuminating individual or groups of the discrete light elements).
In preferred embodiments, the one or more light elements comprises one or more (variable colour) LED lights.
The one or more light elements are preferably activated (illuminated) to provide visual stimuli within the left and right monocular regions of a user’s vision, as discussed above.
Thus, (when the training device is in a training position) the one or more light elements preferably span a range of (e.g. are provided at plural) angular positions within the left and right monocular regions of a user’s vision, preferably within about 60 to about 110 degrees from the centre of vision in the left and right directions. Accordingly, the one or more light elements preferably span a range of (e.g. are provided at plural) positions forwards and/or backwards relative to the bridge of the user’s nose.
In embodiments, the one or more light elements are present only within the left and right monocular regions of a user’s vision (and preferably are positionable so as to be present only within the left and right monocular regions of a user’s vision).
Alternatively, in embodiments, light elements could also be present outside of the left and right monocular regions (e.g. in the binocular region) of a user’s vision, but preferably light elements are not activated (visual stimuli are not provided) at angular positions outside the left and right monocular regions during a training session (in a training sequence of visual stimuli). In such embodiments, the system may be configurable to determine (the method may include determining) which angular positions (e.g. which discrete light elements) fall within the left and right monocular regions of a user’s vision, and during a training session (in a training sequence) activate light elements at those angular positions only. In this regard, the system may be configured to perform a calibration routine or receive a user input in order to identify the angular positions falling within the user’s left and right monocular region of vision, and accordingly determine which angular positions light elements should be activated at during a training session and/or for a training sequence.
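The determination of which discrete light elements fall within the left and right monocular regions, described above, amounts to filtering element positions against the region bounds obtained from a calibration routine or user input. A minimal sketch (function and parameter names are illustrative, not from the description):

```python
def elements_in_monocular_region(element_angles_deg,
                                 region_min_deg=60.0,
                                 region_max_deg=110.0):
    """Return the indices of discrete light elements (e.g. LEDs) whose
    angular position along the horizontal meridian falls within the
    user's monocular region.

    The region bounds could be the general 60-110 degree defaults, or
    values identified for this user by a calibration routine or a user
    input, as described above; only elements at these indices would be
    activated during a training session.
    """
    return [i for i, angle in enumerate(element_angles_deg)
            if region_min_deg <= angle <= region_max_deg]
```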
For discrete light elements, e.g. LEDs, the discrete light elements preferably comprise an array of light elements, the light elements being provided at a plurality of left and right angular positions (when the training device is in a training position).
For example the array of discrete light elements, e.g. LEDs, could form a single row of light elements, having a same vertical position (height) such that each row extends horizontally, e.g. at a vertical position close to that of the user’s eyes. Alternatively, the discrete light elements could form plural (e.g. two, three, or more, e.g. up to five) horizontally extending rows of light elements, each row at a different vertical position (height) close to the height of the user’s eyes. Alternatively, other grouping or patterns of discrete light elements could be provided within the array of light elements.
The Applicant has found that visual elements, e.g. such as LED light elements can suitably be incorporated into a head mounted device. The Applicant has furthermore found that a head mounted device can provide a compact and portable form for providing visual stimuli, which is accessible to everyday users. In this way, the head mounted device can be used in any environment throughout the day as desired to provide training of peripheral vision and relaxation of gaze. In this regard, the present device does not necessarily require large static components, such as PCs or cameras, or complex hardware which needs to be finely tuned in a laboratory setting. Due to the compact and portable form, the head mounted device can be of equal use to professional athletes who require short training sessions interspersed throughout the day, as well as to office workers who require a screen break and rest for their eyes after an intensive period of attention on screens, and would benefit from a short session where focus is softened and attention moved to the periphery.
In this regard, the head mounted device may be any suitable and desired device which is configured to be mounted to a user’s head. In embodiments, the head mounted device is mountable (indirectly) to a user’s head by (removably) mounting on an item of headwear, such as for example a pair of over-head headphones, a headband, a hat, or a pair of glasses or the like.
During a training session, the visual elements (e.g. LED lights) (of the head mounted device) are (accordingly) provided in proximity to a user’s head (and eyes), preferably within a distance of about 150 mm from a user’s left and right eyes respectively, preferably within a distance of about 100 mm, preferably within about 80 mm, preferably within about 70 mm. In embodiments the distance is from about 40 mm to about 60 mm.
In embodiments, one or more (or all) of the visual elements (which are to be activated during a training session) are provided (are configurable to be provided) at a distance of at least about 5 mm from the user’s left and right eyes respectively, preferably at a distance of at least about 10 mm, preferably at least about 20 mm, preferably at least about 30 mm.
Preferably the head mounted device comprises a pair of elongate members, wherein one or more visual elements (light elements) are provided on each elongate member of the pair. Thus, preferably the head mounted device comprises a left elongate member comprising one or more left visual elements, and a right elongate member comprising one or more right visual elements. The one or more visual elements on an elongate member together form a visual display unit.
The pair of elongate members are preferably formed integrally with or mountable (attachable) to an item of headwear, such as one or more of: a pair of over-head headphones, a headband, a hat, a pair of glasses or the like. In an embodiment, the pair of elongate members may be mountable to a rim of and/or the arms of a pair of glasses. In another embodiment, the pair of elongate members are mountable or integrated within a brim of a cap. In another embodiment, the pair of elongate members are formed integrally with or are attachable to a pair of over-head headphones.
In this regard, the term ‘elongate’ typically indicates that each elongate member is longer than it is wide (has a length which is greater than its width). The visual display unit on each elongate member is preferably also elongate. Preferably, each elongate member (and visual display unit) is at least 2 times as long as it is wide (has a length which is at least twice its width), preferably at least 3 times as long as it is wide (has a length which is at least three times its width). In this regard, the length corresponds to the average length of the elongate member (or visual display unit) measured along the member (or visual display unit) from one end to the other, and the width corresponds to the average width of the elongate member (or visual display unit) measured across the member (or visual display unit) from one side to the other.
In embodiments, the length of the visual display unit is at least about 50 mm, preferably at least about 60 mm, preferably at least about 70 mm. The length of the visual display unit may be less than about 150 mm, preferably less than about 100 mm. In embodiments the width of the visual display unit is at least about 1 mm, and in embodiments less than about 50 mm, preferably less than about 20 mm. For example, for a visual display unit having (only) a single row of LED lights, the width of the visual display unit may be about 6 mm, whereas for two rows of LED lights the width may be about 11 mm.
The length of each of the elongate members is, in embodiments, greater than or equal to the length of the visual display unit. In embodiments the length of each elongate member is up to about 150 mm, preferably up to about 130 mm. For example, for elongate members attached or mountable to a set of headphones, the length of the elongate members may be about 130 mm (however, other lengths could be used if desired).
If the elongate members (or visual display unit) are curved, then the length and width are measured along (so as to include) the curve.
In embodiments, each elongate member comprises an attachment means by which the elongate member is attached to or mountable to an item of headwear. In embodiments, the attachment means (point of attachment) is towards one end of the elongate member (in the length direction), preferably with the other end (in the length direction) of the elongate member being free (such that the elongate member is cantilevered). Alternatively, the elongate members could be attached or mountable to an item of headwear in any other suitable and desired manner.
The visual elements provided on each elongate member may be of the form discussed above. For example, each elongate member may comprise a continuous visual element which extends along at least part of the elongate member. Alternatively (and preferably), each elongate member may comprise an array of discrete light elements (e.g. LEDs), the array of discrete light elements (e.g. LEDs) extending along at least part of the elongate member.
Preferably the one or more visual elements on the left and right elongate member are provided at the same relative positions along the elongate members (and thus can be activated to provide left and right visual stimuli at a same angular position as one another). Thus preferably, the one or more visual elements on the left and right elongate members are mirror images of one another.
When mounted on a user’s head in a training orientation, for performing a training session, the elongate members are preferably oriented so as to extend substantially horizontally (in a substantially horizontal plane), and are preferably positioned substantially at the height of the user’s eyes. In the training orientation, preferably the one or more visual elements on each elongate member are positioned to allow provision of visual stimuli at a plurality of angular positions to the left and right of the centre of the user’s field of vision (and accordingly at a plurality of positions forwards and/or backwards of the bridge of the user’s nose). In the case of one or more rows of discrete light elements provided on an elongate member, in the training orientation, the rows of light elements extend substantially horizontally.
In embodiments, each elongate member is curved (along its length), such that in the training configuration each elongate member is curved in the horizontal plane, so as to at least partly wrap around the user’s head. This may facilitate positioning one or more (or all) of the visual elements on the left and right elongate members (and in embodiments the elongate members are configured to position them) at approximately a same distance from the user’s left and right eyes respectively, if desired for viewing comfort.
In embodiments, the pair of elongate members are positionable (moveable) so as to position the one or more visual elements within (and preferably only within) the left and right monocular regions of the user’s vision. In embodiments, the elongate members are extendible and retractable (along their length) so as to alter the position of the one or more visual elements, e.g. via a telescopic mechanism or other suitable mechanism. Alternatively, the pair of elongate members may be mountable at (and movable to) different positions (e.g. forwards and backwards) on a head worn device (e.g. different positions along the brim of a cap) so as to alter the position of the one or more visual elements. In embodiments, the pair of elongate members may be rotatable (e.g. about or near their attachment means) and/or distortable (bendable) along at least part of their length (e.g. between the attachment means and the visual display unit). This adjustability allows the head mounted device to be adapted for providing visual stimuli in the monocular region, for example for users with different vision ranges and nose shapes.
Preferably, in the training orientation, the elongate members do not extend into the binocular region of the user’s vision. Thus, preferably, in the training orientation there is a gap between the elongate members, and preferably an angular gap (measured along the horizontal meridian from a bridge of user’s nose) of at least 20 degrees, preferably at least 45 degrees, preferably at least 60 degrees, preferably at least 90 degrees (and preferably up to 120 degrees).
In embodiments, the pair of elongate members are movable relative to the head-worn device, so as to move the elongate members away from a training orientation (and preferably into a stowed orientation, preferably where the elongate members substantially cannot be seen by a user). For example, in embodiments (e.g. where the elongate members are integrated into a pair of headphones), the elongate members may be rotatable upwards away from a training orientation when training is not being performed, and rotatable downwards into a training orientation when a training session is desired to be performed. In embodiments where the elongate members are integrated within or attachable to a pair of over-head headphones, the elongate members may be configured to be rotated upwards to align with (e.g. to be stowed within) the head-band portion of the headphones. The elongate members may additionally (or alternatively) be foldable, or otherwise collapsible into a smaller form when a training session is not being performed.
The Applicant has recognised that a pair of elongate members each comprising one or more visual elements (e.g. light elements) having one or more of the features discussed above, may provide a lightweight and adjustable means for providing visual elements to the left and right monocular regions of a user’s vision. The elongate members may be less intrusive than, e.g. a conventional virtual reality or augmented reality display headset which is designed to fill the user’s entire field of vision. Accordingly, a user may be able to continue wearing the head-mounted device having the pair of elongate members for a desired period of time for the training session, and also between training sessions without the pair of elongate members causing distraction or discomfort.
As discussed above, the methods and systems disclosed herein allow left and right visual stimuli to be provided simultaneously, preferably at a same (angular) position as one another. The visual stimuli can preferably be provided at a range of one or more (angular) positions (e.g. by activating left and right visual elements at a desired (angular) position).
Preferably (during a training session), a sequence of left and right visual stimuli are provided. The sequence may be referred to herein as a ‘training sequence’, since it is provided for the purpose of training the user’s peripheral vision.
In embodiments, (in the ‘training’ sequence) left and right visual stimuli provided simultaneously are provided at a plurality of (angular) positions in turn (and accordingly preferably at a plurality of positions in the forwards and/or backwards direction, relative to the position of the bridge of the user’s nose). The plurality of positions at which visual stimuli are provided form a sequence of positions.
One or more (different) visual stimuli may be provided at each (angular) position, in the training sequence, for example by changing one or more qualities of visual stimuli provided at a position and/or among the positions. Thus, in embodiments, during a training session, one or more qualities of visual stimuli provided are permitted to vary (and preferably do vary).
As will be discussed in more detail below, a user may (the system may be configured to allow a user to) control various parameters for the training sequence (e.g. the positions at which visual stimuli are provided and/or the qualities of the visual stimuli to be provided). This may allow fine, granular, user control of the training sequence.
Alternatively, or additionally, the user may interact with the system at a (more) abstracted level. For example, in embodiments, a user may (the system is configured to allow a user to) select a training program from a plurality of training programs. Each training program may comprise (differ in) one or more ‘training’ sequences of visual stimuli that it provides (e.g. with respect to the order of positions and/or the qualities of stimuli provided, e.g. as will be described in more detail below). A training program could comprise a plurality of different ‘training sequences’, each forming an ‘exercise’ for training the user. The training sequence(s) of a training program could be (and in embodiments are) provided with a (particular) soundtrack and/or with a sequence of (e.g. audio) instructions (e.g. directing the user to interact with the system in a particular way during the training sequence). For example, the training program could be an energising program choreographed to upbeat dance music, or a relaxing program choreographed to forest sounds, or a session choreographed to a recorded (e.g. meditative) instruction soundtrack.
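One way such a training program might be represented in software is sketched below. All names (the dataclasses, fields, and the example soundtrack file) are hypothetical illustrations of the structure described above, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingSequence:
    """One 'exercise': a sequence of stimulus positions and qualities."""
    positions_deg: list        # angular positions (degrees) to visit in turn
    colours: list              # colours the visual stimuli may take
    stimulus_duration_s: float # how long each stimulus is shown

@dataclass
class TrainingProgram:
    """A selectable program comprising one or more training sequences,
    optionally choreographed to a soundtrack or instruction audio."""
    name: str
    sequences: list
    soundtrack: str = ""       # e.g. a path to an audio file

# e.g. a relaxing program choreographed to forest sounds
relaxing = TrainingProgram(
    name="relaxing",
    sequences=[TrainingSequence([60.0, 70.0, 80.0],
                                ["blue", "green"], 2.0)],
    soundtrack="forest_sounds.mp3",  # hypothetical file name
)
```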
During a training session, the one or more qualities of the visual stimuli which are varied may comprise one or more of: a colour, intensity, texture, size, shape, localised motion of the visual stimulus.
For example, for a visual stimulus provided by a light element, a ‘texture’ of a visual stimulus may correspond to a texture or pattern of light formed by the light element. Intensity may be a colour intensity (saturation) and/or a brightness of a light element when activated. Localised motion may be motion about (e.g. centred) on a particular position.
In embodiments where the visual stimuli are provided by light elements (e.g. LED lights), a quality (and in embodiments the only quality) of the visual stimuli which is permitted to vary is a colour.
During a training session, one or more qualities may be permitted to differ (be mis-matched) between left and right visual stimuli provided simultaneously. Accordingly, the system may be configured to control the one or more qualities of the left and right visual stimuli independently. For example, in the case of light elements where the quality is colour, a left light element may be activated to be a particular colour (e.g. green), whilst a right light element may be activated to be a different colour (e.g. blue). However, regardless of whether or not one or more qualities differ (are mis-matched), the left and right visual stimuli provided simultaneously are preferably provided at a same angular position as one another.
The Applicant has found that providing left and right visual stimuli having a same quality (and preferably having identical qualities) as one another (e.g. having the same colour) is generally more relaxing than having mis-matched qualities (e.g. having different colours).
Accordingly, preferably for the majority of time during a training session, the left and right visual stimuli are provided with one or more (or preferably all) qualities being the same (e.g. having a same colour). Thus, preferably left and right visual stimuli with the same quality (or qualities) are provided more often than left and right visual stimuli with a differing quality (or qualities).
In embodiments, e.g. in advance of commencing a training session, a user is permitted to choose (the system is configured to receive a user selection for) the one or more qualities which are to be varied during a training session. For example, the user may be permitted to choose that colour is to be varied, and to choose which colours are to be provided. For example, a user could select, e.g., blue, green, and purple visual stimuli to be provided (and not red and orange visual stimuli). Alternatively, one or more qualities of the visual stimuli may depend on a training program selected by the user (e.g. an ‘energising’ program or a ‘relaxing’ program).
In embodiments, during a training session, one or more of the qualities of (e.g. the colour of) visual stimuli provided vary randomly (e.g. being selected according to a weighted random selection). In this way, the quality (e.g. colour) which is to be provided is not predictable by the user, which may improve user attention when using the device.
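The weighted random selection of a quality such as colour, mentioned above, can be sketched as follows (the function name and example weights are illustrative assumptions):

```python
import random

def next_stimulus_colour(colours, weights, rng=random):
    """Pick the colour of the next pair of visual stimuli by weighted
    random selection, so that the next colour is not predictable by
    the user. Left and right stimuli would normally both be given the
    selected colour, per the preference for matched qualities."""
    return rng.choices(colours, weights=weights, k=1)[0]

# e.g. blue weighted twice as heavily as green or purple
colour = next_stimulus_colour(["blue", "green", "purple"], [2, 1, 1])
```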
Preferably, during a training session, each visual stimulus (of the training sequence) is provided for a discernible period of time, to allow the user to perceive the visual stimulus. Preferably each visual stimulus is provided for a period of at least about 0.1 seconds, preferably at least about 0.5 seconds, preferably at least about 1 second. Preferably, each visual stimulus (of the training sequence) is provided for at most 60 seconds, preferably at most about 20 seconds, preferably at most about 10 seconds, preferably at most about 5 seconds (such that the user does not lose attention to the visual stimuli). In embodiments, each visual stimulus is provided for a time from about 0.5 seconds to about 20 seconds.
Preferably the amount of time that a visual stimulus is provided for is the same for each visual stimulus (in the ‘training’ sequence). In other words, the quality (or qualities) of visual stimuli preferably change at regular intervals in time. This can provide a relaxing effect.
Alternatively, in embodiments, the amount of time that a visual stimulus is provided for may be permitted to vary (varies), e.g. varying randomly (however, left and right visual stimuli presented simultaneously will preferably be provided for the same amount of time as each other). In this regard, the amount of time that a visual stimulus is provided for could be selected according to a weighted randomised amount of time. In that case, the amount of time for which a visual stimulus is presented will not be predictable by the user, which may help to improve user attention when using the training device.
In embodiments, the user is able to control (select) a duration of the visual stimuli. In embodiments, this is achieved by a user controlling (selecting) the amount of time between (rate of) visual stimuli. This duration (or rate) may be selected as desired by the user, for comfortable use of the system. Alternatively, the user may be able to control a duration of the visual stimuli by selecting a training program (e.g. an ‘energising’ program or a ‘relaxing’ program) and/or by selecting a rhythmic (e.g. musical) soundtrack to be provided with the training sequence (wherein the system may be configured to provide the visual stimuli in synchronisation with the beat of the soundtrack). Accordingly, the system is preferably configured to receive a user input for controlling the duration (or rate) of visual stimuli.
Preferably, when performing a training session (in the ‘training sequence’ of visual stimuli) a spacing between the left and right visual stimuli provided simultaneously increases with increasing time, and/or based on a user response.
In this regard, the spacing between the left and right visual stimuli provided simultaneously preferably corresponds to the angular spacing (angular distance) between the left and right visual stimuli as measured to the left and right from the centre of the user’s vision (from the bridge of the user’s nose, along the horizontal meridian). The spacing between the left and right visual stimuli thus corresponds to the sum of the angular positions of the left and right visual stimuli. Thus, increasing the spacing between left and right visual stimuli comprises providing left and right stimuli which are further apart from one another along the horizontal meridian.
Thus, preferably increasing the spacing between the left and right visual stimuli provided comprises increasing the angular position of the left and right stimuli (and correspondingly providing left and right stimuli which are further backwards). Conversely, decreasing the spacing between the left and right visual stimuli provided preferably comprises decreasing the angular position of the left and right stimuli (and correspondingly providing left and right stimuli which are further forwards).
The Applicant has recognised that, during the course of a training session, the user may become more relaxed and may become receptive to left and right visual stimuli which are deeper within their peripheral vision (and accordingly at wider angles, and further backwards within the peripheral vision). By increasing the spacing between the left and right visual stimuli, an increasingly wide visual field of the user can be trained.
In embodiments, increasing the spacing between (the angular position of) the left and right visual stimuli provided is performed in a defined, preferably predetermined manner (automatically, without receiving user input during the training session). For example, the spacing between the left and right visual stimuli provided may be increased according to a defined (e.g. predetermined) sequence of positions.
For example, the predetermined sequence of positions may progress from a predetermined initial (minimum) angular position of the left and right visual stimuli, to a final (maximum) angular position of the left and right visual stimuli, e.g. according to a predetermined pattern of positions. A user may be able to select in advance of a training session (the system is configured to receive a user input for) one or more of: a minimum angular position, a maximum angular position, and a pattern of positions (for example by the user selecting these parameters directly, or by selecting a desired training program).
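A predetermined sequence progressing from an initial (minimum) to a final (maximum) angular position could be generated as in the sketch below. The function name, step parameter, and the keeping of left and right angles equal (per the preferred equal-angular-position arrangement) are illustrative assumptions:

```python
def spacing_sequence(min_angle_deg, max_angle_deg, step_deg):
    """Predetermined sequence of (left, right) angular positions,
    progressing from the initial (minimum) to the final (maximum)
    angular position, so that the spacing between simultaneously
    provided left and right stimuli increases over the session."""
    positions = []
    angle = min_angle_deg
    while angle <= max_angle_deg:
        positions.append((angle, angle))  # equal left and right angles
        angle += step_deg
    return positions

# e.g. progressing from 60 to 110 degrees in 10-degree steps
```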
Alternatively, the spacing between (angular position of) the left and right visual stimuli could be increased based on a user input during the training session. The spacing between (angular position of) the left and right visual stimuli could also be decreased based on a user input during the training session.
The user input which is used to increase and/or decrease spacing between left and right visual stimuli may comprise an active (conscious) user input, comprising a user actively interacting with the system, e.g. to select appropriate parameters. Alternatively, the user input may comprise a passive (subconscious) user input, for example an input detected by a suitable sensor.
For example, the user input may comprise a user selecting (e.g. adjusting) one or more positions at which the user desires visual stimuli to be provided, and the system may accordingly provide visual stimuli at positions among those one or more positions.
Alternatively, as will be discussed in further detail below, the user input could be a sensed or user-reported level of relaxation of the user, and/or an input indicative of the user’s perceptiveness to the visual stimuli. In this regard, the spacing between (angular position of) left and right visual stimuli is preferably increased when it is determined that the user has a higher level of relaxation and/or better perceptiveness to the visual stimuli (and conversely the spacing between left and right visual stimuli is preferably decreased when it is determined that the user has a lower level of relaxation and/or worse perceptiveness to the visual stimuli). Other user input(s) could also or instead be used, if desired.
In an embodiment, increasing (or conversely decreasing) the spacing between the left and right visual stimuli comprises providing a pair of left and right stimuli which have a larger (or conversely smaller) spacing compared to one or more previous (preferably immediately preceding) pairs of left and right stimuli.
In this regard, it is possible to increase (or decrease) the spacing between left and right visual stimuli compared to immediately preceding pair(s) of stimuli (e.g. after a predetermined period of time and/or responsive to a user input, e.g. responsive to a user input indicative of level of relaxation and/or level of perceptiveness), and in embodiments this is done.
Alternatively, in embodiments, increasing (or decreasing) the spacing between left and right visual stimuli (e.g. over time and/or in response to user input) is done gradually such that there is an overall trend of increasing (or decreasing) the spacing between left and right visual stimuli.
For example, the positions at which left and right visual stimuli are provided could be determined (selected) on a weighted basis, and increasing the spacing between left and right visual stimuli could comprise increasing the weighting (and therefore the rate of occurrence) of positions which have a larger angular spacing (and are positioned further backwards). Conversely, decreasing the spacing between left and right visual stimuli could comprise increasing the weighting of positions which have a smaller angular spacing (and are positioned further forwards).
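The weighted selection described above can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation; the function name, the weighting schemes, and the example angular positions are assumptions chosen for illustration.

```python
import random

def pick_position(positions, widen):
    """Pick an angular position for the next stimulus pair on a weighted basis.

    positions: available angular positions (degrees back from the centre of vision).
    widen: True biases selection towards larger angles (further backwards),
           False biases it towards smaller angles (further forwards).
    The linear weightings here are illustrative only.
    """
    if widen:
        weights = list(positions)                               # larger angle -> higher weight
    else:
        weights = [max(positions) - p + 1 for p in positions]   # smaller angle -> higher weight
    return random.choices(positions, weights=weights, k=1)[0]

random.seed(0)  # fixed seed so the sketch is repeatable
positions = [60, 70, 80, 90, 100]
widened = [pick_position(positions, widen=True) for _ in range(1000)]
narrowed = [pick_position(positions, widen=False) for _ in range(1000)]
```

Over many selections, the `widen=True` weighting makes the further-backwards positions occur more often, increasing the average spacing without any abrupt jump.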
In embodiments, a gradual increase or decrease in the spacing between left and right visual stimuli is achieved by providing one or more cycles of visual stimuli (by performing one or more cycles of operation), wherein in each cycle visual stimuli are provided at one or more positions within a defined range of one or more positions. In embodiments, the position(s) at which left and right stimuli are provided is permitted to vary (can be altered) between cycles of activation, preferably by altering either or both of: the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
Preferably, varying the position(s) at which visual stimuli are provided comprises altering the position(s) for a cycle compared to a previous (preferably immediately preceding) cycle of visual stimuli.
In this manner, a spacing between left and right visual stimuli can be increased or decreased in graduated steps by changing position(s) for visual stimuli across one or more cycles. The Applicant has found that this allows a user to soften their gaze gradually, promoting a heightened sense of relaxation and calm.
Thus, in embodiments, increasing (or conversely decreasing) the spacing between left and right visual stimuli comprises either or both of: increasing (or conversely decreasing) the closest and/or furthest spacing of left and right visual stimuli within the range of one or more positions for a cycle; or increasing (or conversely decreasing) an (angular) position of one or more of the position(s) at which visual stimuli are provided within the range of one or more positions for a cycle.
The defined range of one or more positions for a cycle of visual stimuli preferably comprises a range of one or more angular positions (and accordingly a range of positions in the forwards and/or backwards directions).
Preferably, the range of one or more positions for a cycle comprises a range of one or more left positions for left visual stimuli, and one or more right positions for right visual stimuli. Preferably, the range of left and right position(s) are a mirror image of one another relative to the centre of the user’s vision, preferably such that left and right visual stimuli provided at positions in the range can be (and are) provided simultaneously at an equal angular position relative to the centre of the user’s vision.
In each cycle, the left and right visual stimuli are preferably provided by activating appropriate visual element(s), e.g. such as those visual element(s) described above. For discrete visual elements, e.g. LEDs, the range of one or more positions for a cycle preferably encompasses one or more discrete visual elements (at one or more different angular positions).
It would be possible to provide visual stimuli at each and every possible (angular) position within the range of one or more positions for a cycle (e.g. to activate each and every discrete visual element falling in the range during a cycle), and in embodiments this is done. In other words, in embodiments, during a cycle, visual stimuli may be provided at position(s) which are (all) adjacent one another.
Alternatively, during a cycle, visual elements may be provided at one or more positions within the range of positions comprising (forming) a sub-set of the possible positions in the range (e.g. such that a sub-set of visual elements falling in the range are activated during a cycle). In other words, one or more positions in the range of positions for a cycle may be skipped and no visual stimuli provided at those positions. In other words, during a cycle, one or more visual stimuli may be provided at (angular) positions which are spaced apart from one another (in the left direction or the right direction respectively for left or right visual stimuli).
As noted above, one or more positions at which visual stimuli are provided may be varied between cycles of visual stimuli by altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle. This may comprise adding or removing one or more positions at which visual elements are provided. In other words, this may comprise altering (e.g. adding or removing) one or more positions in the range of positions for a cycle which are skipped and at which no visual stimuli are provided. In other words, this may comprise increasing a spacing between the (angular) positions of one or more of the (respective left or right) visual stimuli provided in the cycle.
As noted above, one or more positions at which visual stimuli are provided may additionally or alternatively be varied between cycles of visual stimuli by altering the closest and/or furthest (angular) spacing of left and right visual stimuli in the range of one or more positions forming a cycle. In this regard, altering the closest spacing comprises altering a smallest angular position within the range of left and/or right positions (altering the furthest forward position), and conversely altering the furthest spacing comprises altering the largest angular position within the range of left and/or right positions (altering the furthest backwards position).
For example, the range of position(s) for a first cycle of operation (a first cycle of visual stimuli provided in a training session) may comprise a closest possible spacing in the monocular region (of the possible positions at which the system is able to provide left and right visual stimuli within the monocular region), e.g. corresponding to an angular position of about 60 degrees. The range of position(s) for later cycles of operation may comprise position(s) which are further apart.
The defined ranges of one or more positions for different (e.g. successive) cycles of visual stimuli could be non-overlapping, or could overlap.
Likewise, the discrete visual element(s) which fall within the range of position(s) for different (e.g. successive) cycles could include none, or one or more of the same discrete visual elements.
In embodiments, (e.g. to allow a smooth increase/decrease in the spacing of left and right visual stimuli among the cycles), there is at least some overlap between the range of position(s) for successive cycles of visual stimuli. For example, each range of one or more positions forming a cycle could have a same closest spacing between left and right visual stimuli, but could differ in the furthest spacing between left and right visual stimuli. Alternatively, each range of one or more positions could have a different closest spacing between left and right visual stimuli and a different furthest spacing between left and right visual stimuli compared to a preceding cycle. Other permutations are also possible.
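One of the overlap permutations above (same closest spacing, progressively larger furthest spacing between successive cycles) can be sketched as below. The function name, step size, and angular limits are illustrative assumptions, not values from the disclosure.

```python
def cycle_ranges(start=(60, 70), n_cycles=4, step=10, max_angle=110):
    """Generate (closest, furthest) angular ranges for successive cycles.

    Each cycle keeps the same closest spacing but extends the furthest
    spacing by `step` degrees (capped at `max_angle`), giving overlapping
    ranges and a gradual outward trend across cycles.
    """
    closest, furthest = start
    ranges = []
    for _ in range(n_cycles):
        ranges.append((closest, furthest))
        furthest = min(furthest + step, max_angle)
    return ranges

ranges = cycle_ranges()
```

Because the closest spacing is held fixed, each cycle's range overlaps the preceding one, which supports a smooth increase in spacing from cycle to cycle.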
As noted above, increasing (or decreasing) the spacing between left and right visual stimuli may be achieved by increasing (or decreasing) the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle.
Increasing (or decreasing) the spacing between left and right visual stimuli may also (or instead) be achieved by altering the one or more positions at which visual stimuli are provided within a cycle. In embodiments increasing (or conversely decreasing) the spacing between left and right visual stimuli is achieved by moving one or more visual stimuli to a larger (or conversely smaller) angular position within the range of positions for the cycle. In embodiments, increasing (or conversely decreasing) the spacing between left and right visual stimuli comprises increasing (or conversely decreasing) the average angular position of visual stimuli within the cycle (wherein the average angular position of visual stimuli can be calculated as the sum of the magnitude of the angular positions at which left and right visual stimuli are provided during a cycle, divided by the number of positions at which visual stimuli are provided during a cycle).
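The average angular position defined above can be computed directly; this sketch follows that definition, with left positions represented as negative angles and right positions as positive angles (a representation assumed for illustration).

```python
def average_angular_position(left_positions, right_positions):
    """Average angular position of a cycle's stimuli: the sum of the
    magnitudes of the angular positions at which left and right stimuli
    are provided, divided by the number of positions (per the definition
    in the text above).
    """
    magnitudes = [abs(p) for p in left_positions + right_positions]
    return sum(magnitudes) / len(magnitudes)

# Mirrored left/right stimuli at 60 and 80 degrees from the centre of vision.
avg = average_angular_position([-60, -80], [60, 80])
```

Moving any stimulus to a larger angular position raises this average, which is the sense in which the spacing of the cycle as a whole increases.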
In an embodiment, increasing (or conversely decreasing) the spacing between left and right visual stimuli between cycles comprises increasing (or conversely decreasing) the closest and/or furthest spacing between the left and right visual stimuli in the range of one or more positions for a cycle, and also increasing (or conversely decreasing) the spacing between the positions of one or more of the visual stimuli provided in the cycle. This may have an overall effect of widening (or conversely narrowing) the cycle. Other variations for altering the positions at which visual stimuli are provided are also possible.
Preferably, in each cycle of operation (for each cycle of visual stimuli), left and right visual stimuli are provided according to a sequence of positions.
Preferably, during a cycle, left and right visual stimuli are provided at a sequence of positions of progressively increasing spacing (are provided at progressively increasing angular positions, and accordingly progressively further back), the sequence preferably progressing from a closest spacing (smallest angular position, furthest forward position) in the range of one or more positions to a furthest spacing (largest angular position, furthest backwards position) in the range of one or more positions. In this way, a ‘wave’ of visual stimuli of increasing spacing is provided. At the end of each ‘wave’, the stimuli will have preferably reached the furthest extreme of peripheral vision that the user desires or that the program dictates at that time.
The sequence of visual stimuli provided at increasing positions (the ‘wave’ of stimuli) may be repeated one or more times within a cycle. In this regard, the Applicant has recognised that, regardless of whether the spacing of visual stimuli is increased or decreased between successive cycles, by providing ‘waves’ of visual stimuli which increase in spacing within each cycle, a relaxing effect which encourages user awareness to the peripheral vision can still be achieved.
As noted above, in a cycle, visual stimuli could be provided at each and every possible position within the defined range of position(s) for the cycle, or at a selection of positions within the range. In either case, the ‘wave’ of stimuli may progress through the relevant positions at which visual stimuli are to be provided in the cycle.
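A 'wave' of the kind described above can be sketched as a generator of mirrored left/right pairs at progressively increasing angular spacing. The mirrored-pair representation and the function name are illustrative assumptions.

```python
def wave(positions, repeats=1):
    """Yield simultaneous (left, right) stimulus pairs at progressively
    increasing angular spacing, from the closest spacing to the furthest,
    forming one 'wave' per repeat.

    positions: angular positions (degrees) within the cycle's range at
    which stimuli are to be provided; left/right pairs are mirrored.
    """
    ordered = sorted(positions)  # closest (furthest forward) first
    for _ in range(repeats):
        for angle in ordered:
            yield (-angle, angle)  # simultaneous mirrored left/right pair

pairs = list(wave([80, 60, 100], repeats=1))
```

Each pair places the left and right stimuli at an equal angular position relative to the centre of the user's vision, and successive pairs step further backwards until the furthest extreme of the cycle's range is reached.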
As mentioned above, in embodiments, a user’s perceptiveness to visual stimuli is determined and is used to increase (or decrease) the spacing between left and right visual stimuli. This increase (or decrease) in spacing may be done in any suitable and desired manner, e.g. such as using cycles of stimuli as described herein.
Determining a user’s perceptiveness to the visual stimuli may alternatively be advantageous in its own right, without being used to increase or decrease the spacing between left and right visual stimuli (which may proceed, for example according to a predefined sequence of positions, or may for example be responsive to a different user input, e.g. indicative of a level of user relaxation).
In embodiments, a user’s perceptiveness is determined (the system is configured to determine the user’s perceptiveness) based on a user identifying a target characteristic of left and/or right visual stimuli provided. The target characteristic preferably comprises a target quality for a visual stimulus, a matched (identical) quality between left and right visual stimuli provided simultaneously, or a mismatched quality between left and right visual stimuli provided simultaneously.
The target, matched, or mismatched quality may be any one or more of the qualities of visual stimuli described above. For example, a target quality could be a particular colour (e.g. green) visual stimulus. A matched quality could be a matched colour (e.g. a green left visual stimulus provided simultaneously with a green right visual stimulus). A mis-matched quality could be a mis-matched colour (e.g. a green left visual stimulus provided simultaneously with a blue right visual stimulus).
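The three kinds of target characteristic described above (a target quality, a matched quality, and a mismatched quality) can be sketched as a single check on a simultaneous pair of stimuli. Colour is used as the example quality, per the text; the function name and argument names are assumptions.

```python
def pair_has_target_characteristic(left_colour, right_colour, mode, target='green'):
    """Decide whether a simultaneous left/right stimulus pair exhibits the
    chosen target characteristic.

    mode: 'target' (either stimulus has the target quality),
          'matched' (left and right qualities are identical), or
          'mismatched' (left and right qualities differ).
    """
    if mode == 'target':
        return left_colour == target or right_colour == target
    if mode == 'matched':
        return left_colour == right_colour
    if mode == 'mismatched':
        return left_colour != right_colour
    raise ValueError(f"unknown mode: {mode}")
```

The same check could be applied to any other quality of the stimuli (e.g. shape or intensity) by substituting that quality for colour.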
In embodiments, (e.g. in advance of commencing a training session, or as part of selecting a training program), a user is permitted to choose (the system is configured to receive a user selection for) one or more target characteristics for the visual stimuli. For example, the user may be permitted to choose a target quality (e.g. a green colour) or a quality which is to be matched or mis-matched (e.g. a colour being matched or mis-matched, rather than e.g. a shape).
In embodiments, during a training session, left and right visual stimuli having the one or more target characteristics are provided (for example, being provided one or more times within a ‘training sequence’ of visual stimuli). Preferably, visual stimuli having the target characteristic(s) are shown less often than visual stimuli not having the target characteristic(s).
Preferably, visual stimuli having the one or more target characteristic(s) are shown intermittently, such that the time between occurrences of the target characteristic(s) is variable and preferably randomised such that occurrences of the target characteristic(s) are not predictable by a user. The Applicant has recognised that varying the time between occurrences of the target characteristic may improve user attention when performing a training session.
Preferably, e.g. in advance of commencing a training session, a user is permitted to select (the system is configured to receive a user selection for) a rate at which the one or more target characteristics appear (e.g. so as to select a rate which is comfortable and relaxing for the user). In embodiments, this is achieved by a user controlling (selecting) the amount of time between (rate of) target characteristics. Alternatively, the rate of provision of target characteristics may vary based on a training program selected by the user (e.g. being relatively less frequent for a ‘relaxing’ program, and relatively more frequent for an ‘energising’ program).
During a training session, the system is preferably configured to receive (comprises a user input means for receiving) a user input indicative of whether a user has perceived a target characteristic. It is then determined whether the user has correctly perceived the target characteristic. In embodiments, if the user has correctly perceived the target characteristic, then the position or range of positions at which left and right stimuli are provided by the head-mounted device are altered.
In this regard, it is preferably determined that the user has correctly perceived a target characteristic if the user input comprises a response (if a user response is received) indicating that the user has perceived the target characteristic within a predefined period of time after the target characteristic has started being shown. The predefined period of time in embodiments corresponds to the amount of time for which the visual stimulus is provided (such that it is determined that a user has correctly perceived a target characteristic if the user input comprises a response whilst the target characteristic is being shown). Alternatively, the predefined period of time could be longer or shorter than the period of time for which the target characteristic is shown. The predefined period of time could be less than about 10 seconds, or less than about 5 seconds, or less than about 2 seconds, or less than about 1 second from the target characteristic starting being shown.
The user response may comprise a user identifying (confirming) that a target characteristic has occurred. If there are plural target characteristics (e.g. a blue colour, and a purple colour), correctly perceiving a target characteristic could require the user to provide a response (and correspondingly receiving a user response) which correctly identifies which of the plural target characteristics were shown (e.g. which of blue or purple were shown).
It may also be determined whether a user has not correctly perceived a target characteristic that has been shown. Preferably, it is determined that a user has not correctly perceived a target characteristic if a user response is received later than the predefined period of time (disclosed above) after the target characteristic has been shown, and/or if a user response is received before or without a target characteristic being shown. For example, in embodiments, it is determined that a user has not correctly perceived a target characteristic if a user response is not received whilst the target characteristic is being shown. If there are plural target characteristics (e.g. a blue colour, and a purple colour), incorrectly perceiving a target characteristic could comprise the user providing a response which incorrectly identifies which of the plural target characteristics were shown (e.g. identifying blue, when in fact purple was shown).
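The response-window logic in the two paragraphs above can be sketched as follows. Timestamps in seconds and the two-second window are illustrative assumptions; the text notes the window may instead match the stimulus duration or be longer or shorter.

```python
def classify_response(target_shown_at, response_at, window=2.0):
    """Classify a user response against a target-characteristic occurrence.

    target_shown_at: time (s) at which the target characteristic started
                     being shown, or None if no target was shown.
    response_at: time (s) of the user response, or None if no response.
    window: predefined period after the target starts within which a
            response counts as correct perception.
    """
    if response_at is None:
        # No response: a miss if a target was shown, otherwise nothing to score.
        return 'missed' if target_shown_at is not None else 'no_event'
    if target_shown_at is None or response_at < target_shown_at:
        return 'false_positive'  # response before, or without, a target
    if response_at - target_shown_at <= window:
        return 'correct'
    return 'late'
```

A 'correct' outcome would then feed the spacing increase described below, while 'missed', 'late', and 'false_positive' outcomes would count as incorrect perception.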
Preferably, in response to a user correctly perceiving (when a user correctly perceives) a target characteristic, a spacing between the left and right visual stimuli provided simultaneously is increased. Conversely, in response to a user incorrectly perceiving (when a user incorrectly perceives) a target characteristic, the spacing of the left and right visual stimuli could be decreased. Increasing/decreasing the spacing of the left and right visual stimuli may be done, for example, in any of the ways described above (e.g., by changing a range of one or more positions forming a cycle of visual stimuli).
Accordingly, in embodiments, during a training session, the system is configured to: receive a user input in response to a user perceiving a target characteristic; determine whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, alter one or more positions at which left and right stimuli are provided. Preferably, altering one or more positions at which visual stimuli are provided comprises altering the range of one or more positions forming a cycle of visual stimuli provided and/or altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
In embodiments, the system is also configured to determine whether the user has incorrectly perceived the target characteristic, and to alter one or more positions at which left and right stimuli are provided correspondingly (preferably by altering the range of one or more positions forming a cycle of visual stimuli provided, and/or altering the one or more positions at which visual elements are provided within the range of one or more positions for a cycle).
The spacing of left and right visual stimuli (and preferably one or more positions at which visual stimuli are provided in a cycle) could be altered immediately in response to a user correctly (or incorrectly) perceiving the target characteristic, such that it is altered based on a single occurrence of the target characteristic.
Alternatively, the spacing of left and right visual stimuli could be altered after a predetermined (e.g. threshold) number of (e.g. successive) correctly or (e.g. successive) incorrectly perceived occurrences of a target characteristic, or responsive to the proportion of correctly or incorrectly perceived target characteristic occurrences (e.g. corresponding to a success rate of the user). This may allow a more subtle change to the spacing of the left and right visual stimuli, such that the spacing of the left and right visual stimuli is changed in a way that does not immediately follow a single correct (or incorrect) perceived target characteristic. In this way, a user is unlikely to associate their individual responses with changes to the spacing of visual stimuli, which may help to avoid a user having a stress response to correct (or incorrect) perception of visual stimuli (a stress response would potentially undermine the relaxing effect of the training).
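The proportion-based (success-rate) variant described above can be sketched as follows. The thresholds, step size, and angular limits are illustrative assumptions.

```python
def adjust_spacing(spacing, outcomes, threshold=0.8, step=5,
                   min_spacing=60, max_spacing=110):
    """Adjust the left/right stimulus spacing (degrees) from the user's
    recent success rate, rather than from any single response.

    outcomes: recent perception outcomes as booleans (True = correctly
    perceived). The spacing only moves when the success rate is clearly
    high or clearly low, so individual responses do not produce an
    immediately visible change.
    """
    if not outcomes:
        return spacing
    success_rate = sum(outcomes) / len(outcomes)
    if success_rate >= threshold:
        return min(spacing + step, max_spacing)     # widen gradually
    if success_rate <= 1 - threshold:
        return max(spacing - step, min_spacing)     # narrow gradually
    return spacing                                  # middling rate: no change
```

Because the adjustment depends on a window of outcomes, a user is unlikely to associate any individual response with a change in spacing, consistent with the aim of avoiding a stress response.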
Alternatively (as discussed herein), the system may increase the spacing of left and right visual stimuli irrespective of whether the user has correctly (or incorrectly) perceived target characteristics (such that the correct (or incorrect) perception of target characteristics is determined but not used to adjust the spacing of the left and right visual stimuli). Determination of a user’s perceptiveness to target characteristics, in and of itself, may still provide a useful output indicating a user’s awareness to visual stimuli in the peripheral field of vision.
Other parameters of the system could additionally (or alternatively) be changed in response to a user correctly (or incorrectly) perceiving the target characteristic, for example such as one or more of: the particular target characteristic (e.g. the target colour), the rate of occurrence of the target characteristic, and the rate that visual stimuli are provided. For example, when a user correctly identifies a target characteristic, then the target characteristic may change to a more subtle characteristic (e.g. a more subtle colour difference, or intensity difference, or shape difference etc. compared to other visual stimuli provided), and/or the target characteristics may be provided more or less often, and/or the visual stimuli may be provided at a faster rate.
In embodiments, during a training session, the system is configured to provide positive feedback to a user when it is determined that a user has correctly perceived a target characteristic. The system could also (or instead) provide negative feedback when it is determined that a user has incorrectly perceived a target characteristic (although in embodiments no negative feedback is provided, to avoid causing a stress response from the user). The positive (or negative) feedback could be given immediately, and preferably each time, a user correctly (or incorrectly) perceives a target characteristic. Alternatively, the positive (or negative) feedback could be given based on a proportion of correct (or incorrect) user responses (e.g. based on a determined success rate of the user).
The positive (or negative) feedback could comprise any suitable and desired feedback, such as a visual, audible, or other sensory stimulus. For example, positive feedback could comprise a sequence of visual stimuli forming a ‘success’ sequence, e.g. a single wave of stimuli progressing from the forwards-most to the backwards-most visual stimuli of the head mounted device.
In embodiments, the system is configured to (and the method comprises) keeping a record of the user’s perception of visual stimuli, preferably by recording one or more of: a number or proportion of correctly perceived stimuli; a number or proportion of incorrectly perceived stimuli; and an average time which the user took to respond to stimuli. Preferably the record of the user’s perception is provided to the user as a training report, once a training session is complete.
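The record-keeping described above can be sketched as a small running tally producing the training report. The class and field names are illustrative assumptions.

```python
class PerceptionRecord:
    """Running record of a user's perception of target stimuli in a session."""

    def __init__(self):
        self.correct = 0
        self.incorrect = 0
        self.response_times = []  # seconds, for correctly perceived stimuli

    def log(self, correct, response_time=None):
        """Record one target occurrence's outcome."""
        if correct:
            self.correct += 1
            if response_time is not None:
                self.response_times.append(response_time)
        else:
            self.incorrect += 1

    def report(self):
        """Summary statistics for the end-of-session training report."""
        total = self.correct + self.incorrect
        return {
            'correct': self.correct,
            'incorrect': self.incorrect,
            'proportion_correct': self.correct / total if total else 0.0,
            'average_response_time': (
                sum(self.response_times) / len(self.response_times)
                if self.response_times else None
            ),
        }

record = PerceptionRecord()
record.log(True, response_time=1.0)
record.log(True, response_time=2.0)
record.log(False)
report = record.report()
```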
Regarding the user response indicating that a user has perceived the target characteristic(s), the user response could comprise a response provided consciously (actively) by the user (e.g. by the user interacting with a suitable input means when the user perceives, or believes they have perceived, the target characteristic). Alternatively, the user response could be provided subconsciously (passively) (e.g. by a user input means sensing a state of a user).
A user response could comprise, for example, a user pressing a button or other touch sensitive input device (e.g. touching a button on a screen of a mobile phone), making a movement (e.g. gesture), making a sound (e.g. verbal input), actively thinking a measurable thought, or performing any other action measurable by a user input means.
The system may accordingly comprise a suitable input means for receiving a user response, for example comprising any one or more of: a button or other touch sensitive input, a movement sensor (e.g. motion detector or accelerometer), a sound sensor (microphone), an electromyography (EMG) sensor (a sensor responsive to muscular motion), an electroencephalography (EEG) sensor (a sensor responsive to brain wave activity), or other desired sensor. The user input means could be provided as part of the head-mounted device, or by a handheld device (e.g. such as a controller or joystick), or by a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like).
Preferably, for the purposes of a user identifying a target characteristic, the input means (which receives the user input) is configured to be operated without the user shifting their gaze. Thus, in a preferred embodiment, the input means is a relatively large button displayed on the screen of a portable electronic device (e.g. within an app on a mobile phone or tablet), the button having an area of at least 1 cm2, preferably at least 2 cm2, preferably at least 3 cm2, and/or occupying at least 10%, preferably at least 20%, preferably at least 30% of the area of a screen of the portable electronic device.
The system may (also) be configured to receive responses and accordingly comprise a user input means (e.g. such as those described above) for other purposes, for example for configuring one or more parameters in advance of or during a training session.
As discussed above, in embodiments, the system is (additionally or alternatively) configured to alter (and the method comprises altering) the spacing between left and right visual stimuli provided simultaneously based on a level of relaxation of the user.
Accordingly, in embodiments, a position or range of positions at which the left and right visual stimuli are provided is controlled based on a level of relaxation of a user.
The level of relaxation of the user may be an indicated level of relaxation (e.g. based on a user self-reporting a level of relaxation), or may be a detected level of relaxation (e.g. being sensed by a sensor).
Thus in embodiments, the system is configured to receive (and the method comprises receiving) a self-reported level of relaxation provided actively (consciously) by a user (e.g. via the user interacting with a suitable user input device, such as any of the input devices discussed above).
In embodiments, the system is (additionally or alternatively) configured to receive (and the method comprises receiving) a sensor output sensing a physical state of the user, the sensor output indicative of a level of relaxation of a user. The sensor may be configured to sense, and to provide an output indicative of one or more of a user’s: motion, breathing, heart rate, blood pressure, brain wave activity, or other physical property.
Preferably, the system is configured to determine a level of user relaxation from the sensor output. For example, one or more of more agitated movements, shorter breaths, higher blood pressure, certain patterns of brain wave activity, or other sensor inputs, are preferably used to indicate (are preferably correlated to) a lower level of relaxation (the user being less relaxed). Conversely, preferably one or more of slower user movements, longer breaths, lower blood pressure, certain patterns of brain wave activity or other sensor inputs are preferably used to indicate (are preferably correlated to) a higher level of relaxation (the user being relatively more relaxed).
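The mapping from sensor outputs to a relaxation level described above can be sketched as a simple score. This is a deliberately crude illustration: the linear scaling, the 0-1 score, and the numeric anchor points are assumptions, and a real system would need calibrated thresholds per user and sensor.

```python
def relaxation_level(breath_period_s, heart_rate_bpm):
    """Map breathing and heart-rate readings to a 0-1 relaxation score.

    Longer breaths and a lower heart rate score as more relaxed,
    consistent with the correlations described in the text. The anchor
    points (2 s / 8 s breaths, 100 / 50 bpm) are illustrative only.
    """
    breath_score = min(max((breath_period_s - 2.0) / 6.0, 0.0), 1.0)   # 2 s -> 0, 8 s -> 1
    heart_score = min(max((100.0 - heart_rate_bpm) / 50.0, 0.0), 1.0)  # 100 bpm -> 0, 50 bpm -> 1
    return (breath_score + heart_score) / 2.0
```

A score of this kind could then drive the spacing adjustment: a higher level of relaxation increases the spacing, a lower level decreases it.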
As mentioned above, in embodiments, the spacing between (angular position of) left and right visual stimuli is preferably increased when it is determined that the user has a higher level of relaxation (and conversely the spacing between left and right visual stimuli is preferably decreased when it is determined that the user has a lower level of relaxation).
Increasing and/or decreasing the spacing of the left and right visual stimuli in response to the user’s level of relaxation may be done, for example, in any of the ways described above (e.g., by changing a range of one or more positions forming a cycle of visual stimuli). Accordingly, preferably the range of one or more positions at which right and left stimuli are provided (e.g. for a cycle of visual stimuli) is preferably selected based on the level of relaxation of the user.
In embodiments, during a training session, the visual stimuli are provided (activated) in synchronisation with a rhythmic beat of a soundtrack. In this regard, preferably, the position of successive visual stimuli provided (e.g. within a cycle) changes in synchronisation with the beat of the soundtrack. For example, the position of successive visual stimuli could change (exactly) on the beat of the soundtrack, or the rate of change of position of visual stimuli could be correlated to the speed of the beat.
Alternatively (or additionally) one or more qualities of the visual stimuli (e.g. colour) could be configured to change in synchronisation with a rhythmic beat of a soundtrack.
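Scheduling position changes on the beat, as described above, can be sketched as computing the change times from the soundtrack's tempo. The function name and the tempo value are illustrative assumptions.

```python
def stimulus_change_times(bpm, n_beats):
    """Times (seconds from the start of the soundtrack) at which successive
    stimulus positions change, one change exactly on each beat of a
    soundtrack at the given tempo (beats per minute).
    """
    beat_interval = 60.0 / bpm  # seconds per beat
    return [i * beat_interval for i in range(n_beats)]

times = stimulus_change_times(120, 4)
```

Correlating the rate of position change to the beat speed (the alternative mentioned above) would simply scale `beat_interval` rather than locking changes to beat boundaries.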
The Applicant has found that providing the visual stimuli in synchronisation with a rhythmic beat of a soundtrack has a synergistic effect of improving relaxation and allowing the user to become aware of visual stimuli provided wider within their peripheral vision.
The soundtrack provided could be a musical and/or verbal soundtrack.
A verbal soundtrack provided simultaneously with the visual stimuli may comprise instructions guiding a user though the training session (e.g. through a training program), e.g. comprising any of: informing a user of a target characteristic(s) to be identified, encouraging a user to breathe, providing guided meditation, or any other suitable and desired instructions.
Accordingly, in embodiments, the system is configured to play a soundtrack. In embodiments, the system is configured to play the soundtrack by controlling a speaker integrated into the head mounted device or a speaker external to the head mounted device (e.g. the speakers of a mobile phone) via a suitable wired or wireless communication (e.g. Bluetooth). For example, when the head mounted system is incorporated into a pair of over-head headphones, the system is preferably configured to control the over-head headphones to play the soundtrack.
In embodiments, the user is permitted to select (the system is configured to receive a user selection for) the soundtrack which is to be played, e.g. from a library of plural different soundtracks, e.g. stored on the head-mounted device, or a portable electronic device coupled thereto, or a cloud-based music service.
As discussed above, during a training session, the position at which visual stimuli are provided may vary over time, or in response to a user input (e.g. a level of user relaxation or a user’s perceptiveness to visual stimuli). Various triggers could be used to end a training session, e.g. such as a predetermined period of time for a training session having elapsed (e.g. a soundtrack finishing), receiving a user input indicating that a user wishes to end the training session, determining that a user has reached a particular level of relaxation, reaching a cycle of visual stimuli which is a final cycle (e.g. being a cycle with the largest spacing of visual stimuli among a predetermined set of cycles of visual stimuli), or other suitable and desired triggers for ending a training session.
As will be appreciated from the above, in the ‘training’ sequence, one or more (preferably a majority of, preferably all) of the visual stimuli are provided within the left and right monocular regions of the user’s vision, and preferably one or more (preferably a majority of, preferably all) of the visual stimuli are provided simultaneously to the left and right of the centre of the user’s vision preferably at a same angular and/or vertical position as one another.
In embodiments, (during a training session), visual stimuli forming only the ‘training' sequence of visual stimuli are provided. Alternatively, during a training session, other visual stimuli which are not part of the ‘training’ sequence of visual stimuli could be provided (e.g. for the purposes of conveying information to a user), however such visual stimuli which are not part of the ‘training’ sequence are preferably provided in a manner which does not distract from the ‘training’ sequence.
Outside of a training session, visual stimuli could be provided (the head mounted device could be configured to provide visual stimuli) in a different manner to that described herein for the ‘training sequence’ (e.g. for the purposes of conveying information to a user), and in embodiments this is done.
Thus, for example, outside of a training session and/or in addition to a training sequence, visual stimuli could be provided (the head mounted device may be configured to provide visual stimuli) which are one or more of: provided to the left and right individually (not simultaneously); provided to the binocular region of a user’s vision; provided simultaneously to the left and right at different angular positions; provided simultaneously to the left and right at different heights relative to the user’s eyes, etc.
The system described herein, including the head mounted device, may operate under the control of any suitable and desired controller or controllers, for example comprising one or more processors. The one or more processors may comprise a microprocessor, a programmable FPGA (field programmable gate array), etc.
For example, a controller may be integrated into the head mounted device, e.g. for controlling the activation of visual elements to provide visual stimuli.
In embodiments, a controller (processor) integrated into the head mounted device may operate to perform the methods of the present invention independently (such that the head mounted device is configured to operate as an isolated system, without any external control).
Alternatively, the head mounted device may be configured to communicate with one or more other (external) devices having processors thereon for the purposes of implementing the methods described herein and controlling the head mounted device. The external device (which in embodiments forms part of the present system) may comprise, e.g. a portable electronic device (e.g. mobile phone or tablet), laptop, desktop computer, cloud computing service, or other device.
In a preferred embodiment, the head mounted device is configured to communicate with (and the system comprises) a portable electronic device (e.g. mobile phone or tablet) for implementing the methods described herein.
The methods in accordance with the present disclosure may be implemented at least partially using software, e.g. computer programs. It will thus be seen that the present disclosure may provide computer software code for performing the methods described herein when run on one or more data processors.
The computer program (computer software code) may be executed by a processor integrated within the head mounted device. Alternatively, (and preferably) one or more external devices (e.g. a mobile phone) may execute a computer program (e.g. an application, e.g. a mobile phone app) for controlling the head mounted device for implementing the methods described herein.
The present disclosure may suitably be embodied as a computer program product for use with the present system.  The computer program product may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CDROM, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques.
As will be appreciated from the above, the present system preferably comprises one or more input means for receiving user inputs.
The user input means could be provided as part of (integrated into) the head-mounted device. Alternatively, the input means could be an external device, such as a handheld device (e.g. such as a controller or joystick), or a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like), or a sensor device, or other external device.
As discussed herein, the input means may be configured to receive user inputs provided actively (consciously) or passively (subconsciously) by the user.
For example, the system may be configured to receive a user input comprising one or more of, for example a user: pressing a button or other touch sensitive input device (e.g. a button displayed on the screen of a portable electronic device), making a movement (e.g. gesture), making a sound (e.g. verbal input), actively thinking a measurable thought, or performing any other action measurable by a user input means.
The user input means may comprise, for example, any one or more of: a button or other touch sensitive input, a movement sensor (e.g. motion detector or accelerometer), a sound sensor (microphone), an electromyography (EMG) sensor (a sensor responsive to muscular motion), an electroencephalography (EEG) sensor (a sensor responsive to brain wave activity), a breath sensor, or other desired sensor, e.g. such as those described herein.
The controller(s) (processor(s)) of the present system are preferably configured to receive input data from the one or more input means, and to use the input data to implement the methods described herein.
The present system preferably also comprises one or more output means for providing an output to a user.
The output means could be provided as part of (integrated into) the head-mounted device. Alternatively, the output means could be an external device, such as a handheld device (e.g. such as a controller or joystick), or a portable electronic device (e.g. such as a mobile phone, tablet, laptop or the like), or other external device.
For example, the output means could comprise one or more of: a visual element of the head mounted device, an external display (e.g. a display of a portable electronic device), a speaker, or any other suitable and desired output device. The output means may provide an output to a user comprising one or more of: an auditory output, a haptic output, a visual output, or other suitable and desired output.
The controller(s) (processor(s)) of the present system are preferably configured to control the one or more output means, to provide an output to a user as indicated in the methods described herein.
In embodiments, the output means is controlled so as to provide instructions to a user for using the system of the present invention. Preferably, one or more auditory instructions are provided to a user when using the head mounted device.
Where the system comprises one or more devices external to the head mounted device, the head mounted device and the one or more external devices are preferably configured to share data via a suitable wired or wireless connection, e.g. such as Bluetooth or WiFi. Preferably, the head mounted device is configured with wireless connection capability for connection to one or more external devices.
The system may comprise one or more memories for storing data for implementing the methods described herein, e.g. such as for storing computer software code, calibration data, user inputs, a record of user relaxation levels and/or user perceptiveness to visual stimuli provided during a training session, or other suitable and desired data.
The present system preferably comprises a suitable power source for powering the head mounted device. The power source may comprise a wired or wireless connection from the head mounted device to a power source, or preferably an integrated power source (e.g. battery).
Various embodiments will now be described, by way of example only, and with reference to the accompanying drawings in which:
Fig. 1 shows a head-mounted training device in accordance with embodiments of the present invention, the training device integrated into a set of over-head headphones, and comprising left and right arms which are shown rotated downwards in a training position, each arm comprising light elements for providing visual stimuli in a monocular region of a user’s vision.
Fig. 2 shows a top view of the head-mounted training device of Fig. 1.
Fig. 3 illustrates the position of the arms of the device of Fig. 1 with respect to a trainee’s visual field during a training session.
Fig. 4 is a rear view of the head mounted device of Fig. 1 illustrating example relative positions of light elements on left and right visual displays during a training session.
Fig. 5 illustrates some of the positions of light elements shown in Fig. 4 from a top view aspect.
Fig. 6 shows an example screen display of a mobile app in embodiments of the present invention, during a training session.
Fig. 7 is a schematic diagram of a system in accordance with embodiments of the invention.
Fig. 8 shows an alternative embodiment of the head mounted training device in accordance with the present invention, the training device being mounted on the brim of a cap.
Fig. 9 shows an alternative embodiment of the head mounted device in accordance with the present invention, the training device comprising a pair of arms mountable to the arms of a pair of glasses.
Fig. 10 shows an alternative embodiment of the head mounted training device in accordance with the present invention in a training position, the training device being attached to a pair of headphones.
Fig. 11 shows example Electroencephalography (EEG) data, indicating a change in alpha and beta wave activity during a training session using a device in accordance with the present invention.
Fig. 12 is a flowchart showing an embodiment for controlling the positions of visual stimuli based on a user’s perceptiveness to visual stimuli.
As discussed above, the technology disclosed herein relates to methods and systems for training peripheral vision, particularly by providing visual stimuli simultaneously to the left and right monocular regions of a user’s (trainee’s) vision.
Figures 1 to 3 show various views of a head-mounted training device 100 for providing visual stimuli, in accordance with embodiments of the present invention.
The training device 100 shown in Figures 1 to 3 is shown integrated into a headset in the form of over-head headphones 105.
The training device could however be incorporated into or mountable to the brim of a cap, and in embodiments this is done, as shown for example in Fig. 8, which shows a training device 800 having elongate members 101, 102 mounted to a brim 801 of a cap. The training device could also or instead be incorporated into or mountable to a pair of glasses, for example as shown in Fig. 9. Another configuration using headphones is also shown in Fig. 10. Like features among these various embodiments are indicated with like reference numerals.
Referring to Figures 1 to 3, the training device 100 comprises a left elongate member 102 in the form of an arm which extends along the left-hand side of a user’s head, and a right elongate member 101 in the form of an arm which extends along the right-hand side of a user’s head.
For reference, the left L’, right R’, forwards F’ and backwards B’ horizontal directions (together forming the horizontal plane) are indicated in the drawings, along with the upwards U’ and downwards D’ vertical directions. In this regard, directions are preferably defined with respect to the orientation of the user’s head, such that the left L’, right R’, forwards F’, and backwards B’ horizontal directions (and accordingly the horizontal plane), and likewise the upwards U’ and downwards D’ vertical directions, move as the user’s head moves. In other words, the horizontal and vertical directions correspond to a world orientation when a user’s head is in its usual upright position, but deviate from the world orientation if a user tilts their head.
As shown in Fig. 1, for example, in embodiments, the elongate members 101, 102 are attached (or attachable) to an item of headwear, e.g. headphones, by an attachment means 110 at one end of each elongate member, with the other end of the elongate member being free so that it is cantilevered. This is similarly the case in the embodiment shown in Fig. 10. This may be similarly the case for elongate members attached (or attachable) to other items of headwear, e.g. a cap, such as shown in Fig. 8.
Each of the elongate members 101, 102 respectively comprises a visual display unit 103, 104 towards the distal (forwards) end of the elongate member which is operable to provide visual stimuli to the user. In the embodiment shown, the visual display unit comprises a plurality of discrete visual elements 106, 107 which can each be activated to provide a visual stimulus. In the embodiments shown, the visual elements comprise light elements in the form of an array of colour LED lights.
The colour LED lights 106, 107 are preferably configured to provide visual stimuli with differing colour. Other qualities of the visual stimuli could also be variable, e.g. such as the intensity (brightness) of visual stimuli. In embodiments where LED lights are grouped together, then different patterns or shapes of LED lights could be illuminated simultaneously to provide different qualities of visual stimuli.
Other visual elements could be used instead of LED lights, such as for example a continuous visual element on each elongate member 101, 102, e.g. an LCD or plasma screen or light projection on each elongate member. Such visual elements may similarly provide visual stimuli with variable qualities, e.g. such as colour, intensity, texture, size, shape, or localised motion.
As can be seen in Fig. 2, the elongate members 101, 102 have a length L which is larger than their width W. The visual display units 103, 104 are also elongate, having a length L' which is larger than their width W’.
This allows for a relatively light-weight and compact configuration, which does not interfere with the user’s vision, so that the head-mounted device can be used as part of a daily routine.
Figures 1 to 4 show a head mounted device in a training position. In embodiments, in the training position, the elongate members 101, 102 (and likewise the visual display units 103, 104 and visual elements 106, 107 thereon) extend substantially horizontally and substantially at the height (vertical position) of the user’s eyes. This is illustrated in Figures 1 and 2 for example, and also at least in Fig. 10.
In the training position, the visual display units 103, 104 (and accordingly the visual elements, LED lights 106, 107) of the elongate members 101, 102 are provided only in the right monocular region 301 and left monocular region 302 of the user’s vision. In embodiments, the right and left elongate members 101, 102 (and likewise the right and left visual elements 106, 107) do not extend into the binocular region 303 of the user’s vision. This is shown, for example, in the top view of Fig. 3.
(Alternatively, the visual elements could extend into the binocular region. In such embodiments, preferably the head-mounted training device is controlled so as to activate visual elements only in the monocular region during a training session. Visual elements falling within the monocular region are preferably identified in a calibration routine, or based on a user identifying which visual elements can be seen by a single eye only).
As illustrated in Fig. 3, the right monocular region 301 is the region of a user’s vision which is visible only to the user’s right eye, and the left monocular region 302 is the region visible only to the user’s left eye (as compared to the binocular region 303 of the user’s vision which is visible to both right and left eyes).
The right monocular region 301 for a human typically includes positions at angles α from about 60 to about 110 degrees to the right of the centre of the user’s vision 304. The left monocular region 302 for a human likewise typically includes positions at angles β from about 60 to about 110 degrees to the left of the centre of the user’s vision 304. The centre of the user’s vision in this regard can be taken to be the direction directly forwards from the bridge 305 of the user’s nose, and the angles α, β can be measured from the bridge of the user’s nose in the right and left directions respectively along a horizontal plane (i.e. being the angle along the horizontal meridian 306).
Accordingly, preferably, in the training position, the visual elements on the right and left elongate members 101, 102 are present within a range of angular positions from about 60 to about 110 degrees in the right and left monocular regions. More preferably, the visual elements span a segment along the horizontal meridian of at least 30 degrees (thus preferably, the visual elements span angular positions between 60 and at least 90 degrees to the left and right of the centre of a user’s vision).
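The angular ranges given above could be expressed, purely for illustration, as simple range checks. The following sketch uses hypothetical helper names and assumes angles are measured in degrees from the centre of the user’s vision along the horizontal meridian, as described above.

```python
# Illustrative constants from the description above: the monocular region
# typically spans about 60 to about 110 degrees from the centre of vision.
MONOCULAR_MIN_DEG = 60.0
MONOCULAR_MAX_DEG = 110.0

def in_monocular_region(angle_deg):
    """True if an angular position (degrees from the centre of vision)
    lies within the typical monocular region."""
    return MONOCULAR_MIN_DEG <= angle_deg <= MONOCULAR_MAX_DEG

def spans_required_segment(element_angles_deg, min_span_deg=30.0):
    """True if the visual elements within the monocular region span at
    least `min_span_deg` along the horizontal meridian (the preferred
    minimum of 30 degrees, e.g. covering 60 to at least 90 degrees)."""
    inside = [a for a in element_angles_deg if in_monocular_region(a)]
    if not inside:
        return False
    return (max(inside) - min(inside)) >= min_span_deg
```

Such checks might be used, for example, as part of a calibration routine to confirm that the visual elements on each arm cover the preferred segment.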
In embodiments, the head mounted device is adjustable so as to position the visual display units 103, 104 of the elongate members 101, 102 (and accordingly the visual elements e.g. LED lights 106, 107) in the right and left unshared monocular regions of the user’s vision only.
For example, the elongate members 101, 102 may be extendible and retractable along their length (so as to be extendible and retractable forwards and backwards in the horizontal direction when in the training position). Other mechanisms could instead be used if desired. For example, the elongate members could be bendable, for example as shown in Fig. 10, in which the elongate members 101, 102 have a bendable section 1001 between their attachment means 110 and visual display units 103, 104.
The head mounted device may also be adjustable to fit a user’s head, e.g. having an adjustable main body, e.g. a head band 105 with telescoping mechanism 1002.
In embodiments where the head mounted device is a set of headphones, the headphones may comprise on-ear speakers 201, 202 as shown in Fig. 2 for example, or over-ear speakers 1004, 1005 as shown in Fig. 10 for example, or alternatively speakers that use bone conduction technology, or other suitable and desired speaker technology.
The position of the visual display units 103, 104 and visual elements 106, 107 of the head mounted device could also (or instead) be adjusted by changing a mounting position of the elongate members 101, 102. This may be particularly suitable for a head mounted device that is mountable to a pair of glasses, such as shown in Fig. 9. In this case, the elongate members 101, 102 are mountable to respective right and left arms 901, 902 of a pair of glasses, and can be moved forwards and backwards relative to the arms of the pair of glasses.
The elongate members could be extendible and retractable, and/or bendable, and/or mountable at different positions when provided with any suitable and desired item of headwear, such as headphones, a cap, etc.
Preferably, the elongate members 101, 102 are movable between a training position for performing training, and a stowed position when training is no longer desired to be performed. Preferably, in the stowed position, the elongate members and/or visual elements are not readily visible by the user (e.g. are positioned outside of the user’s field of vision).
In example embodiments, the elongate members 101, 102 are movable (e.g. rotatable) upwards into the stowed position, and downwards into the training position.
This may be achieved by means of a rotatable joint 110, e.g. at a proximal (rearwards) end of each elongate member, e.g. connecting the elongate member to the headwear (e.g. headphones), as shown for example in Figures 1 to 3. In Fig. 10, a male connector (e.g. jack) 1002 and a female receiver (e.g. socket) 1003 form the attachment means 110, and allow rotation of the elongate members 101, 102 when attached. Other mechanisms could instead be provided.
Although Figures 1 to 3 and Fig. 10 show a head mounted device in the form of over-head headphones, the elongate members could equally be movable into a stowed position when mounted on or incorporated into other items of headwear, such as a cap.
In embodiments, when in the training position, the elongate members 101, 102 are configured to electrically connect with a controller (processor(s)) for controlling activation of the visual stimuli (LEDs) and/or to a power source for providing power for activating the visual stimuli. Preferably, the elongate members are electrically disconnected when in the stowed position.
Alternatively, the elongate members 101, 102 could be electrically connected with a controller (processor(s)) and/or power source regardless of their position. Alternatively, the controller (processor(s)) and/or power source could be integrated within the elongate members.
Whilst the embodiments shown have two elongate members 101, 102 with visual elements which are activatable at positions within the left and right monocular regions, a single member (e.g. a single elongate member, or e.g. a VR headset comprising a single continuous screen, or other suitable and desired display) could instead be provided with one or more visual elements activatable at (controlled so as to activate at) positions within the left and right monocular regions simultaneously.
For providing training in accordance with the technology described herein, the visual elements (LED lights) 106, 107 are activatable at a plurality of angular positions α, β, as illustrated for example in Figures 4 and 5.
In the embodiments shown, an array of discrete visual elements in the form of LED lights 106, 107 is provided at a plurality of angular positions to the left and right of the centre of the user’s vision. This is achieved in embodiments by using one or more rows of LED lights, each row extending substantially in the horizontal plane close to the vertical level of the user’s eyes. In the embodiment shown in Figures 1-5 and 10, one row of LED lights is provided. Alternatively, two rows of LED lights could be provided (so as to form a ten-by-two array of coloured LEDs on each elongate member).
Positions of visual elements in an example embodiment are shown in Fig. 4 and labelled A to J, with a selection of the angular positions of the visual elements relative to the centre of the user’s vision shown in Fig. 5.
As can be seen from Figures 4 and 5 for example, visual elements (visual stimuli) which are at a larger angular position are further backwards and preferably further apart in the left L’ and right R’ directions.
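The relationship between an element’s physical position and its angular position can be sketched with simple geometry. The coordinate convention below is an assumption for illustration only: each element is described by a forward offset and a lateral offset from the bridge of the user’s nose, and the angle is measured from the centre of vision along the horizontal meridian, consistent with the description above (angles beyond 90 degrees correspond to positions behind the eye line, i.e. further backwards).

```python
import math

# Illustrative geometry sketch (assumed coordinates, not from the
# specification): an element at forward offset f metres and lateral offset
# x metres from the bridge of the nose has angular position atan2(|x|, f)
# from the centre of vision. Angles above 90 degrees arise when the forward
# offset is negative, i.e. the element sits further backwards.
def angular_position_deg(forward_m, lateral_m):
    return math.degrees(math.atan2(abs(lateral_m), forward_m))
```

On this convention, an element level with the eyes is at 90 degrees, and moving an element backwards while widening its lateral offset increases its angular position, matching the layout of positions A to J.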
Whilst the figures show a row of visual elements aligned in the horizontal plane such that each visual element in the row has approximately the same vertical position, other patterns of discrete visual elements (e.g. LED lights) could be used. For example, at each angular position, e.g. A to J, a group of visual elements could be provided. Alternatively, visual stimuli could be provided by activating a continuous visual element, e.g. an LCD screen, at different angular positions.
In alternative embodiments, the visual stimuli (e.g. discrete visual elements or activated positions of a continuous visual element) could differ in height among the angular positions.
As can be seen in at least Figures 4 and 5, the positions of visual elements (LED lights) 106, 107 are preferably mirror images of one another relative to the centre of a user’s vision. This allows the system described herein to activate visual elements at a same angular position in the left and right monocular regions of a user’s vision simultaneously.
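Because the left and right arrays are mirror images, a single index can address the same angular position on both sides. The following is a minimal sketch of simultaneous mirrored activation; the list-based LED representation and the function name are hypothetical stand-ins for whatever driver interface the hardware actually exposes.

```python
# Hypothetical sketch: the left and right LED arrays are mirror images, so
# one index selects the same angular position (e.g. one of A to J) on both
# sides. Setting both entries models simultaneous left/right activation.
def activate_pair(left_leds, right_leds, index, colour):
    """Light the left and right LEDs at the same angular position together."""
    left_leds[index] = colour
    right_leds[index] = colour
    return left_leds, right_leds
```

In a real device the two writes would be issued to the two display units in the same update so that the stimuli appear simultaneously.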
Preferably, when performing a training session, the head mounted device is controlled so as to provide a sequence of visual stimuli (a ‘training sequence’) at plural angular positions in turn. Throughout the training sequence, left and right visual stimuli are provided simultaneously at the same angular position as one another.
The system of the present invention may comprise any suitable and desired arrangement for controlling the head mounted device to activate the visual elements to provide a ‘training sequence’.
The system may also comprise one or more input devices, e.g. based on which the sequence of visual stimuli can be controlled.
Fig. 7 shows schematically a system 700 in embodiments of the present invention, in which a control system 715 integrated within the head mounted device comprises an on-board controller 701 (e.g. within one or both of the elongate members 101, 102) configured to control an output module 702, which controls the left display unit 104 and right display unit 103 of the elongate members so as to provide visual stimuli. The on-board controller 701 may be considered a central control unit, and may run updateable firmware for controlling the outputs of the left and right display units 103, 104.
The output module 702 may also control other output devices which are integrated within the head mounted device, such as vibrational motors 709, earphone speakers 710, and any other suitable and desired output devices. Such output devices may be used for providing useful outputs to the user, such as audio instructions, tactile feedback, an accompanying soundtrack for the training session, or any other suitable and desired outputs.
The input module 713 may receive input data from one or more input devices indicative of a user’s level of relaxation, e.g. such as a brainwave (EEG) sensor, heart rate sensor, blood pressure sensor or other sensor.
The controller 701 may receive input data from input devices via any suitable and desired wired or wireless connection.
The control system 715 is configured to draw power from a power supply integrated within the head mounted device. The power supply may be any suitable and desired power source, e.g., a rechargeable battery 704 chargeable via a USB charging port 708.
In embodiments, the controller 701 integrated within the head mounted device controls the activation of the visual elements and controls the output devices associated with the head mounted device, based on instructions received from an external controller (controller app 712) executing a computer program (e.g. application or “app”). In embodiments, the external controller is provided as part of an external device, e.g. a portable electronic device (mobile device 716).
In embodiments, the controller 712 (e.g. processor(s) running a software application) on the external device (mobile device 716) is configured to determine the training sequence of visual stimuli which are to be provided to the user, and to transmit instructions (via a transmission/reception module 711) to the head mounted device (e.g. to a transmission/reception module 705 of the head mounted device) accordingly. The controller 712 may also provide instructions for controlling the provision of an accompanying audio soundtrack and/or instructions via speakers 710, and any other desired e.g. tactile, audio or visual feedback based on the trainee’s responses.
Thus, in embodiments, a specialist software app running on a mobile device 716 controls the training session. However, in alternative embodiments, any kind of remote control device could be used instead.
The transmission of instructions from the external device (mobile device 716) to the head-mounted device may be done using any suitable and desired technology, e.g. such as wireless (e.g. Bluetooth, Wifi, etc) or wired communication. The headset is preferably controlled via Bluetooth or other wireless technology that connects with the receiver module 705.
One or more inputs used for determining the sequence of visual stimuli may be received by the controller 712 of the external device 716.
Input data, e.g. from sensors, may be transmitted (directly) to the external device 716 (without being first received by the control system 715 of the head mounted device).
The external device may also comprise a touch screen 717 or other input or output device(s) for allowing the user to interact with the external device (e.g. such as a keyboard, button, gesture or movement sensor, camera, microphone or other suitable and desired input device). Fig. 6 shows an example user interface for a touch screen of a mobile device, which may be provided during a training session to provide information to a user and receive user input.
Fig. 7 shows a system in which the training sequence of visual stimuli to be provided to the user is determined by an external device comprising a mobile device 716 (e.g. mobile phone or tablet). The external device could also or instead be any other suitable and desired device, e.g. a laptop, smart watch, wearable electronic device, desktop computer, cloud or internet-based computing service, or other suitable and desired external device.
Alternatively, the head mounted device itself may have an integrated controller (processor) which is configured to determine the training sequence of visual stimuli to be provided to the user, such that the head mounted device can be operated in isolation (without requiring an external controller).
As noted above, when performing a training session, the head mounted device is controlled (e.g. by way of external controller 712 and on-board controller 701) so as to provide a sequence of visual stimuli (a ‘training sequence’).
Throughout the training sequence, left and right visual stimuli are provided at various angular positions in turn, the left and right visual stimuli being provided simultaneously at the same angular position as one another. One or more qualities (e.g. colour) of the visual stimuli provided may vary (e.g. at an angular position and/or among the different angular positions).
Referring back to Figures 4 and 5 for example, the training sequence may comprise activating right and left visual elements (LED lights) simultaneously at any of the positions A to J.
Preferably, during a training session, a spacing between left and right visual stimuli increases over time and/or based on a user response. In this way, visual stimuli are provided further apart (wider in the peripheral field) as a user becomes more relaxed and/or aware of their peripheral vision. As will be seen below, this progression from visual stimuli which are relatively close together to relatively further apart can be embodied in any and all examples described herein.
In this regard, the spacing between the left and right visual stimuli is preferably measured along the horizontal meridian, and so corresponds to the sum of the angular positions α, β of the visual stimuli. A larger spacing accordingly corresponds to visual stimuli provided at a larger angle α, β from the centre of the user’s vision (and thus further backwards B’, and further to the left L’ and right R’).
The spacing between left and right visual stimuli in the training sequence could increase over time in a predetermined manner (and not depend on any user input during the training session). Alternatively, the spacing between left and right visual stimuli could increase depending on a user input.
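The two alternatives just described (advancing the spacing on a predetermined schedule, or advancing it in response to a user input) could be combined in a single position controller, sketched below. The function name, the use of an index into the positions A to J, and the timing scheme are all hypothetical.

```python
# Illustrative controller sketch (hypothetical names): the current index
# into the positions (e.g. A to J, ordered by increasing spacing) advances
# either on a fixed schedule (one step per `step_interval_s` of elapsed
# time) or when a user input is received (e.g. a detected increase in
# relaxation), but never past the widest position.
def next_position_index(current, num_positions, elapsed_s=None,
                        step_interval_s=None, user_advance=False):
    advance = user_advance
    if elapsed_s is not None and step_interval_s is not None:
        advance = advance or (elapsed_s // step_interval_s > current)
    if advance and current + 1 < num_positions:
        return current + 1
    return current
```

A purely predetermined session would call this with only the timing arguments; a user-responsive session would set `user_advance` from the input means described herein.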
An example predetermined training sequence in which the spacing between visual stimuli increases could be, for example: A, B, C, D, E, F, G, H, I, J (Example 1)
(In the example sequences described herein, each of A to J indicates the left and right visual stimuli (LEDs) at that respective position being illuminated simultaneously. Positions separated by a “,” indicate visual stimuli being shown in turn, in a consecutive period of time).
Example 1 shows a possible sequence of visual stimuli of increasing spacing forming a single cycle of visual stimuli comprising positions in the range A to J.
Whilst Example 1 shows the visual stimuli being provided at each and every position in the range A to J, some positions could be skipped if desired. For example, another predetermined training sequence could be: A, B, C, E, G, J (Example 1A)
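A predetermined sequence of this kind can be sketched in code. The following is an illustration only (not taken from the patent): the ten positions A to J are modelled as indices 0 to 9, with a larger index meaning a wider spacing between the left and right stimuli; the `step` parameter is an assumed way of skipping positions in the spirit of Example 1A.

```python
# Illustrative sketch: positions A..J as indices 0..9, larger index =
# wider spacing between the simultaneously shown left and right stimuli.
POSITIONS = "ABCDEFGHIJ"

def predetermined_sequence(step=1):
    """Return one pass of position labels with increasing spacing.

    step=1 reproduces Example 1 (A, B, ..., J); step > 1 skips
    intermediate positions, similar in spirit to Example 1A.
    """
    return [POSITIONS[i] for i in range(0, len(POSITIONS), step)]
```

For example, `predetermined_sequence()` yields all ten positions in turn, while `predetermined_sequence(2)` visits every other position.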
Preferably left and right visual stimuli provided at a position (e.g. a position from A to J) are provided for a period of time which is long enough for the user to be able to discern the visual stimuli.
At any particular angular position and/or among angular positions, one or more qualities (e.g. colour) of the visual stimuli may vary. For example, at position A, the colour of the left and right stimuli could progress through one or more colours such as blue, green, purple etc. in turn before progressing to position B. Visual stimuli having a variety of colours could likewise be provided at other positions, such as B, C, D, etc. The left and right visual stimuli could have the same or mis-matched qualities (e.g. colours). The particular colours provided could be selected by the system on a randomised basis, such that a user cannot predict which colour(s) will be shown.
Compared to Examples 1 and 1A, an increase in spacing can be performed more gradually by performing plural cycles of providing visual stimuli, wherein in each cycle visual stimuli are provided at positions within a range of one or more positions.
A sequence in embodiments of the present invention using plural cycles of visual stimuli is for example: Cycle 1 (A, A); Cycle 2 (A, B, A, B); Cycle 3 (A, B, C, A, B, C); Cycle 4 (A, B, C, D, A, B, C, D) (Example 2)
Thus, in the 1st cycle, visual stimuli are provided at positions within the range of positions consisting of position A. In the 2nd cycle the range of positions is A and B. In the 3rd cycle the range of positions is A and B and C. In the 4th cycle the range of positions is A and B and C and D. In later cycles the range could additionally include positions such as E or F or G etc. Similarly to the discussion above, at any particular angular position and/or between angular positions, one or more qualities (e.g. colour) of the visual stimuli may vary.
Thus, in Example 2, the range of positions differs in each cycle, and particularly a furthest spacing between positions of right and left visual stimuli is increased in each cycle, whilst the closest spacing in the cycle remains the same. In this example, in the 1st cycle the furthest spacing corresponds to right and left visual stimuli being provided at position A, whereas in the 2nd cycle the furthest spacing is at position B, in the 3rd cycle the furthest spacing is at position C, in the 4th cycle the furthest spacing is at position D, and so on. The closest spacing which is the same for each cycle, is at position A.
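The expanding-range behaviour of Example 2 can be sketched as follows. This is an illustration under stated assumptions, not the patent's implementation: cycle k covers positions A up to the k-th position, and the number of repetitions of the 'wave' within each cycle (`waves_per_cycle`) is an assumed parameter.

```python
POSITIONS = "ABCDEFGHIJ"

def expanding_cycles(n_cycles, waves_per_cycle=2):
    """Sketch of Example 2: in cycle k, the furthest spacing grows by one
    position while the closest spacing stays at A; each 'wave' runs from
    the closest to the furthest spacing and is repeated."""
    cycles = []
    for k in range(1, n_cycles + 1):
        wave = [POSITIONS[i] for i in range(k)]  # A up to the k-th position
        cycles.append(wave * waves_per_cycle)
    return cycles
```

With `waves_per_cycle=2`, the first three cycles reproduce Example 2's Cycle 1 (A, A), Cycle 2 (A, B, A, B) and Cycle 3 (A, B, C, A, B, C).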
In embodiments the closest (smallest) and/or furthest (largest) spacing between right and left visual stimuli can be altered in each cycle. For example, another training sequence in embodiments of the present invention could be: A, B, C, A, B, C, A, B, C… (Cycle 1); B, C, D, B, C, D, B, C, D… (Cycle 2); C, D, E, C, D, E, C, D, E… (Cycle 3) etc. (Example 3)
In Example 3, in each successive cycle, both the closest and furthest spacing between right and left visual stimuli is altered. In this example, the closest spacing in the 1st cycle corresponds to position A, in the 2nd cycle is position B, and in the 3rd cycle is position C. The furthest spacing in 1st cycle corresponds to position C, in the 2nd cycle is position D, and in the 3rd cycle is position E.
The one or more positions forming the range of positions may overlap for successive cycles (e.g. as in Examples 2 and 3 above), such that one or more of the same positions appear in successive cycles. Alternatively, the one or more positions forming the range of positions could be non-overlapping for successive cycles, for example a 1st cycle could have a range of positions being A and B, a 2nd cycle having C and D, a 3rd cycle having E and F, etc.
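Both the sliding window of Example 3 and the non-overlapping variant can be expressed as one parameterised sketch. The `width` and `stride` parameters are an assumed framing, not terms from the patent.

```python
POSITIONS = "ABCDEFGHIJ"

def sliding_window_cycles(n_cycles, width=3, stride=1):
    """Sketch: cycle k spans `width` consecutive positions starting at
    k * stride. stride=1, width=3 mirrors Example 3 (A-C, B-D, C-E);
    stride == width gives non-overlapping ranges (A-B, C-D, E-F)."""
    return [[POSITIONS[i] for i in range(k * stride, k * stride + width)]
            for k in range(n_cycles)]
```

Here the closest and furthest spacings both advance by `stride` positions each cycle, matching the discussion of Example 3 above.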
In embodiments, the position(s) of (e.g. and spacing between) visual stimuli could be varied within the range of possible positions for a cycle (in addition to or alternatively to changing the closest and/or furthest spacing for the cycle). An example where the spacing between visual stimuli is also changed between cycles is: Cycle 1: A, B, C, A, B, C, A, B, C… ; Cycle 2: A, C, E, A, C, E, A, C, E… ; Cycle 3: B, D, G, B, D, G, B, D, G… (Example 3A)
In Example 3A, the range of positions in cycle 1 is A to C, whereas in cycle 2 the range of positions is A to E (such that the furthest spacing between left and right visual stimuli in the range has increased to position E), and in cycle 3 the range of positions is B to G (such that the closest and furthest spacing between left and right visual stimuli in the range have increased to B and G respectively).
Furthermore, in Example 3A, cycle 1 comprises stimuli at adjacent positions only. In comparison, in cycle 2 the spacing between visual stimuli is increased, such that the positions at which stimuli are provided are not adjacent within the available positions for the head mounted device, i.e. such that positions within the range for the cycle are missed out (in cycle 2 a single position B or D is missed out between visual stimuli). In cycle 3, the spacing between the visual stimuli is further increased (with a single position C being ‘missed out’ between the visual stimuli at positions B and D, and with two positions E and F being missed out between the visual stimuli at positions D and G).
In embodiments, the relative position(s) of (e.g. and spacing between) visual stimuli could be varied within the range of possible positions for a cycle so that the average angular position increases in one or more successive cycles. An example of this is: Cycle 1: A, B, E, A, B, E, A, B, E… ; Cycle 2: A, C, E, A, C, E, A, C, E… ; Cycle 3: A, D, F, A, D, F, A, D, F… (Example 3B)
As illustrated in the above examples, preferably in each cycle a ‘wave’ of visual stimuli is provided which progresses from relatively smaller angular positions (relatively closer spacings) to relatively larger angular positions (relatively further spacings). Preferably the ‘wave’ of visual stimuli progresses from a smallest angular position (smallest spacing) to a largest angular position (largest spacing) of the position(s) at which visual stimuli are provided in the range of position(s) for the cycle.
For example, the ‘wave' in Example 1 comprises the positions A through J in turn. In Example 2, the ‘wave’ comprises positions A, B in turn in cycle 2, and positions A, B, C in turn in cycle 3. In Example 3A the ‘wave’ comprises positions A, C, E, in turn in cycle 2, etc.
Consistent with the above discussion, whilst the ‘wave’ could be formed of visual stimuli at each possible angular position within the range for a cycle (e.g. at each of A to J for Example 1), alternatively the wave may comprise a selection of the possible positions from the range (e.g. comprising A, B, H, I, J) such that some positions are skipped.
Within each cycle, plural ‘waves’ of stimuli could be provided (e.g. repeated).
The range of positions forming a (each) cycle, and the time spent in a (each) cycle (e.g. the number of ‘waves’ of stimuli in each cycle) could be predetermined, such that the range of positions at which visual stimuli are provided changes over time without any user input, and in embodiments this is done.
Alternatively, the positions at which visual stimuli are provided could be selected based on a user input.
The user input could be a user selecting one or more parameters for a training sequence prior to commencing the training session, e.g. a user selecting a minimum and maximum position for the visual stimuli to be provided during the training session, and selecting a rate at which the spacing between visual stimuli is to increase during the training session. Alternatively, the user input could be a user selecting a training program (e.g. a ‘relaxation’ program, e.g. ‘relaxation level 1’ or ‘relaxation level 2’, or an ‘energising’ program), the training program having one or more pre-configured training sequences with pre-configured parameters (e.g. such as the ranges of and spacing between visual stimuli in each cycle in the sequence). Based on this (or other suitable input parameters), the system may be configured to determine the sequence of stimuli to be provided (e.g. to determine the range of one or more positions forming each cycle, and the amount of time spent in each cycle).
Alternatively the user input, based on which the positions of the visual stimuli are selected for the training sequence, could be a user input during a training session, e.g. a user input indicative of a user’s level of relaxation and/or a user input indicative of a user’s perceptiveness to the visual stimuli during the training session.
Thus, in embodiments, a controller of the system (e.g. on-board controller 701 or external controller 712) may be configured to receive input data indicative of (and to determine) a user’s level of relaxation and/or a user’s perceptiveness to visual stimuli, and to adjust the positions at which visual stimuli are provided accordingly.
As noted above, the position of visual stimuli provided can be controlled based on a user’s perceptiveness to visual stimuli provided during a training session. In embodiments, a user’s perceptiveness is determined based on the user’s accuracy in identifying target characteristics of the visual stimuli.
A flow chart in the accompanying drawings shows steps for adjusting the positions of visual stimuli during a training session based on a user’s perceptiveness to visual stimuli.
Upon starting the training session, the system provides visual stimuli within an initial range of one or more positions, with a target characteristic intermittently shown.
The target characteristic could be any suitable and desired quality of the visual stimuli provided. In embodiments it is a target quality (e.g. a target colour, e.g. green) provided to the left, right or both monocular regions of a user’s vision. Alternatively, the target characteristic could be a matched or mis-matched quality (e.g. colour) between visual stimuli provided to the left and right monocular regions of the user’s vision.
The target characteristic is shown intermittently, so that the target quality (or matched, or mis-matched quality) occurs less often than other qualities (or mis-matched, or matched qualities). The target characteristic is preferably shown at times which are randomised so that a user cannot predict when it will occur.
For example, the quality (e.g. colour) of visual stimuli could be changed at regular intervals in time, but with the quality (e.g. colour) varied in a randomised manner (e.g. by selecting a weighted randomised colour). The system may be configured to change the quality (e.g. colour) in unison and/or differently for the left and right sides. For a percentage of the time, the quality (e.g. colour) on both right and left may match, and for a percentage of the time the quality (e.g. colour) may differ on the right and left.
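This weighted randomised selection could be sketched as follows. The colour list, the 10% target rate and the 80% match rate are assumptions for illustration, not values from the patent.

```python
import random

# Illustrative colour palette and rates; all values are assumptions.
COLOURS = ["blue", "purple", "yellow", "green"]

def next_stimulus_pair(target_colour="green", target_rate=0.10,
                       match_rate=0.80, rng=random):
    """Pick colours for the simultaneously shown left and right stimuli.

    With probability target_rate the target colour is shown on the left;
    with probability match_rate the right side matches the left,
    otherwise the two sides are mis-matched.
    """
    if rng.random() < target_rate:
        left = target_colour
    else:
        left = rng.choice([c for c in COLOURS if c != target_colour])
    if rng.random() < match_rate:
        right = left
    else:
        right = rng.choice([c for c in COLOURS if c != left])
    return left, right
```

Because the draw is randomised, the user cannot predict when the target quality (or a mis-match) will next occur, as described above.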
For example, for a target characteristic which is the presence of a green colour, for a cycle of operation comprising waves through positions A to D, a sequence of visual stimuli provided to the left and right could comprise, in turn:
A (blue, blue), B (purple, purple), C (purple, green), D (blue, blue), A (blue, blue), B (yellow, yellow), C (green, green), D (blue, blue), A (blue, blue), B (purple, purple), C (blue, blue), D (blue, blue).
Here, A (green, green) indicates green stimuli being shown simultaneously to the left and right at position A, whereas, for example, A (purple, green) indicates purple to the left and green to the right at position A.
The system determines whether the user has correctly perceived the target characteristic. Preferably, the system is configured to receive a user response indicating that the user has perceived the target characteristic.
Preferably, the user response comprises a user pressing a button on a screen of a mobile device when the user believes they have seen the target characteristic (e.g. the button 601 shown on the screen 600 in the accompanying drawings). The button should be large enough that the user can press it without having to direct their gaze away from the forwards direction.
Alternatively, any other suitable and desired user response could be used, e.g. a user actively (consciously) or passively (subconsciously) interacting with any suitable and desired input means of the system, e.g. a button or microphone or gesture detector or other input means.
Determining whether the user has correctly perceived the target characteristic may comprise determining whether the user has provided a response whilst the target characteristic is being shown (or within a particular time window after the characteristic has started being shown).
Conversely, it may be determined that the user has not correctly perceived the target characteristic if the user provides a response whilst the target characteristic is not being shown (e.g. before the target characteristic is shown, without a target characteristic being shown, or after the target characteristic has stopped being shown), or outside the above mentioned time window.
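The response classification described in the two paragraphs above could be sketched as a single check. Times are in seconds, and the 1.5 s response window is an assumed value, not one from the patent.

```python
def classify_response(response_time, target_onset, target_offset,
                      window=1.5):
    """Sketch: a response counts as a correct identification if it
    arrives while the target characteristic is shown, or within
    `window` seconds after it stops being shown; otherwise it is
    treated as a false alarm."""
    if target_onset is None:  # user responded with no target being shown
        return "false_alarm"
    if target_onset <= response_time <= target_offset + window:
        return "correct"
    return "false_alarm"
```

A response before the target appears, or too long after it ends, is classified as a false alarm, matching the conditions listed above.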
One or more positions at which visual stimuli are provided may then be adjusted based on whether the user has correctly perceived the target characteristic.
In this regard, the one or more positions could be adjusted immediately in response to a correct (or incorrect) identification of a single occurrence of a target characteristic. Alternatively, the range of positions could be adjusted after a threshold number of correct (or incorrect) identifications, or based on the proportion of correctly (or incorrectly) identified target characteristics.
Preferably, adjusting one or more positions at which to provide visual stimuli based on the user’s perceptiveness to visual stimuli comprises increasing the separation between visual stimuli when the user correctly identifies one or more occurrences of the target characteristic (and may conversely comprise decreasing the spacing between left and right visual stimuli when the user incorrectly identifies one or more occurrences of the target characteristic).
Similarly to the discussion above, for a training sequence which comprises one or more cycles of operations, increasing the spacing between visual stimuli may comprise increasing the closest and/or furthest spacing between left and right positions within a cycle of visual stimuli and/or adjusting the relative positions of visual stimuli provided during a cycle (e.g. compared to an immediately preceding cycle). Conversely, decreasing the spacing between visual stimuli may comprise decreasing the closest and/or furthest spacing between left and right positions within a cycle of visual stimuli and/or adjusting the relative positions of visual stimuli provided during a cycle (e.g. compared to an immediately preceding cycle).
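One concrete way to realise this adjustment is a staircase rule. The sketch below is an assumption for illustration (the patent does not prescribe a specific rule): the spacing widens after a set number of consecutive correct identifications and narrows after a single miss, clamped to the device's position range.

```python
class SpacingController:
    """Illustrative staircase controller for the position index
    (0 = closest spacing, e.g. position A; 9 = furthest, e.g. J)."""

    def __init__(self, position=0, max_position=9, up_after=2):
        self.position = position          # current position index
        self.max_position = max_position  # furthest position the device allows
        self.up_after = up_after          # correct answers needed to widen
        self._streak = 0                  # consecutive correct identifications

    def record(self, correct):
        """Record one identification outcome and return the new position."""
        if correct:
            self._streak += 1
            if self._streak >= self.up_after:
                self.position = min(self.position + 1, self.max_position)
                self._streak = 0
        else:
            self.position = max(self.position - 1, 0)
            self._streak = 0
        return self.position
```

The same position index could equally drive the closest and/or furthest spacing of a whole cycle rather than a single stimulus position.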
Alternatively, the positions of visual stimuli could be changed irrespective of whether correct (or incorrect) identifications are made. In this regard, in embodiments, correct (or incorrect) identifications by the user may be recorded (e.g. and displayed on a screen of a mobile device during the training session, or communicated to the user after the session is complete as a training report), without being used to control the spacing of visual stimuli during the training session.
The training session can be ended at any suitable and desired time. For example, the training session could end when a user indicates they wish to end the training session. Alternatively, the training session could end after a predetermined amount of training time has elapsed, or a particular number of correct identifications of the target characteristic have been made, or a particular set of one or more positions for the visual stimuli is reached (e.g. a set of one or more positions for visual stimuli which includes a furthest apart spacing of visual stimuli permitted by the training device).
Thus, as can be seen, in embodiments the progression of visual stimuli towards positions which are further apart is controlled based on the success of the trainee in correctly identifying specified characteristics of the visual stimuli.
Other features of the training sequence of visual stimuli could change in response to correct (or incorrect) identifications by the user, e.g. could change when the range of one or more positions changes. For example, the rate of change of the quality (e.g. colour) of visual stimuli, the rate of provision of visual stimuli, or the rate of occurrence of the target quality could also increase in response to correct (or incorrect) identifications by the user.
Qualities (e.g. other than colour, e.g. such as pattern, texture, localised movement) could also change in response to correct (or incorrect) identifications by the user. For example, a degree of contrast between stimuli on the left and right could be changed, for example subtler shades of colour may be introduced in response to correct identifications.
The system may allow a user to select one or more parameters for the training session. For example the user may select which quality (or qualities) are to be the target characteristics during a training session (e.g. allowing a user to select one or more target colours). The system may also be configured to receive a user selection as to the rate at which the target characteristics occur, and/or the rate at which the positions of the visual stimuli change. Alternatively, the user may be able to control various parameters (e.g. which quality (or qualities) are to be the target characteristics and/or a rate at which the target characteristics occur, and/or the rate at which the positions of the visual stimuli change) by selecting a training program from a plurality of pre-configured training programs (e.g. an ‘energising’ program or a ‘relaxing program’).
During the training session, the system may keep a record of the user’s perception of visual stimuli, which may be provided as a training report once a training session is complete. For example, as shown in the accompanying drawings, a mobile device of the system may display an indication of the proportion of target characteristics correctly identified 602, and the average time it took the user to identify each target characteristic 603. Thus, the speed and accuracy of identification of target characteristics can be measured and recorded by a mobile app 712.
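The two reported metrics could be computed from a simple session log. In this sketch, the `(identified_correctly, reaction_time_s)` event format is a hypothetical representation, not the patent's data model.

```python
def session_report(events):
    """Compute end-of-session metrics from a hypothetical event log of
    (identified_correctly, reaction_time_s) tuples: the proportion of
    target characteristics correctly identified, and the average time
    taken for each correct identification."""
    total = len(events)
    correct_times = [rt for ok, rt in events if ok]
    accuracy = len(correct_times) / total if total else 0.0
    mean_rt = (sum(correct_times) / len(correct_times)
               if correct_times else None)
    return {"accuracy": accuracy, "mean_reaction_time_s": mean_rt}
```

The resulting dictionary corresponds to the proportion shown at 602 and the average identification time shown at 603.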
If the training session ends after a predetermined amount of training time, then a final (most outwards) position of the visual stimuli at the end of the training session may provide a metric to indicate the trainee’s success rate.
Although changes in colour (or other quality) of visual stimuli are described above in the context of providing target characteristics for a user to identify to determine user perceptiveness, the system could change the quality of visual stimuli regardless of whether user perceptiveness is being monitored. The Applicant has recognised that, generally, changing the quality of visual stimuli throughout a training session may improve user attention and stop a user from losing interest. This may be particularly the case when changes in visual quality (e.g. colour) are randomised, so that a user cannot predict the quality (e.g. colour) that will next appear.
Whilst the visual quality (e.g. colour) can differ between the left and right visual stimuli provided simultaneously, the Applicant has found that left and right visual stimuli having the same qualities are more relaxing. Therefore, in embodiments left and right visual stimuli provided simultaneously preferably have identical qualities for a majority of the training session.
Regarding the position of the visual stimuli, in embodiments, the position at which the left and right visual stimuli are provided is changed in synchronisation with a rhythmic beat of a soundtrack, the soundtrack being provided e.g. by means of a suitable speaker, e.g. integrated into the head-mounted device. For example for a sequence of positions which is A, B, A, B, A, B, etc. each position may be provided on the beat of the soundtrack. Such synchronisation with a soundtrack may enhance the relaxing effect of the training sequence, and therefore facilitate relaxing of a user’s gaze away from a central focus to a wider field of peripheral vision.
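Beat synchronisation of this kind could be scheduled as follows. The sketch assumes a constant, known tempo (beats per minute); for arbitrary music, beat tracking of the soundtrack would be needed instead.

```python
def beat_times(bpm, n_beats):
    """Times (in seconds) of the first n_beats beats of a soundtrack
    with a constant tempo of bpm beats per minute."""
    interval = 60.0 / bpm
    return [k * interval for k in range(n_beats)]

def schedule(positions, bpm):
    """Pair each stimulus position in the sequence with the beat time at
    which it should be shown, so position changes land on the beat."""
    return list(zip(beat_times(bpm, len(positions)), positions))
```

For example, `schedule(["A", "B", "A", "B"], 120)` places a position change every 0.5 seconds, one per beat at 120 bpm.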
The system may permit the user to select a soundtrack for a training session, e.g. from a music library stored on a mobile device or a music streaming service. The soundtrack could be, for example, binaural beats, nature sounds, music or other soundtrack. The system may be configured to provide a sequence of visual stimuli based on the selected soundtrack, e.g. with slower tempo soundtracks being used for slower paced sequences (where positions and/or qualities of visual stimuli change less often) compared to faster tempo soundtracks which are used for faster paced sequences (where positions and/or qualities of visual stimuli change more often). For example, the soundtrack may form an integral role in the selection of the training ‘programme’ on an interface (e.g. of a mobile app), with the user being able to select a soundtrack e.g. ‘Relaxing Rainforest’ or ‘Upbeat Dance’.
As will be apparent from the above discussion, the technology described herein comprises systems and methods for relaxing a user’s gaze away from a central focus to a wider field of peripheral vision. In addition to relaxing gaze and training peripheral vision, the use of such a system may provide a generally relaxing effect on the user. This is shown, for example, by example brainwave data measured by an EEG device whilst a user is performing a training session in accordance with the present disclosure (in this case, the EEG device is a Muse™ 2 headband, and the data is graphed using “Mindmonitor” software). The accompanying graph shows the relative strength of brain waves on the y (vertical) axis, normalised such that the total strength at any point is 1, and time in minutes on the horizontal (x) axis. Generally, the graph shows that after starting a training session, alpha wave activity (associated with a more relaxed state of the user) increases, whilst beta wave activity (associated with a less relaxed state of the user) decreases.
Although the present disclosure has been described with reference to various embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the scope of the invention as set forth in the accompanying claims.

Claims (30)

  1. A system for relaxing gaze and/or training attention to peripheral vision comprising: a head mounted device configured to provide visual stimuli simultaneously to the left and right monocular regions of a user’s peripheral vision.
  2. The system of claim 1, wherein the head mounted device is configured to provide visual stimuli to the left and right monocular regions simultaneously at an equal angular position to the left and right from the centre of a user’s vision.
  3. The system of claim 2, wherein the head mounted device is configured to provide visual stimuli at a plurality of angular positions to the left and right from the centre of a user’s vision.
  4. The system of claim 3, wherein the head mounted device is configured to provide visual stimuli at a plurality of angular positions to the left and right from the centre of a user’s vision as measured along a horizontal plane.
  5. The system of any preceding claim, wherein the head mounted device is configured to provide visual stimuli whilst the user maintains a stationary forward looking eye position.
  6. The system of claim 1 or claim 2, wherein the head mounted device comprises one or more light elements, and is configured to activate the one or more light elements to provide the visual stimuli.
  7. The system of claim 6, comprising a pair of elongate members, wherein the one or more light elements are provided on each elongate member of the pair.
  8. The system of claim 7, wherein the pair of elongate members are formed integrally with or mountable to one or more of: a pair of over-head headphones, a headband, a hat, or a pair of glasses.
  9. The system of any preceding claim, wherein when performing a training session, a spacing between the left and right visual stimuli provided simultaneously increases with increasing time.
  10. The system of any preceding claim, wherein when performing a training session, a spacing between the left and right visual stimuli provided simultaneously increases based on a user response.
  11. The system of claim 9 and/or claim 10, wherein when performing a training session, the head mounted device is configured to perform one or more cycles of providing visual stimuli, wherein in each cycle the visual stimuli are provided at one or more positions within a defined range of one or more positions, wherein the one or more positions at which visual stimuli are provided is permitted to vary between cycles by altering either or both of: a closest and/or a furthest spacing between left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  12. The system of any preceding claim, wherein during a training session, the head-mounted device is configured to vary one or more qualities of the visual stimuli provided, the one or more qualities of the visual stimuli comprising one or more of: colour, intensity, texture, size, shape, or localised motion.
  13. The system of claim 12, wherein the system is configured to set one or more target characteristics of the visual stimuli, the one or more target characteristics comprising a target quality for a visual stimulus or a mismatched quality between visual stimuli provided simultaneously; and wherein during a training session, the system is configured to: provide left and right stimuli having a target characteristic; receive a user input responsive to a user perceiving the target characteristic; determine whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, alter one or more positions at which left and right stimuli are provided by the head-mounted device.
  14. The system of any preceding claim, wherein the system is configured to control one or more positions at which the left and right visual stimuli are provided by the head-mounted device based on an indicated or detected level of relaxation of a user.
  15. The system of any preceding claim, wherein the head-mounted device is configured to change a position at which the left and right visual stimuli are provided in synchronisation with a rhythmic beat of a soundtrack.
  16. A method for relaxing gaze and/or training attention to peripheral vision comprising: providing visual stimuli simultaneously to the left and right monocular regions of a subject’s peripheral vision.
  17. The method of claim 16, comprising providing the visual stimuli by activating one or more visual elements of a training device, preferably wherein the one or more visual elements are one or more light elements of a head mounted device.
  18. The method of claim 17, comprising positioning or configuring the one or more visual elements so as to present the visual stimuli to the left and right monocular regions of a subject’s peripheral vision.
  19. The method of claim 17 or 18, comprising providing the head mounted device as part of or mounting the head mounted device to any of: a hat, a headband, a pair of over-head headphones, or a pair of glasses.
  20. The method of any of claims 16 to 19, comprising providing left and right visual stimuli simultaneously at an equal angular position to the left and right from the centre of the subject’s vision, at a sequence of positions.
  21. The method of claim 20, comprising providing left and right visual stimuli at a sequence of angular positions as measured along a horizontal plane.
  22. The method of any of claims 16 to 21, comprising providing visual stimuli whilst the user maintains a stationary forward looking eye position.
  23. The method of any of claims 16 to 22, comprising during a training session, increasing a spacing between the left and right visual stimuli provided simultaneously based on increasing time.
  24. The method of any of claims 16 to 22, comprising during a training session, increasing a spacing between the left and right visual stimuli provided simultaneously based on a user response.
  25. The method of any of claims 16 to 24, comprising during a training session, performing one or more cycles of providing visual stimuli, wherein in each cycle the visual stimuli are activated at one or more positions within a defined range of one or more positions, and comprising varying the one or more positions at which visual stimuli are provided between cycles by altering either or both of: a closest and/or furthest spacing between left and right visual stimuli in the range of one or more positions for a cycle; and the one or more positions at which visual elements are provided within the range of one or more positions for a cycle.
  26. The method of any of claims 16 to 25, comprising varying one or more qualities of the visual stimuli provided, the one or more qualities of the visual stimuli comprising one or more of: colour, intensity, texture, size, shape, or localised motion.
  27. The method of claim 26, comprising setting one or more target characteristics of the visual stimuli, the one or more target characteristics comprising a target quality for a visual stimulus or a mismatched quality between visual stimuli provided simultaneously; providing left and right visual stimuli having the target characteristic; receiving a user input responsive to a user perceiving the target characteristic; determining whether the user has correctly perceived the target characteristic; and when the user has correctly perceived the target characteristic, altering one or more positions at which left and right stimuli are provided.
  28. The method of any of claims 16 to 27 comprising controlling one or more positions at which the left and right visual stimuli are provided based on an indicated or detected level of relaxation of a user.
  29. The method of any of claims 16 to 28 comprising changing a position at which the left and right visual stimuli are provided in synchronisation with a rhythmic beat of a soundtrack.
  30. A computer program comprising computer software code for performing the method of any one of claims 16 to 18 and 20 to 29, when the program is run on one or more data processors.
PCT/GB2023/052230 2022-08-31 2023-08-30 Head mounted device and methods for training peripheral vision WO2024047338A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GBGB2212691.6A GB202212691D0 (en) 2022-08-31 2022-08-31 Head worn device and methods for training wide peripheral vision
GB2212691.6 2022-08-31
GB2300972.3A GB2622119A (en) 2022-08-31 2023-01-23 Head mounted device and methods for training peripheral vision
GB2300972.3 2023-01-23

Publications (1)

Publication Number Publication Date
WO2024047338A1 2024-03-07


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4315502A (en) * 1979-10-11 1982-02-16 Gorges Denis E Learning-relaxation device
DE19905145A1 (en) * 1999-02-08 2000-08-24 Pohlmann Bernd Michael Device for stimulating eye movements
WO2006111735A2 (en) * 2005-04-20 2006-10-26 Barts And The London Nhs Trust Device for ameliorating symptoms of gait-impaired patient
EP2075035A1 (en) * 2007-12-24 2009-07-01 Peter Carr Photic stimulation for eyes
CN201453585U (en) * 2009-06-01 2010-05-12 黄维克 Eye movement spectacle frame
US20220125299A1 (en) * 2020-10-28 2022-04-28 University Of Miami Vision testing via prediction-based setting of initial stimuli characteristics for user interface locations

Similar Documents

Publication Publication Date Title
US11977677B2 (en) Gesture based user interfaces, apparatuses and systems using eye tracking, head tracking, hand tracking, facial expressions and other user actions
US10799667B2 (en) Methods and systems for modulating stimuli to the brain with biosensors
US10548805B2 (en) Virtual reality apparatus and methods therefor
CN205597906U (en) Wearable physiological detector
US20200201434A1 (en) Bioresponsive virtual reality system and method of operating the same
US11000669B2 (en) Method of virtual reality system and implementing such method
US9872968B2 (en) Biofeedback virtual reality sleep assistant
US8511820B2 (en) Device to measure functions of the eye directly
CN104665788A (en) Wearable physiological detection device
WO2016119665A1 (en) Wearable physiological detection device
TWM553987U (en) Glasses structure and glasses combination having physiological signal capture function
TWI669102B (en) Wearable physiological detection device
CN204839505U (en) Wearing formula physiology detection device
US11724061B2 (en) Multi-modality therapeutic stimulation using virtual objects and gamification
CN104665827A (en) Wearable physiological detection device
US20230296895A1 (en) Methods, apparatus, and articles to enhance brain function via presentation of visual effects in far and/or ultra-far peripheral field
WO2024047338A1 (en) Head mounted device and methods for training peripheral vision
TWI631933B (en) Physiological resonance stimulation method and wearable system using the same
WO2017125081A1 (en) Glasses-type physiological sensing device, glasses structure having physiological signal acquisition function, and glasses combination
CN204839483U (en) Wearing formula physiology detection device
CN204765634U (en) Wearing formula physiology detection device
GB2622119A (en) Head mounted device and methods for training peripheral vision
TWI650105B (en) Wearable physiological detection device
US20190380607A1 (en) Mobile Wearable Device for Measuring Electromagnetic Brain Activity
TW201626950A (en) Wearable electrocardiogram detector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23776433

Country of ref document: EP

Kind code of ref document: A1